Social Media Companies and Misinformation

For many people, social media has become a vital way to encounter news and information. Content on these platforms spreads fast and casts a wide net across the world. But with virality being the main way information reaches people en masse on these platforms, bad actors are bound to appear. We will be looking at the attempts of two major social media companies to curb misinformation and how effective those attempts were, or are.

To begin, we will start with X, formerly known as Twitter. X has undergone the largest change of any major social media platform after billionaire Elon Musk bought out the company and massively restructured its leadership.

In 2019, Twitter took action to curb misinformation during the campaigning for the 2020 presidential election. Jack Dorsey, the CEO of Twitter at the time, said the company was “protecting the health of the public conversation ahead of the 2020 U.S. elections.” The effort seemed to work very well, purging many pages spreading misinformation during this period. Twitter did something similar during the COVID-19 pandemic, though those measures would be rolled back after Musk’s acquisition of the platform.

The purpose of these policies was to flag misinformation through both warning labels and manual moderation, as most social media platforms do. In the two examples above, the company looked for potentially damaging information and purged the accounts responsible for spreading it. The effectiveness of these policies prior to the Musk era of X is hard to dispute; by contrast, most media outlets and users now view X as a place where misinformation can fester, especially since Musk took a very “anti-woke” stance after his purchase of the platform. Musk oversaw the mass unbanning of many right-wing users who spread misinformation, as well as numerous accounts with ties to white supremacy and antisemitism.

Jack Dorsey, former Twitter CEO. Credit: Biography.com

Twitter’s attempts to curb misinformation were not unique, however. Many of the other major players in the social media space made moves to address this issue around the same time. Around the 2020 presidential election, Facebook also took similar steps, cracking down mainly on COVID-19 misinformation. Like X, Facebook would roll back these measures as of 2023.

Mark Zuckerberg, CEO of META. Credit: CNN

Because of these anti-misinformation moderation policies, the volume of vaccine misinformation actually went down, showing some form of efficacy. However, engagement with this content did not necessarily fall on the few pages that remained. Outright banning content is an effective way to de-platform harmful ideas, but doing so requires more blanket bans, which can be difficult because they raise privacy concerns around personal accounts. The experiences of X and Facebook show that being more stringent on content policy helps stop misinformation from spreading like wildfire, but it also puts users’ privacy concerns on the platforms’ plate.

Not only that, but now that the dust has settled on the turbulence of that election cycle and the pandemic, these companies see a chance to roll back these moderation policies to keep users engaged and appeal to right-wing users, who make up 47% of Facebook’s user base. So while these platforms had the right idea, they have walked it back for their own corporate benefit, showing that the policies existed because of public backlash rather than genuine concern for public knowledge and well-being.

Mark Zuckerberg testifying to Congress, 2018. Credit: Guardian News

While X has been a little better in the post-COVID era with community moderation tools such as Community Notes, which let users fact-check posts and leave a note for other users about a post’s legitimacy, both X and Facebook are struggling to rein in the vast pool of information and seemingly have no current interest in doing so.

What can be done? Without federal regulation of some sort forcing these media companies to adopt stronger moderation, all that remains is a Wild West-style form of community moderation, which is unworkable for platforms as large as these. What has to be decided is whether users are willing to give up more privacy in exchange for protection against the harmful ideologies and information presented on these platforms. If not, these companies need to massively improve their moderation. They should not moderate only when there is a public outcry; if they really care about the community and the safety of their sites, there has to be some consistency in policy.
