Misinformation rabbit holes serve further to polarize voters and delegitimize the election process. Photograph: NurPhoto/Getty Images

‘Fundamentally dangerous’: reversal of social media guardrails could prove disastrous for 2024 elections


Scaling back of moderation and rise of AI are creating the perfect storm to weaken elections and democracy

Increasing misinformation on social media, platforms scaling back content moderation and the rise of AI are converging to create a perfect storm for the 2024 elections that some experts warn could put democracy at risk.

YouTube this week reversed its election integrity policy, allowing content contesting the validity of the 2020 elections to remain on the platform. Meta, meanwhile, reinstated the Instagram account of misinformation super-spreader Robert F Kennedy Jr and will allow Donald Trump to post again imminently. Twitter has also allowed Trump to return, and has generally seen a rise in the spread of misinformation since billionaire Elon Musk took over the platform last year.

These trends may prove disastrous for the 2024 elections, and for the health of democracy at large, said Imran Ahmed, chief executive officer of the Center for Countering Digital Hate (CCDH), a non-profit that fights misinformation.

“This is fundamentally dangerous,” he said. “American democracy itself cannot survive wave after wave of disinformation that seeks to undermine democracy, consensus and further polarizes the public.”

YouTube this week reversed a policy banning content that casts doubt on previous election results. Specifically, the platform will no longer remove content that “advances false claims that widespread fraud, errors, or glitches occurred in the 2020 and other past US Presidential elections”.

The policy was instituted in December 2020, when Trump and his supporters sought to delegitimize the election results – a narrative that culminated in the storming of the US Capitol on 6 January 2021. Under the rules, prominent accounts including that of the rightwing former White House strategist Steve Bannon were banned.

Trump supporters stormed the Capitol on January 6 as they believed the election was ‘stolen’. Photograph: Leah Millis/Reuters

YouTube said in a statement on its decision that leaving the policy in place risked “curtailing political speech without meaningfully reducing the risk of violence or other real-world harm”.

“When we craft our policies, we always keep two goals in mind: protecting our community, and providing a home for open discussion and debate,” it said in a blogpost announcing the decision. “These goals are sometimes in tension with each other, and there is perhaps no area where striking a balance is more complex than political speech.”

While it said it had “carefully deliberated this change”, the company did not share data or further details on how effective the policy had been in reducing harm. It said it would provide more details about its policies around the 2024 elections in the coming months.

Experts said YouTube’s move highlighted the need for more transparency around moderation decisions ahead of a critical election. YouTube declined a Guardian request for comment on such criticisms and declined to share additional data on the extent to which the previous policy “curtail[ed] political speech”.

YouTube said in its initial statement that it had removed “tens of thousands” of videos under the policy in the years since it was instituted, which “suggests it had a positive effect”, said Theresa Payton, a cybersecurity expert and former White House chief information officer. “So the question is why did they make this change?” she said. “I would like to see [YouTube] lead the way and lead the conversation around what their data-driven reasoning was behind tweaking this policy. Transparency is definitely a friend to democracy.”

Election disinformation is particularly harmful on YouTube as its algorithms often suggest related videos to users, further skewing their views. One report found users already skeptical of election results were served three times as many election denial videos as those who were not. YouTube declined to comment on this study.

Such misinformation rabbit holes serve to further polarize voters and delegitimize the election process, said Ahmed of the CCDH, adding that if social media platforms’ enforcement actions are removed, then “the danger will reappear very, very quickly”.

“The real threat now is that we’re going to have an entire electoral cycle dominated by a debate over the legitimacy of elections, leading to a significant and disastrous erosion of the confidence people have in the electoral process,” he said. “Democracy is consensus based and the most important tenet that underpins our democracy is that we accept the results.”

YouTube’s argument for “open discussion” has come to sound familiar in recent months. It is the centerpiece of the reasoning offered by new Twitter boss Elon Musk, a self-described “free speech absolutist”, for reinstating previously banned accounts. And in allowing Trump to return to Meta platforms, the company’s head of global affairs, Nick Clegg, reasoned that “the public should be able to hear what politicians are saying so they can make informed choices”. Meta spokesperson Andy Stone said Kennedy was reinstated because he is “an active candidate for president of the United States”.

'So you won't take down lies?': Alexandria Ocasio-Cortez challenges Facebook CEO – video

Meta has also long held a policy that exempts political advertisements from its misinformation policies – one that was targeted pointedly by representative Alexandria Ocasio-Cortez in a 2019 congressional hearing. “So, you won’t take down lies or you will take down lies? I think that’s just a pretty simple yes or no,” she asked Mark Zuckerberg.

Platforms have long argued that constituents have a right to hear directly from candidates for office. But anti-misinformation advocates say the lack of enthusiasm for containing harmful political speech is also driven by profit. Former Facebook employee turned whistleblower Frances Haugen testified in 2021 that Meta repeatedly declined to take action against inflammatory misinformation because doing so decreased engagement, and thus advertising revenue, and YouTube has been alleged to run advertisements on misinformation videos with millions of views.

It can lead platforms to want to do “as little as possible to enforce their rules”, said Ahmed. “The economics of this is that every time they take an enforcement action, they reduce potential revenues, because every bit of content is monetizable.”

Artificial intelligence is bringing a fresh layer of alarm for those who have long monitored the misinformation ecosystem. In addition to the concerns present during past elections, doctored images and videos are flooding users’ feeds and undermining their ability to trust what they see online.

“The use of generative AI is only going to make it easier to warp people’s views further,” said Wasim Khaled, chief executive officer and co-founder of misinformation detection tool Blackbird.AI.

Meta has declined to say whether its exemption for misinformation in political ads will extend to manipulated and AI-generated images in the upcoming elections, a silence that concerns political operatives and misinformation watchdogs. Twitter’s policies ban content that has been “significantly and deceptively altered, manipulated, or fabricated ... especially through use of artificial intelligence algorithms”, but the company has not said how that policy applies to political figures. YouTube declined to comment on its policies around AI-generated political ads.

“If we don’t do something now in Silicon Valley, social media platforms and news media as we know it are going to die,” Payton said of the rise of AI-generated misinformation. “Social discourse on every issue is going to be manipulated, and we are going to have people not believing results of elections.”

As concerns mount around the 2024 election cycle, Ahmed called for “a mutual disarmament” agreement on the use of generative AI from both parties. Meanwhile, Democrats have introduced a bill in Congress that would require political ads to disclose the use of artificial intelligence. Experts are also urging platforms to reinstate stricter moderation rules and provide more transparency around changed policies.

“If you are deciding to reinstate a spreader of demonstrably false information or incitement to violence, you owe it to your users to justify those decisions and set those clear red lines,” Ahmed said. “Otherwise, they are just making it up as they go along, and that is fundamentally corrosive because it means that no one really knows the rules.”

The deterioration of the information ecosystem also creates a primed environment for malicious actors, including other countries, to further destabilize the US, said Payton. She warned of potential violence, like that seen in the January 6 Capitol riot, and said the resulting division could leave Americans feeling there is no point in voting at all.

“My concern is that there will be whole groups of people who become so disenfranchised that they don’t vote at all,” she said. “If you think your vote doesn’t matter because of misinformation and disinformation and you don’t vote, democracy dies.”

Asked what measures it is taking to combat misinformation ahead of the 2024 elections, why Trump was allowed to return to the platform, and whether it had any comment on data showing that misinformation has risen under Musk’s leadership, Twitter replied with a poop emoji.

Meta did not respond to a request for comment.
