AI deepfakes fail to disrupt elections in 2024
Despite fears, studies reveal minimal impact of AI-generated disinformation on voter outcomes in key global elections.
A year ago, just before the New Hampshire primary, voters received a phone call from U.S. President Joe Biden—or so it seemed. In reality, it was an automated call that used artificial intelligence (AI) to create a near-perfect imitation of his voice—a deepfake.
The rise of generative artificial intelligence (GenAI) and the growing accessibility of other AI tools in recent years sparked fears that the 2024 U.S. presidential election and the British general election, along with others around the world, would be flooded with deepfake voices, images, and videos. Many predicted this would blur the line between reality and fiction, truth and lies.
However, while AI-generated content has appeared in election campaigns across the United States, the United Kingdom, France, and the European Union, studies and reports published recently indicate that these fears have not materialized. Fraudulent AI content has played a marginal role at best.
A study by the Alan Turing Institute’s Centre for Emerging Technology and Security (CETaS) identified just 27 AI-generated disinformation campaigns related to the UK, French, and European Parliament elections. The study found no evidence that these campaigns significantly influenced election outcomes, as their exposure was limited to a small group of users whose political beliefs already aligned with the narratives being promoted. Similarly, a CETaS study of the U.S. elections concluded that AI-based disinformation had no measurable impact on results and mainly reinforced pre-existing opinions.
A survey conducted in the UK by researchers from the University of Oxford and the Turing Institute found that only 15% of respondents reported exposure to harmful deepfake content, including political propaganda, pornographic materials, and scams. Just 5.7% of Britons encountered political deepfakes. Despite this, public awareness and concern were high, with 94.3% of respondents saying they were very or somewhat concerned about the issue. This combination of high awareness and low circulation likely moderated the impact of AI-generated content on electoral systems.
In the U.S., the News Literacy Project cataloged 945 fake content items shared during the election, of which only 52 (5.5%) were created using GenAI systems. Reports by Microsoft, Meta, Google, and OpenAI on U.S. influence campaigns also revealed limited distribution of AI-generated content and minimal foreign interference from countries like Russia, China, and Iran.
An analysis by the Financial Times of mentions of the terms "deepfake" and "AI-generated" in Community Notes on X (formerly Twitter) found that such mentions were more closely associated with the launch of new AI models than with major elections. The share of notes mentioning these terms peaked at 7.5% around the launch of GPT-4 and 4% around the release of Elon Musk’s Grok, compared with just over 1.5% during the UK election on July 4, 2024, and about 2% on the day of the U.S. election in November.
Similar patterns emerged outside the West. In Bangladesh’s elections last January, only 2% of misinformation was deepfake content. Researchers from the University of Texas at Austin observed a "notable absence of AI-generated content" during South Africa’s elections in May.
Much of the AI-generated content during these elections was described as "very crude," often containing identifying marks or logos from the tools used to create it. This suggests it was produced by amateurs rather than professional campaigns. Examples included a deepfake image of Democratic presidential candidate Kamala Harris shaking hands with Stalin under a Soviet flag, a fake video of her addressing a communist rally (both circulated by Trump supporters), an AI-generated image of a girl eating cockroach pizza (intended as a protest against the European Union’s alleged support for insect-based diets), and videos resurrecting deceased politicians for campaigns in Indonesia and India.
A Purdue University study cataloged 1,526 pieces of AI-generated content, finding that 36.4% were created for satire or entertainment purposes, while only 24.2% were aimed at disinformation or political manipulation.
On the other side of the equation, attempts to discredit real content by falsely claiming it was AI-generated have yielded more troubling findings. Researchers from the Institute for Strategic Dialogue (ISD) analyzed 300 posts from X, YouTube, and Reddit between August and October that discussed AI in the context of the U.S. election. They found that in 52% of cases, users misjudged whether content was authentic or AI-generated, often labeling real content as AI-generated and justifying their errors with flawed reasoning or unreliable detection tools.
A September 2024 Pew Research Center survey revealed that 52% of Americans struggled to distinguish fact from fiction in election-related news—a slight improvement from October 2020, when the figure was 55%.
These findings suggest that while the prevalence and impact of AI-generated content in electoral campaigns remain negligible, the mere awareness of its potential existence has significant implications. "Confusion surrounding AI-based content is real and undermines trust in online sources," CETaS researchers wrote. They noted that AI content has broader implications for democracy, including promoting hate speech against political figures and potentially endangering their safety. Additionally, they highlighted concerns about politicians using AI in campaigns without clear disclosure, which could encourage unethical practices in the future.