
Voter beware: The era of deep fake democracy

As over 50 countries prepare to vote, AI-generated disinformation looms large.

Generative artificial intelligence is increasingly being used in political campaigns worldwide. In Slovakia, just days before the parliamentary election, an AI-generated recording in which a leading candidate's cloned voice appeared to discuss rigging the vote went viral. In the UK, a deep fake ad of Prime Minister Rishi Sunak reached hundreds of thousands of people on Facebook. In the U.S., Democratic voters received robocalls in which a cloned voice of President Joe Biden urged them not to vote in the New Hampshire primary. On TikTok, a network of accounts impersonating media outlets used voice clones, including that of former President Barack Obama. Similar incidents have occurred in India, Nigeria, Sudan, Taiwan, Moldova, South Africa, Bangladesh, and Ethiopia.
The technology for creating these deep fakes has existed for years, but it has recently become far more realistic and accessible. The cost of producing a convincing fake is now negligible, and distinguishing truth from fabrication is increasingly difficult. With more than 50 countries voting in 2024, regulators are rushing to draft legislation to limit the use of AI in creating fake text, audio, and video. But the technology is evolving faster than the regulatory efforts, creating a dangerous vacuum.
[Image: Fake video of UK Prime Minister Rishi Sunak (Photo: YouTube screenshot)]
Not all uses of generative AI are harmful. In India's recent elections, for instance, AI translation tools made Prime Minister Narendra Modi's speeches accessible to voters who speak different languages. But the tools for spreading disinformation pose significant risks, and the key issue lies in the distribution of synthetic content, not necessarily its creation. If used ethically, AI could herald a new era in representative government, but the "if" is critical and not to be underestimated.
The struggle against misinformation
The fear of voters being inundated with misleading information designed to deceive them is pervasive, and governments have begun to respond. The U.S. government has ordered the development of standards for watermarking and labeling synthetic content. In Israel, the police have sought tools to detect video editing and deep fakes. The European Commission has urged the major social networks to label AI-generated content. Twenty technology companies signed an accord in Munich to combat the deceptive use of AI in elections, and OpenAI, Google, and Microsoft have delayed releasing voice-cloning tools over concerns about election-related deep fakes. Academics and industry leaders have also called for government regulation to curb the spread of deep fakes.
However, these measures are often non-binding and lack penalties for non-compliance. As technology advances, the threat grows, with tools to detect AI-generated content becoming less reliable. The companies developing these AI models have little incentive to create detection tools, leaving the field open to manipulation.
TrueMedia.org's efforts
Dr. Oren Etzioni, founding CEO of the Allen Institute for Artificial Intelligence, established TrueMedia.org to help people discern real from fake content. TrueMedia has developed about 15 models to identify AI-created audio, video, and images, achieving around 90% accuracy. However, common sense and source verification remain essential.
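TrueMedia has not published how the outputs of its roughly 15 models are combined, but the general pattern is familiar: run several independent detectors over the same file and aggregate their scores into a single verdict. The Python sketch below is a minimal illustration of that idea; the detector names, scores, and thresholds are assumptions for this example, not TrueMedia's actual system.

```python
from dataclasses import dataclass

# Hypothetical sketch of combining verdicts from several independent
# deepfake detectors, as a service like TrueMedia.org might do.
# Detector names and thresholds are illustrative assumptions.

@dataclass
class DetectorResult:
    name: str          # which detector produced the score
    fake_score: float  # 0.0 = looks genuine, 1.0 = looks synthetic

def aggregate(results: list[DetectorResult], threshold: float = 0.5) -> tuple[str, float]:
    """Average per-model scores and map the result to a human-readable verdict."""
    avg = sum(r.fake_score for r in results) / len(results)
    if avg >= 0.85:
        return "highly likely manipulated", avg
    if avg >= threshold:
        return "suspicious", avg
    return "no manipulation detected", avg

results = [
    DetectorResult("voice-clone-detector", 0.91),
    DetectorResult("face-artifact-detector", 0.78),
    DetectorResult("frequency-analysis", 0.64),
]
verdict, score = aggregate(results)
print(f"{verdict} (mean score {score:.2f})")
```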
The ongoing technological arms race between detection tools and AI companies striving to create indistinguishable synthetic content complicates the issue. The fear of deep fakes can also be weaponized to discredit genuine content, adding another layer of complexity.
[Image: Dr. Oren Etzioni, founder of TrueMedia.org (Photo: Amit Shaal)]
"The system works with about 90% accuracy, so that means it will definitely make mistakes in certain situations," Etzioni explained to Calcalist, saying that despite the importance of developing the tools - the person is at the center: "In the end, it is impossible without common sense and that people also check the sources of information. There is no absolute answer."
Active vs. passive detection
TrueMedia's passive approach, which identifies fakes only after they have been created and shared, often comes too late. A complementary active approach embeds digital "noise" at creation time to mark content as synthetic, but it faces challenges due to conflicting interests: companies like Google and Meta, which own both AI models and social networks, have little motivation to enforce stricter measures that might hinder their business models.
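To see what such an embedded signal might look like, here is a toy Python sketch of the active approach: the generator adds a faint, keyed pseudorandom pattern at creation time, and a verifier who knows the key detects it later by correlation. Production watermarking schemes are far more sophisticated and robust to editing; the key, pattern, and parameters here are illustrative assumptions only.

```python
import numpy as np

# Toy sketch of active watermarking: embed a keyed +/-1 noise pattern
# at creation time, then detect it by correlating with the same pattern.
KEY = 42  # shared secret between generator and verifier (assumption)

def embed_watermark(image: np.ndarray, strength: float = 4.0) -> np.ndarray:
    rng = np.random.default_rng(KEY)
    pattern = rng.choice([-1.0, 1.0], size=image.shape)
    return np.clip(image + strength * pattern, 0, 255)

def detect_watermark(image: np.ndarray) -> float:
    rng = np.random.default_rng(KEY)
    pattern = rng.choice([-1.0, 1.0], size=image.shape)
    # Watermarked images correlate with the keyed pattern (score near
    # the embedding strength); unmarked ones hover near zero.
    return float(np.mean((image - image.mean()) * pattern))

original = np.random.default_rng(7).uniform(0, 255, size=(256, 256))
marked = embed_watermark(original)
print(f"unmarked score: {detect_watermark(original):+.3f}")
print(f"marked score:   {detect_watermark(marked):+.3f}")
```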
While projects like Etzioni's offer some hope, they are limited. OpenAI announced a tool to identify images created by its DALL-E 3 generator but made it available only to a limited group of researchers. Google and Meta have joined efforts to develop a common standard for content provenance and authenticity, but much work remains.
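The core idea of a provenance standard is that the creating tool attaches a signed record of how a file was made, which anyone can later verify against the file itself. The Python sketch below illustrates that flow in miniature; real provenance systems such as C2PA use certificate chains rather than a shared secret, so the HMAC key and the manifest fields here are simplifying assumptions.

```python
import hashlib
import hmac
import json

# Miniature illustration of content provenance: a signed manifest binds
# a claim about the file's origin to a hash of the file's bytes.
# Real standards use certificate chains, not a shared HMAC key.
SIGNING_KEY = b"demo-key-not-real"

def make_manifest(file_bytes: bytes, generator: str) -> dict:
    claim = {
        "generator": generator,                            # creating tool
        "sha256": hashlib.sha256(file_bytes).hexdigest(),  # binds claim to file
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return claim

def verify_manifest(file_bytes: bytes, manifest: dict) -> bool:
    claim = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claim, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    untampered = hmac.compare_digest(expected, manifest["signature"])
    return untampered and claim["sha256"] == hashlib.sha256(file_bytes).hexdigest()

image = b"...synthetic image bytes..."
manifest = make_manifest(image, generator="ExampleImageGen")  # hypothetical tool
print(verify_manifest(image, manifest))         # True: intact and signed
print(verify_manifest(image + b"!", manifest))  # False: file was altered
```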
The rapid advancement of AI technology poses significant challenges for ensuring the integrity of elections worldwide. While efforts are being made to develop detection tools and establish regulations, the pace of technological development outstrips these initiatives. Vigilance, ethical use of AI, and comprehensive regulatory frameworks are crucial to safeguarding democratic processes in the age of deep fakes.