Election deception 2.0: AI and misinformation redefine US campaign landscape
As AI deepfakes and misinformation flood social media, the battle over truth takes center stage in the 2024 U.S. election.
In 2016, a precedent-setting event took place in world politics. Russia exploited Facebook's dominance to interfere with the U.S. presidential election through fake accounts and posts designed to sow division and mistrust. When this interference was revealed—along with Facebook's failure to address it in real time—it sparked a national scandal, resulting in congressional hearings and policy changes for social media companies.
Eight years later, those 2016 efforts look almost quaint compared to the tactics in play today. Divisive, poorly spelled posts have given way to sophisticated influence operations producing and distributing content at massive scale, often using artificial intelligence to create deepfake text, images, and video. Russia, Iran, and China have all adopted these tactics, but the real danger may lie closer to home. In a twist of irony, one of the chief purveyors of misinformation is Donald Trump himself. His platform of choice, X (formerly Twitter), has become a breeding ground for lies, bolstered by the support of its owner, Elon Musk. Amid all this, countless users share false content: some out of confusion, some intending to deceive, and others simply to sow chaos.
Targeting audiences with precision
In the current election cycle, fake news, deepfakes, and conspiracies have taken center stage. Their role is likely to become even more significant after November 5, when battles over vote counts in swing states commence. Shaping public perception of the election results could be pivotal in determining who ultimately claims the White House.
Distortion of reality has become a driving force in political discourse. At rallies and in interviews, Trump and his supporters regularly disseminate misinformation, compelling news organizations to allocate substantial resources to debunking their claims. This misinformation extends to social media, which has become a production line for fake news. Recent examples include AI-generated photos of Taylor Swift fans purportedly supporting Trump, a deepfake video from Russia showing Kamala Harris making inflammatory remarks, and fake accounts posing as American students to incite campus unrest.
The reach of this content is amplified by a well-organized network across social media platforms, Telegram, and fake news sites, supported by coordinated efforts from Russia, China, and Iran. Generative AI enables the rapid creation of fake content tailored to specific audiences (such as an Iranian-made fake news site targeting Arab-Americans in Michigan). Meanwhile, major tech companies have scaled back their efforts to combat fake news, with X even actively aiding in its spread, giving malicious actors greater freedom to operate.
In August, a stark example showed how AI distorts reality. After Vice President Harris landed in Detroit, her campaign shared a photo of a crowd of 15,000 supporters. Trump's followers, including Trump himself, alleged the image was AI-generated. The claim persisted despite evidence from multiple independent sources confirming the crowd's size. The mere existence of AI has thus bred widespread doubt, giving people license to dismiss even verified events as fabrications.
“Our own facts and data”
These tactics will likely peak not on November 5, but in the days that follow. This is expected to be one of the closest U.S. elections, possibly decided by a few thousand votes across a handful of states. The true contest will be over the vote count, certification of electors, and ensuing legal battles. In this struggle, public perception of who won could matter as much as the actual vote count.
A pivotal moment in the shaping of today’s approach to fake news came on January 22, 2017. Following exaggerated claims about the attendance at Trump’s inauguration, Kellyanne Conway, a senior advisor to Trump, famously defended the false numbers as “alternative facts.” This concept quickly became a cornerstone of political rhetoric on the American right, underpinning an argument that everyone is entitled to their own “facts and data.”
Today, this concept has taken on a new dimension. With generative AI, those seeking to manipulate reality no longer need to rely on "alternative facts"; they can manufacture credible, AI-generated evidence to support their narratives. It is a world where people can inhabit separate realities, each bolstered by images, audio clips, and videos nearly indistinguishable from genuine proof, leaving little means of telling one reality from another.
If Trump wins on November 5, this trend may only accelerate, with the mechanisms for creating alternate realities potentially directed straight from the White House.