Analysis

The dark side of technology: Misinformation and AI in the US elections
The public already needs an unusually high level of digital literacy to navigate the sea of interests and forgeries online, and now it will have to work even harder
The importance of internet technologies increases with every election cycle in the United States, and they have been key to major victories.
Barack Obama is considered the "email president": his campaign was the first to rely on large-scale analytics to improve fundraising and coordinate volunteers via email. Under his successor, Donald Trump, fake news spread widely and rapidly through bots, while Trump himself, with his dominant presence on Twitter, amplified it on social media. The current election cycle will unfold amid the rise of generative artificial intelligence and the falling cost of creating deceptive synthetic images. The next election may be decided by the party that makes the best use of these technologies.
The campaign will be "full, full of false information that anyone can generate," said former Google CEO Eric Schmidt at a conference in Colorado last week. "Every side, every grassroots group and every politician will use generative AI to do harm to their opponents," he added. "We know this is part of politics now, just like Facebook and social media platforms are a part of politics," said President Biden's former chief of staff, Ron Klain, at the same conference, emphasizing the need for political actors to change their attitude.
There is already no shortage of examples. In Toronto, Conservative mayoral candidate Anthony Furey released AI-generated images depicting scenes of homelessness that do not exist in the city. In Chicago, another candidate claimed that artificial intelligence had been used to clone his voice and launch a false campaign against him. Former President Trump himself circulated an ad that spoofed the voices of Elon Musk and his rival Ron DeSantis to mock the two. DeSantis, for his part, circulated a fake ad showing Trump hugging Dr. Anthony Fauci, the former director of the National Institute of Allergy and Infectious Diseases, while the GOP released a synthetic video depicting life if Biden is re-elected.
The accumulating evidence has prompted responses from every possible direction. In a hearing before Congress last May, OpenAI CEO Sam Altman expressed concern about the ability of the technology his company has developed to manipulate, persuade, and spread misinformation. Altman, like Schmidt, is a key player in the development of these technologies and can impose restrictions on the products he distributes to protect democracy.
The American Association of Political Consultants (AAPC) also condemned the use of generative artificial intelligence, while Darrell M. West, a senior fellow at the Brookings Institution, wrote a report on the impact of artificial intelligence on the 2024 election and warned that "Through templates that are easy and inexpensive to use, we are going to face a Wild West of campaign claims and counter-claims, with limited ability to distinguish fake from real material and uncertainty regarding how these appeals will affect the election.”
Generative AI can quickly produce targeted emails, texts, or videos for political campaigns. It can also be used to create fake videos that put words in political opponents' mouths, fabricate a confession to a crime, or encourage people to vote on the wrong day. There is also a high chance that these technologies will be used mainly, and most aggressively, against vulnerable populations such as women, Black people, and members of the LGBTQ community. In a study of the 2020 congressional elections, the Center for Democracy and Technology found that Black women were more likely than others to be targeted by online disinformation campaigns.
The ability to blur the line between real and fake is not new, but current tools allow problematic material to be created with unprecedented speed and ease, in what has been called the "democratization of disinformation": no special technological skill is required to create fake political content. And all this before considering that the technology may be abused not only by legitimate political actors but also by foreign agents and hostile states.
The public already needs an unusually high level of digital literacy to navigate the sea of interests and forgeries online, and now it will have to work even harder. The question of how far this explosive combination of machines and democracy can go remains open.
In recent years, against the background of the spread of fake news, there have been calls to regulate the issue, though little progress has been made. Twitter at one point announced that it would ban political ads on the platform, a ban that Musk lifted when he bought the social network. Facebook has never given up such a source of cash flow, but the social network does restrict new political ads in the week before an election. It is therefore not surprising that there are still no rules for the current situation, such as a requirement to explicitly label content created by artificial intelligence, whether textual, visual, or otherwise.
The European Union is an exception: legislation on the subject, first proposed in 2021, is already at an advanced stage. Among other things, it would require systems like ChatGPT to disclose and label AI-generated content, add markers to help distinguish synthetic images from real ones, and build in safeguards against illegal content. It would also create greater transparency around the content of political campaigns, subjecting it to regulatory and public scrutiny and imposing greater responsibility on the social networks that serve as platforms for its dissemination.