Why AI could be a big problem for the 2024 presidential election
A DYSTOPIAN WORLD fills the frame of the 32-second video. China’s armed forces invade Taiwan. The action cuts to shuttered storefronts after a catastrophic banking collapse and San Francisco in a military lockdown. “Who’s in charge here? It feels like the train is coming off the tracks,” a narrator says as the clip ends.
Anyone who watched the April ad on YouTube could be forgiven for seeing echoes of current events in the scenes. But the spliced news broadcasts and other footage came with a small disclaimer in the top-left corner: "Built entirely with AI imagery." Not dramatized or enhanced with special effects, but generated outright by artificial intelligence.
The ad spot, produced by the Republican National Committee in response to President Joe Biden's reelection bid, was an omen. Ahead of the next American presidential election, in 2024, AI is storming into a political arena still warped by the foreign online interference campaigns of 2016 and 2020.
Experts believe its influence will only worsen as voting draws near. “We are witnessing a pivotal moment where the adversaries of democracy possess the capability to unleash a technological nuclear explosion,” says Oren Etzioni, the former CEO of and current advisor to the nonprofit AI2, a US-based research institute focusing on AI and its implications. “Their weapons of choice are misinformation and disinformation, wielded with unparalleled intensity to shape and sway the electorate like never before.”
Regulatory bodies have begun to worry too. Although both major US parties have embraced AI in their campaigns, Congress has held several hearings on the tech's uses and its potential oversight. This summer, as part of a crackdown on Russian disinformation, the European Union asked Meta and Google to label content made by AI. In July, those two companies, plus Microsoft, Amazon, and others, agreed to the White House's voluntary guardrails, which include flagging media produced in the same way.
It’s possible to defend oneself against misinformation (inaccurate or misleading claims) and targeted disinformation (malicious and objectively false claims designed to deceive). Voters should consider moving away from social media to traditional, trusted sources for information on candidates during the election season. Using sites such as FactCheck.org will help counter some of the strongest distortion tools. But to truly bust a myth, it’s important to understand who—or what—is creating the fables.
A trickle to a geyser
As misinformation from past election seasons shows, political interference campaigns thrive at scale—which is why the volume and speed of AI-fueled creation worries experts. OpenAI’s ChatGPT and similar services have made generating written content easier than ever. These software tools can create ad scripts as well as bogus news stories and opinions that pull from seemingly legitimate sources.
“We’ve lowered the barriers of entry to basically everybody,” says Darrell M. West, a senior fellow at the Brookings Institution who writes regularly about the impacts of AI on governance. “It used to be that to use sophisticated AI tools, you had to have a technical background.” Now anyone with an internet connection can use the technology to generate or disseminate text and images. “We put a Ferrari in the hands of people who might be used to driving a Subaru,” West adds.
Political campaigns have used AI since at least the 2020 election cycle to identify fundraising audiences and support get-out-the-vote efforts. An increasing concern is that more advanced iterations could also be used to automate robocalls, with a synthetic impersonation of the candidate supposedly on the other end of the line.
At a US congressional hearing in May, Sen. Richard Blumenthal of Connecticut played an audio deepfake his office made—using a script written by ChatGPT and audio clips from his public speeches—to illustrate AI’s efficacy and argue that it should not go unregulated.
At that same hearing, OpenAI’s own CEO, Sam Altman, said misinformation and targeted disinformation, aimed at manipulating voters, were what alarmed him most about AI. “We’re going to face an election next year and these models are getting better,” Altman said, agreeing that Congress should institute rules for the industry.
Monetizing bots and manipulation
AI may appeal to campaign managers because it's cheap labor: with it, virtually anyone can be a content writer. The models themselves were built on cheap labor too, as when OpenAI relied on underpaid workers in Kenya to help train them. The creators of ChatGPT wrote in 2019 that they worried about the technology lowering the "costs of disinformation campaigns" and supporting "monetary gain, a particular political agenda, and/or a desire to create chaos or confusion," though that didn't stop them from releasing the software.
Algorithm-trained systems can also assist in the spread of disinformation, helping code bots that bombard voters with messages. Though the AI programming method is relatively new, the technique as a whole is not: A third of pro-Trump Twitter traffic during the first presidential debate of 2016 was generated by bots, according to an Oxford University study from that year. A similar tactic was also used days before the 2017 French presidential election, with social media imposters “leaking” false reports about Emmanuel Macron.
Such fictitious reports could include fake videos of candidates committing crimes or making made-up statements. In response to the recent RNC political ad against Biden, Sam Cornale, the Democratic National Committee’s executive director, wrote on X (formerly Twitter) that reaching for AI tools was partly a consequence of the decimation of the Republican “operative class.” But the DNC has also sought to develop AI tools to support its candidates, primarily for writing fundraising messages tailored to voters by demographic.
The fault in our software
Both sides of the aisle are poised to benefit from AI—and abuse it—in the coming election, continuing a tradition of political propaganda and smear campaigns that can be traced back to at least the 16th century and the “pamphlet wars.” But experts believe that modern dissemination strategies, if left unchecked, are particularly dangerous and can hasten the demise of representative governance and fair elections free from intimidation.
“What I worry about is that the lessons we learned from other technologies aren’t going to be integrated into the way AI is developed,” says Alice E. Marwick, a principal investigator at the Center for Information, Technology, and Public Life at the University of North Carolina at Chapel Hill.
AI often has biases—especially against marginalized genders and people of color—that can echo the mainstream political talking points that already alienate those communities. AI developers could learn from the ways humans misuse their tools to sway elections and then use those lessons to build algorithms that can be held in check. Or they could create algorithmic tools to verify and fight the false-info generators. OpenAI predicted the fallout. The company may also have the capacity to lessen it.