Chatbots spewing facts, and falsehoods, can sway voters
AIs are equally persuasive when they’re telling the truth or lying
People conversing with chatbots about politics find those that dole out facts more persuasive than other bots, such as those that tell good stories. But these informative bots are also prone to lying.
Laundry-listing facts rarely changes hearts and minds – unless a bot is doing the persuading.
Briefly chatting with an AI moved potential voters in three countries toward their less preferred candidate, researchers report December 4 in Nature. That finding held true even in the lead-up to the contentious 2024 presidential election between Donald Trump and Kamala Harris, with pro-Trump bots pushing Harris voters in his direction, and vice versa.
The most persuasive bots don’t need to tell the best story or cater to a person’s individual beliefs, researchers report in a related paper in Science. Instead, they simply dole out the most information. But those bloviating bots also dole out the most misinformation.
“It’s not like lies are more compelling than truth,” says computational social scientist David Rand of MIT, an author on both papers. “If you need a million facts, you eventually are going to run out of good ones and so, to fill your fact quota, you’re going to have to put in some not-so-good ones.”
Problematically, right-leaning bots are more prone to delivering such misinformation than left-leaning bots. These politically biased yet persuasive fabrications pose “a fundamental threat to the legitimacy of democratic governance,” writes Lisa Argyle, a computational social scientist at Purdue University in West Lafayette, Ind., in a Science commentary on the studies.
For the Nature study, Rand and his team recruited over 2,300 U.S. participants in late summer 2024. Participants rated their support for Trump or Harris on a 100-point scale before conversing for roughly six minutes with a chatbot stumping for one of the candidates. Conversing with a bot that supported one’s views had little effect. But Harris voters chatting with a pro-Trump bot moved almost four points, on average, in his direction. Similarly, Trump voters conversing with a pro-Harris bot moved an average of about 2.3 points in her direction. When the researchers re-surveyed participants a month later, those effects were weaker but still evident.
The chatbots seldom moved the needle enough to change how people planned to vote. “[The bot] shifts how warmly you feel” about an opposing candidate, Argyle says. “It doesn’t change your view of your own candidate.”
But persuasive bots could tip elections in contexts where people haven’t yet made up their minds, the findings suggest. For instance, the researchers repeated the experiment with 1,530 Canadians and 2,118 Poles prior to their countries’ 2025 federal elections. This time, a bot stumping for a person’s less favored candidate moved participants’ opinions roughly 10 points in that candidate’s direction.
For the Science paper, the researchers recruited almost 77,000 participants in the United Kingdom and had them chat with 19 different AI models about more than 700 issues to see what makes chatbots so persuasive.
AI models trained on larger amounts of data were slightly more persuasive than those trained on smaller amounts, the team found. But the biggest boost in persuasiveness came from prompting the AIs to stuff their arguments with facts. A basic prompt telling the bot to be as persuasive as possible moved people’s opinions by about 8.3 percentage points, while a prompt telling the bot to present lots of high-quality facts, evidence and information shifted opinions by almost 11 percentage points, making the fact-heavy approach 27 percent more persuasive.
Training the chatbots on the most persuasive, largely fact-riddled exchanges made them even more persuasive in subsequent dialogues with participants.
But that prompting and training compromised the accuracy of the information. For instance, GPT-4o’s accuracy dropped from roughly 80 percent to 60 percent when it was prompted to deliver facts over other tactics, such as storytelling or appealing to users’ morals.
Why regurgitating facts makes chatbots, but not humans, more persuasive remains an open question, says Jillian Fisher, an AI and society expert at the University of Washington in Seattle. She suspects that people perceive humans as more fallible than machines. Promisingly, her research, reported in July at the annual Association for Computational Linguistics meeting in Vienna, Austria, suggests that users who are more familiar with how AI models work are less susceptible to their persuasive powers. “Possibly knowing that [a bot] does make mistakes, maybe that would be a way to protect ourselves,” she says.
With AI exploding in popularity, helping people recognize how these machines can both persuade and misinform is vital for societal health, she and others say. Yet, unlike the scenarios depicted in experimental setups, bots’ persuasive tactics are often implicit and harder to spot. Instead of asking a bot how to vote, a person might just ask a more banal question, and still be steered toward politics, says Jacob Teeny, a persuasion psychology expert at Northwestern University in Evanston, Ill. “Maybe they’re asking about dinner and the chatbot says, ‘Hey, that’s Kamala Harris’ favorite dinner.’”