AI auto-complete may subtly shape views on social issues

Suggestions from AI chatbots can nudge people’s views — even when users ignore them

People are increasingly turning to chatbots for writing help. But AI may also change how people think through an issue.

Using AI to auto-complete written communications may be tempting. But large language models may also auto-complete thoughts, researchers report March 11 in Science Advances.

Few people realize that generative AI chatbots are pushing them to think a certain way, says information scientist Mor Naaman of Cornell University. “It’s the subtlest of manipulations.”

Such manipulation may not matter much when letting AI agents such as ChatGPT and Claude auto-complete a banal email. But when people use an AI’s auto-complete function to opine on weightier societal matters, such as the three issues explored in the study (whether standardized testing should be used in education, whether the death penalty should be illegal and whether felons should be allowed to vote), the model’s bias can have a significant societal impact. Large swaths of people using the same biased model could sway an entire population’s position on a given policy or politician. To flip a single election’s outcome, “you only need 20,000 people in Pennsylvania,” Naaman says.

He and his team surveyed over 2,500 participants across two experiments to find out how an AI’s auto-complete feature might influence their thinking on societal issues. Participants wrote short essays explaining their stance on a given issue, with some individuals writing the essays without assistance and others receiving AI suggestions.

The researchers also coached the AI to be biased in a given direction. For instance, one essay prompt read “Should the death penalty be illegal?” A participant began their response with “In my view,” and the AI auto-completed that sentence with “the death penalty should be illegal in America because it violates the Eighth Amendment, which prohibits cruel and unusual punishment.”

Afterward, participants were asked to rate their stance on the issue they wrote about on a scale from 1, for no, to 5, for yes; a 3 signaled “not sure.” Participants exposed to the biased AI, including those who did not accept any of the AI’s suggestions in their writing, moved almost half a point closer to the AI’s position than those without such exposure. Yet roughly three-quarters of participants receiving AI support said the model’s suggestions were “reasonable and balanced.”

How to inoculate people against covert AI manipulation remains unclear. Many models include disclaimers, such as “ChatGPT can make mistakes. Check important info.” But even when Naaman and his team tested a similar disclaimer, people remained strikingly susceptible to the study AI’s persuasive power.

“[AI] can have the effect of homogenizing our words and creativity, but also our thoughts,” Naaman says. Given that risk, he only turns to AI for help after writing down his own thoughts. That way, he says, “at least I know that the seed [of the idea] is mine.”

Sujata Gupta is the social sciences writer and is based in Burlington, Vt.