Chatbots like Bard, Claude, Pi and ChatGPT can spin up a range of campaign material from text messages to TikTok videos, but AI leaders have expressed concern over the technology’s potential to manipulate voters.
Conversational AI bots like ChatGPT and its ilk have begun telling us how to live and work, advising people on which medicines to take, how to file taxes and where to go for their next trip. But what happens when political campaigns begin using them to shape our opinions?
That’s a question top of mind for many as the AI gold rush careens headfirst into a political climate still fraught from an unprecedented attempt to overturn the 2020 presidential election. And it’s at the forefront of discussion among experts and AI leaders even as AI tools are already being tested or deployed by political campaigns — whether it’s Democrats using them to draft fundraising emails, or GOP candidate Ron DeSantis using AI to produce deepfakes of political opponent Donald Trump.
“I think it’s pretty dangerous if we start to have AIs campaigning and persuading and having conversations with people about who to vote for,” said Inflection CEO Mustafa Suleyman, speaking at the Wall Street Journal Tech Live event last week. “I would like to take that off the table. We’re certainly not going to do that. I think other companies shouldn’t either.”
Suleyman said he’s in talks with other leading AI companies to come to a consensus on restricting AI products from creating content or having conversations that would influence people’s voting decisions (Suleyman wouldn’t say which companies he was working with; OpenAI, Google, Microsoft and Anthropic did not respond to Forbes’ request for comment on any collaboration).
But while Inflection’s chatbot Pi, OpenAI’s ChatGPT, Anthropic’s Claude 2.0, Microsoft’s Bing and Google’s Bard do the bare minimum to avoid political influence — none of them will tell you who to vote for, and they currently won’t predict the result of the 2024 presidential election — all of them produced a range of targeted political campaign material when prompted, including text messages, campaign speeches, social media posts, political slogans and ideas for promotional TikTok videos. For instance, at Forbes’ prompting, Pi wrote the following text message aimed at convincing Gen Z’ers to vote for Joe Biden in the 2024 presidential election.
“Hey! Don’t sleep on Biden! He’s an OG progressive who’s been fighting for justice and equality for decades… Plus he’s way cooler than you think — he loves aviators, ice cream and classic rock.”
Inflection declined to answer questions about how it plans to treat campaign content created using Pi.
Bard and ChatGPT also spun out detailed scripts for negative political advertisements, describing ideas for narration, video and imagery to use across ads. In an ad campaign against the Democratic Party generated by Bard at Forbes’ prompting, the narrator’s script reads: “The Democrats are out of touch with our values…They support open borders, which allows criminals and drugs into our country. They support radical gender ideology, which is confusing our children. The Democrats are dangerous. They’re a threat to our way of life.”
Many of the major AI companies have already created policies limiting use of their AI technology for political ends. Anthropic’s policies, for instance, don’t allow users to use Claude for any type of political lobbying. Google has said that it will require disclosures for AI-generated election ads, but Bard itself does not explicitly prohibit users from creating any type of political content.
Kim Malfacini, who works on product policy at OpenAI, has said that the company prohibits users from the “scaled use” of its technologies to create political campaigns and bans political campaigns from using ChatGPT to create content that targets certain voter demographics. But those restrictions aren’t enforced within ChatGPT itself, so when prompted, it produced a message intended to convince a single mother in Cleveland to vote for Elizabeth Warren.
Such microtargeting could become widespread, according to Darrell M. West, a senior fellow at the Center for Technology Innovation at the think tank Brookings. Misinformation is also a concern: “Generative AI can develop messages aimed at those upset with immigration, the economy, abortion policy, critical race theory, transgender issues, or the Ukraine war,” he wrote in a recent article. “It can also create messages that take advantage of social and political discontent, and use AI as a major engagement and persuasion tool.”
OpenAI CTO Mira Murati cautioned that generative AI could be used to persuade people. “It’s not just about truthfulness and what’s real and what’s not real,” she said, speaking at the Wall Street Journal Tech Live event. “I think in the world that we’re going towards, the bigger risk is individualized persuasion and that’s going to be a tricky problem to deal with.”
Even providing real-time information to users about who candidates are and what policies they are pushing is a challenge for chatbots. Suleyman said that he has decided to move away from providing any information about candidates at all because of the bots’ tendency to “hallucinate,” or make up things that sound like they could be real but are not factually correct.
“Our goal isn’t to provide that public service as it is highly contentious, and we may get it wrong. And so I think the sensible thing to do is step back from it,” said Suleyman, the founder of the $4 billion AI startup, which is backed by Microsoft, Nvidia and former Google CEO Eric Schmidt.
However, when Forbes asked Pi to provide a brief description of candidates running for the 2024 presidential election, it complied, ending its response with a question: “Can I ask, who do you typically vote for — Republican, Democrat or neither?” Pi then listed the key policy proposals for each political candidate, but left out mention of newer candidates like Marianne Williamson and Vivek Ramaswamy, among others. When asked further, the chatbot provided more information about Ramaswamy, adding that he is an “intriguing candidate” and that “his outsider status could make him an appealing choice.” Inflection declined to answer questions about when it will stop providing information about political candidates.
Other companies have taken a different tack, declining to provide a comprehensive, up-to-date list of candidates running for the election, or relying solely on news sources: Microsoft spokesperson Aaron Hellerstein said Bing will continue to answer questions about the 2024 election, citing information from top search results. OpenAI, Google and Anthropic didn’t respond to questions about whether they plan to follow Inflection in declining to provide information about political candidates.