A common criticism of content generated by large language models (LLMs) is that it is still too easy to identify, lacks consistency, is devoid of style, and is prone to information overload.
However, as with other new technologies like texting and social media, it didn’t take long for Artificial Intelligence to enter politics. Less than two years after OpenAI released ChatGPT, Politico reported on the rapid pace of AI development and its deployment in political campaigns.
My colleague, a University of Chicago-trained data scientist, and I decided to try to answer the following research question using an online survey: can voters accurately distinguish between human- and AI-written copy?
The Survey Results
The research we conducted earlier this year provides evidence that voters may not be able to easily detect AI-generated content.
The survey consisted of ten messages: five written by AI and five written by a human. We lifted the human messages from actual copy used by progressive campaigns, while we generated similar messages using ChatGPT. After randomly displaying each message, we asked voters to choose whether they believed the message had been written by a human, whether it had been written by an LLM, or whether they were unsure of its authorship.
Not Much Better Than Guessing
Surprisingly, in all questions a majority or plurality of respondents believed that the message had been written by a human, even when the opposite was true. Also, self-reported familiarity with machine learning or artificial intelligence did not significantly change the overall outcome.
Voters who reported voting for former President Donald Trump were slightly more likely to correctly identify messages as AI-generated, potentially due to their ideological opposition to the progressive copy used in the survey.
Another characteristic of the results is that, for most questions, the proportion of respondents who flagged the content as AI-generated hovered between 35% and 40%, which suggests that voters were not simply picking an answer at random when in doubt (if they were, we should see a more even split, or more respondents picking the third option, "unsure/don't know"). Rather, it appears that, when in doubt, respondents defaulted to labeling messages as human-written.
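The reasoning above can be illustrated with a quick back-of-the-envelope check. The sketch below, which uses made-up numbers rather than the survey's actual data, asks how likely it would be to see roughly 38% "AI-generated" picks if respondents were guessing uniformly among the three options (a 1-in-3 chance rate):

```python
# Hypothetical illustration only: n and k are invented, not the survey's data.
# Question: is a ~38% "AI-generated" rate consistent with uniform random
# guessing among three options (human / AI / unsure), i.e. p = 1/3?
from math import comb

def binom_sf(k, n, p):
    """Exact P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

n = 500          # hypothetical number of respondents per question
k = 190          # hypothetical "AI-generated" picks (38%)
p_guess = 1 / 3  # chance rate under uniform guessing

p_value = binom_sf(k, n, p_guess)
print(f"P(X >= {k} | random guessing) = {p_value:.4f}")
```

With these illustrative numbers the tail probability comes out well below 0.05, i.e. a 38% rate would be unlikely under pure guessing; the same logic applied to a majority of "human" picks makes the default-to-human pattern even starker.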
Our survey only used OpenAI's ChatGPT, but models from other major labs, such as Google (Gemini), Mistral AI, and Anthropic (Claude), are rapidly evolving and continue to generate more sophisticated outputs.
In the race to November’s election we cannot afford to waste any time. This innovation has risks, but we must not miss out on the rewards.
The Human Power Of AI
As Mary Daly, a labor economist and President and CEO of the Federal Reserve Bank of San Francisco, said in a recent interview in Wired, "Ultimately people decide, not technology. And if you're so afraid that you say, 'We're not doing it,' then you end up with a worse outcome. It's about how we deploy AI smartly, and well, so that we're pleased 10 years from now."
It is critical to keep humans at the center. AI, like social media, is simply a creation of our own making, not something we should fear, but one we should control to propel us into a more advanced society. Progressive campaigns now have the chance to do that and win more races this cycle.
For a copy of the full survey results, get in touch: maya@battlegroundai.com