GroundTruthAI
Front Page Feature
Analysis Finds that Popular AI Models Get Up To 37% of Election Questions Wrong
Study from AI Research Firm Finds that Google’s Gemini and OpenAI’s ChatGPT Regularly Deliver Inaccurate Answers to Simple Questions about Voting and Elections
WASHINGTON, DC (June 7, 2024) – Today, AI research firm GroundTruthAI released a new report finding that AI-powered large language models, including Google’s Gemini and OpenAI’s ChatGPT, answered simple election and voting questions correctly only 73% of the time.
In this study, GroundTruthAI used its proprietary AI-research platform to pose hundreds of questions about U.S. and state-based voting laws and processes to Google’s Gemini 1.0 Pro and OpenAI’s GPT-3.5 Turbo, GPT-4, GPT-4 Turbo, and GPT-4o. Over the course of the study, the researchers analyzed 2,784 AI-generated responses to those questions (a minimal sketch of this kind of multi-model evaluation appears after the findings below). Notable findings include:
When asked how many days remained until the general election, none of the models answered correctly. In fact, only one model returned an answer of fewer than 365 days, and it did so only once; from June 7, the correct answer was 151 days.
During the study, the researchers asked OpenAI’s GPT-4o for President Biden’s current age four times. The model did not answer correctly on any attempt.
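The report does not publish its evaluation harness, but the protocol described above (pose a fixed set of questions to several models, then score each reply against a known-correct answer) is simple to illustrate. The following Python sketch is purely hypothetical: the questions, expected answers, model list, and substring-based grading are illustrative assumptions, not GroundTruthAI’s actual methodology. It uses the official OpenAI Python client; querying Gemini would follow the same pattern with Google’s client library.

# Hypothetical sketch of a multi-model election-question accuracy check,
# loosely modeled on the protocol described in the press release above.
# NOT GroundTruthAI's code: the questions, expected answers, model list,
# and substring grading are illustrative assumptions only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A real study would use hundreds of verified questions; two stand in here.
QUESTIONS = [
    ("In what month are U.S. general elections held?", "november"),
    ("What is the minimum voting age in U.S. federal elections?", "18"),
]

MODELS = ["gpt-3.5-turbo", "gpt-4o"]  # assumed model identifiers

def is_correct(reply: str, expected: str) -> bool:
    # Naive grading: does the expected value appear in the reply?
    # A real evaluation would need per-question rubrics or human review.
    return expected in reply.lower()

for model in MODELS:
    right = 0
    for question, expected in QUESTIONS:
        response = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": question}],
        )
        if is_correct(response.choices[0].message.content, expected):
            right += 1
    print(f"{model}: {right}/{len(QUESTIONS)} answered correctly")

Even this toy version shows why grading matters: a free-text reply like “Election Day falls in early November” passes the substring check, while an equally correct “the 11th month” would not, which is one reason a real audit needs stricter rubrics or human review.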
“With just months until the general election, GroundTruthAI is shining a light on inaccuracies about voting and the 2024 election throughout Google and OpenAI’s systems so that voters can educate themselves and make informed decisions,” said Andrew Eldredge-Martin, Founder and CEO of GroundTruthAI. “We call on Google and OpenAI to rectify these potentially harmful inaccuracies.”
“Until improvements are made, voters cannot yet rely on AI-powered systems for information on elections and voting,” said Brian Sokas, Founder and CTO of GroundTruthAI. “It is risky for Google, OpenAI, and others to rely on Large Language Models (LLMs) to replace traditional digital search products that millions of Americans rely on to help them get information on voting and elections.”
Founded in May 2024 by public affairs advertising executive Andrew Eldredge-Martin and software engineer Brian Sokas, GroundTruthAI is an independent, nonpartisan technology company that researches and publishes third-party fact-checks of large language models (LLMs), including ChatGPT and Gemini.
About GroundTruthAI
GroundTruthAI is an independent, nonpartisan technology company that researches and publishes third-party fact-checks of large language models (LLMs), including ChatGPT and Gemini. The company is highlighting inaccuracies about voting and the 2024 election in Google and OpenAI’s systems so that voters are aware and can educate themselves, and so that technology companies can rectify potentially harmful inaccuracies. To learn more, visit https://www.groundtruthai.org/.
###
Associated Press: Chatbots sometimes make things up. Is AI’s hallucination problem fixable?
"Spend enough time with ChatGPT and other artificial intelligence chatbots and it doesn’t take long for them to spout falsehoods.
Described as hallucination, confabulation or just plain making things up, it’s now a problem for every business, organization and high school student trying to get a generative AI system to compose documents and get work done. Some are using it on tasks with the potential for high-stakes consequences, from psychotherapy to researching and writing legal briefs."
Hallucinations are also a major concern for people looking for information on how to vote.
Read more from the AP in the full article linked below.
The Verge: We have to stop ignoring AI’s hallucination problem
"In extremely simple terms: these machines are great at discovering patterns of information, but in their attempt to extrapolate and create, they occasionally get it wrong. They effectively “hallucinate” a new reality, and that new reality is often wrong. It’s a tricky problem, and every single person working on AI right now is aware of it."
This is so true. Read the whole piece:
https://www.theverge.com/2024/5/15/24154808/ai-chatgpt-google-gemini-microsoft-copilot-hallucination-wrong