New Study Reveals AI Assistants Misrepresent News Content 45% of the Time
Findings from 22 Media Organizations Highlight Systemic Issues in AI Responses
A landmark study involving 22 public service media organizations, including Deutsche Welle (DW), has produced sobering findings about the accuracy of widely used AI assistants. According to the research, four popular AI tools misrepresent news content 45% of the time, a stark reminder of the limitations of artificial intelligence in journalism.
A Collaborative Effort
This study brought together journalists from public service broadcasters such as the BBC (UK) and NPR (USA) to evaluate responses from four AI assistants: ChatGPT, Microsoft’s Copilot, Google’s Gemini, and Perplexity AI. The media organizations applied rigorous criteria, including accuracy, sourcing, provision of context, editorialization, and the ability to distinguish fact from opinion, to scrutinize the responses.
Key Findings
The scale of misrepresentation was alarming:
- 45% of answers contained at least one significant issue.
- 31% of responses had serious sourcing problems.
- 20% of answers included major factual errors.
DW specifically noted that 53% of answers to its questions had significant issues, with 29% tied directly to accuracy. Examples of factual errors included naming Olaf Scholz as Germany’s Chancellor and misidentifying NATO’s Secretary General.
The Growing Use of AI for News
As AI assistants become increasingly integral to how people access information, the implications of these findings are grave. According to the Reuters Institute’s Digital News Report 2025, 7% of online news consumers currently rely on AI chatbots for news, rising to 15% among those aged under 25.
Jean Philip De Tender, deputy director general of the European Broadcasting Union (EBU), expressed concern over the study’s conclusions, stating that the systematic distortion of news content jeopardizes public trust. "When people don’t know what to trust, they end up trusting nothing at all, which can deter democratic participation," he emphasized.
An Unprecedented Examination
This study is one of the largest research projects of its kind, replicating the methodology used in a prior study conducted by the BBC in February 2025. While there have been minor improvements since then, the results indicate persistently high error rates across all four AI assistants.
Insights from the Study
The study involved evaluating 3,000 AI responses to frequently asked news questions. Journalists assessed the responses in their areas of expertise without knowing which assistant had produced them. Notably, Gemini was identified as the poorest performer, with a staggering 72% of its responses showing significant sourcing issues.
Calls for Action
In light of these findings, the participating media organizations are urging national governments to take decisive action. They are advocating for the enforcement of existing laws surrounding information integrity and media pluralism. The EBU is collaborating on a campaign titled “Facts In: Facts Out,” which calls for AI companies to be accountable for how their products handle and redistribute news content.
The organizers’ statement underscores the campaign’s central demand: "If facts go in, facts must come out." In other words, AI tools must preserve the integrity of the news content they draw on.
Conclusion
As AI technology continues to evolve and permeate our access to information, the findings from this study serve as a crucial reminder of the responsibility that comes with such advancements. Addressing the significant issues of misinformation and ensuring the integrity of news is essential for maintaining public trust and promoting a well-informed society.
In an era where news consumption increasingly occurs through automated channels, stakeholders in journalism must work diligently to address these challenges, ensuring that AI assists rather than undermines public discourse.