
New Study Reveals AI Assistants Misrepresent News Content 45% of the Time

Findings from 22 Media Organizations Highlight Systemic Issues in AI Responses

A landmark study involving 22 public service media organizations, including Deutsche Welle (DW), has unveiled shocking statistics regarding the accuracy of commonly used AI assistants. According to the research, four popular AI tools misrepresent news content 45% of the time—a stark reminder of the limitations of artificial intelligence in journalism.

A Collaborative Effort

This study brought together journalists from esteemed public service broadcasters like the BBC (UK) and NPR (USA) to evaluate responses from four AI assistants: ChatGPT, Microsoft’s Copilot, Google’s Gemini, and Perplexity AI. The media organizations applied rigorous criteria—accuracy, sourcing, provision of context, editorialization, and the distinction between fact and opinion—to scrutinize the responses.

Key Findings

The study found an alarmingly high level of misrepresentation:

  • 45% of answers contained at least one significant issue.
  • 31% of responses had serious sourcing problems.
  • 20% of answers included major factual errors.

DW specifically noted that 53% of answers to its questions had significant issues, with 29% tied directly to accuracy. Examples of factual discrepancies included the incorrect naming of Olaf Scholz as Germany’s Chancellor and misattribution of NATO’s Secretary General.

The Growing Use of AI for News

As AI assistants become increasingly integral to how people access information, the implications of these findings are grave. According to the Reuters Institute’s Digital News Report 2025, 7% of online news consumers currently rely on AI chatbots for news, rising to 15% among those aged under 25.

Jean Philip De Tender, deputy director general of the European Broadcasting Union (EBU), expressed concern over the study’s conclusions, stating that the systematic distortion of news content jeopardizes public trust. "When people don’t know what to trust, they end up trusting nothing at all, which can deter democratic participation," he emphasized.

An Unprecedented Examination

This study is one of the largest research projects of its kind, replicating methodologies used in a prior study conducted by the BBC in February 2025. While there have been minor improvements since then, the results indicate a persistent high level of error across all four AI assistants.

Insights from the Study

The study evaluated 3,000 AI responses to frequently asked news questions. Journalists, who were not told which assistant had produced each response, assessed them against their own editorial expertise. Notably, Gemini was the poorest performer, with a staggering 72% of its responses showing significant sourcing issues.

Calls for Action

In light of these findings, the participating media organizations are urging national governments to take decisive action. They are advocating for the enforcement of existing laws surrounding information integrity and media pluralism. The EBU is collaborating on a campaign titled “Facts In: Facts Out,” which calls for AI companies to be accountable for how their products handle and redistribute news content.

The statement from the organizers underscores a vital demand: "If facts go in, facts must come out." This highlights the need for AI tools to maintain the integrity of the news they utilize.

Conclusion

As AI technology continues to evolve and permeate our access to information, the findings from this study serve as a crucial reminder of the responsibility that comes with such advancements. Addressing the significant issues of misinformation and ensuring the integrity of news is essential for maintaining public trust and promoting a well-informed society.

In an era where news consumption increasingly occurs through automated channels, stakeholders in journalism must work diligently to address these challenges, ensuring that AI assists rather than undermines public discourse.
