
AI Chatbots Misreport the News 35% of the Time


The Struggles of Generative AI: A Deep Dive Into NewsGuard’s Latest Report

In an era where generative AI systems have become integral to how we access information, a striking new report by the fact-checking service NewsGuard reveals a troubling trend: these advanced tools are struggling to distinguish fact from fiction. The study highlights that leading chatbots are now repeating false news claims a staggering 35% of the time, a significant rise from 18% just a year prior. This decline in accuracy underscores the challenges faced by AI amid the increasing demands for immediate responses.

The Quest for Instant Accuracy

The allure of generative AI lies in its ability to provide quick answers to almost any question. However, as the report suggests, this drive for rapid responsiveness has exposed a critical weakness: AI models are increasingly sourcing their information from a polluted online ecosystem rife with misleading content and artificially crafted news. As NewsGuard's McKenzie Sadeghi explained, "Instead of acknowledging limitations… the models are now pulling from a polluted online ecosystem." The result is authoritative-sounding yet inaccurate responses.

A Fundamental Shift in Performance

NewsGuard's audit points to a fundamental shift in how these systems operate. Large language models that previously declined to answer dubious inquiries now respond based on unreliable sources. Notably, the models answered 100% of current-events questions as of August 2025, in stark contrast to the previous year, when they declined to answer 31% of the time. This shift has led to a corresponding increase in the number of misleading answers served to users.

Perplexity: A Case Study in Decline

Perplexity, once celebrated as a top performer in AI chatbots, has seen its accuracy plummet. The report cited a notable example where Perplexity referenced a debunked story as part of its valid sources, blurring the lines between credible information and false narratives. Sadeghi pointed out that the AI treated untrustworthy materials and solid fact-checks as equals, revealing deeper issues with source evaluation and retrieval.

Scoring AI Models: A New Benchmark

For the first time, NewsGuard released individual scores for the ten AI chatbots it tested. The data is significant because it identifies not only the models that are struggling but also those that are adapting: over the twelve-month audit, some models improved while others kept repeating the same failures. Top performers like Claude and Gemini exhibited notable restraint, often declining to answer when reliable information was not available.

The Bigger Picture: Propaganda and Misinformation

The findings from NewsGuard also reveal a concerning trend regarding state-linked disinformation networks. These entities have developed sophisticated tactics to manipulate AI systems, creating a challenging environment for accurate information retrieval. Mistral’s Le Chat, Microsoft’s Copilot, and Meta’s Llama all fell victim to these orchestrated narratives, often citing sources from fake news articles or low-engagement social media posts.

Sadeghi emphasized the need for better evaluation and source weighting in AI to combat this problem. “Taking action against one site or one category of sources doesn’t solve the problem because the same false claim persists across multiple fronts,” she stated.

Conclusion: A Call for Enhanced Evaluation

As technology advances, the very fabric of information dissemination is being challenged. The findings from NewsGuard serve as a stark reminder: while generative AI offers unprecedented access to information, it is not without pitfalls. The call to action is clear: AI systems must adopt better mechanisms for evaluating and discerning sources if they are to serve as reliable guides in a complex and frequently misleading digital landscape. Until then, the pursuit of real-time truth remains a daunting task, clouded by a flood of misinformation.
