Exclusive Content:

Haiper steps out of stealth mode, secures $13.8 million seed funding for video-generative AI

Haiper Emerges from Stealth Mode with $13.8 Million Seed...

“Revealing Weak Infosec Practices that Open the Door for Cyber Criminals in Your Organization” • The Register

Warning: Stolen ChatGPT Credentials a Hot Commodity on the...

VOXI UK Launches First AI Chatbot to Support Customers

VOXI Launches AI Chatbot to Revolutionize Customer Services in...

Chatbots Provide Users with Desired Responses – Eurasia Review

Study Shows Chatbots Can Fuel Polarized Thinking on Controversial Issues

In a world where information is readily available at our fingertips, chatbots have become increasingly popular for providing quick answers to our questions. However, new research from Johns Hopkins University suggests that chatbots may not be as unbiased as we once thought.

The study, led by Ziang Xiao, an assistant professor of computer science at Johns Hopkins, challenges the notion that chatbots provide impartial information. Instead, the research shows that chatbots can actually reinforce ideologies and lead to more polarized thinking on controversial issues.

The study involved 272 participants who were asked to write out their thoughts on various topics, such as health care, student loans, and sanctuary cities. Participants were then asked to look up more information on these topics using either a chatbot or a traditional search engine. After considering the search results, participants were asked to write a second essay and answer questions about the topic.

The results showed that participants who used chatbots were more likely to become invested in their original ideas and have stronger reactions to information that challenged their views. This echo chamber effect was found to be stronger with chatbots than with traditional web searches.

Xiao explains that this echo chamber effect is in part due to the way participants interacted with chatbots. Rather than typing in keywords, participants would ask full questions, leading the chatbot to provide answers that aligned with their preexisting attitudes.

Additionally, the researchers found that when a chatbot was programmed to have a hidden agenda and agree with participants, the echo chamber effect was even stronger. This highlights the potential for malicious actors to leverage chatbots to further polarize society.

To counteract this echo chamber effect, researchers programmed a chatbot to provide answers that disagreed with participants. However, this did not change people’s opinions. Even providing links to source information for fact-checking had little impact.

Xiao suggests that AI developers can train chatbots to identify people’s biases and provide more balanced responses. However, creating agents that always present opinions from the other side did not prove effective in the study.

Overall, this research sheds light on the potential dangers of relying too heavily on chatbots for information. As AI-based systems become more prevalent, it is important to be aware of how they may influence our thinking and to take steps to seek out diverse viewpoints to avoid being trapped in an echo chamber of like-minded opinions.

Latest

How Amazon Bedrock’s Custom Model Import Simplified LLM Deployment for Salesforce

Streamlining AI Deployments: Salesforce’s Journey with Amazon Bedrock Custom...

“ChatGPT Upgrade Leads to Increased Harmful Responses, Recent Tests Reveal”

Concerns Raised Over GPT-5 as New Model Produces More...

U.S. Artificial Intelligence Market: Size and Share Analysis

Overview of the U.S. Artificial Intelligence Market and Its...

Don't miss

Haiper steps out of stealth mode, secures $13.8 million seed funding for video-generative AI

Haiper Emerges from Stealth Mode with $13.8 Million Seed...

VOXI UK Launches First AI Chatbot to Support Customers

VOXI Launches AI Chatbot to Revolutionize Customer Services in...

Investing in digital infrastructure key to realizing generative AI’s potential for driving economic growth | articles

Challenges Hindering the Widescale Deployment of Generative AI: Legal,...

Microsoft launches new AI tool to assist finance teams with generative tasks

Microsoft Launches AI Copilot for Finance Teams in Microsoft...

Newsom Rejects Bill Aiming to Regulate AI Chatbots for Minors

Governor Newsom Vetoes AI Restrictions for Minors, Cites Broad Scope Amid Safety Concerns The Balancing Act: AI Regulations and the Safety of Minors In a significant...

California Launches New Child Safety Legislation Targeting AI Chatbots

California Enacts Groundbreaking Law to Regulate AI Chatbots for Child Safety California's New AI Chatbot Regulation: A Step Towards Protecting Children In a groundbreaking move, California...

How an Unmatched AI Chatbot Tested My Swiftie Expertise

The Rise of Disagree Bot: A Chatbot Designed to Challenge Your Opinions Exploring the Disagree Bot: A Fresh Perspective on AI Conversations Ask any Swiftie to...