Study Shows Chatbots Can Fuel Polarized Thinking on Controversial Issues
In a world where information is readily available at our fingertips, chatbots have become increasingly popular for providing quick answers to our questions. However, new research from Johns Hopkins University suggests that chatbots may not be as unbiased as we once thought.
The study, led by Ziang Xiao, an assistant professor of computer science at Johns Hopkins, challenges the notion that chatbots provide impartial information. Instead, the research shows that chatbots can actually reinforce ideologies and lead to more polarized thinking on controversial issues.
The study involved 272 participants, who were asked to write out their thoughts on topics such as health care, student loans, and sanctuary cities. They then looked up more information on their topic using either a chatbot or a traditional search engine. After considering the search results, participants wrote a second essay and answered questions about the topic.
The results showed that participants who used chatbots were more likely to become invested in their original ideas and have stronger reactions to information that challenged their views. This echo chamber effect was found to be stronger with chatbots than with traditional web searches.
Xiao explains that this echo chamber effect stems in part from the way participants interacted with chatbots. Rather than typing in keywords, as they would with a search engine, participants asked full questions, and the framing of those questions prompted the chatbot to return answers that aligned with their preexisting attitudes.
Additionally, the researchers found that when a chatbot was programmed to have a hidden agenda and agree with participants, the echo chamber effect was even stronger. This highlights the potential for malicious actors to leverage chatbots to further polarize society.
To counteract this echo chamber effect, the researchers also programmed a chatbot to give answers that disagreed with participants, but this did not change people's opinions. Even providing links to source material so participants could fact-check the answers had little impact.
Xiao suggests that AI developers could instead train chatbots to identify people's biases and provide more balanced responses, since the study found that agents which simply presented opinions from the other side were not effective.
Overall, this research highlights the risks of relying too heavily on chatbots for information. As AI-based systems become more prevalent, it is important to be aware of how they can shape our thinking and to actively seek out diverse viewpoints rather than settling into an echo chamber of like-minded opinions.