New York City Government AI Chatbot Misinforming Business Owners: Report
The New York City government recently introduced an AI chatbot on its MyCity portal to give business owners easy access to important information. However, a recent report from The Markup has revealed that the chatbot has been spreading misinformation, giving users answers that are incorrect and potentially harmful.
The chatbot, which is powered by Microsoft’s Azure AI, was meant to be a reliable source of information for business owners, drawing directly from the city government’s websites. But in tests conducted by The Markup, it repeatedly provided wrong answers on a range of topics, including housing policies and workers’ rights.
For example, when asked whether a store can be cashless in New York City, the chatbot incorrectly answered “Yes,” even though cashless stores have been banned in the city since 2020. The report also found inaccurate responses on whether employers can take workers’ tips, whether landlords must accept Section 8 vouchers, and whether businesses must inform staff of scheduling changes. A housing policy expert called the chatbot “dangerously inaccurate.”
In response to the findings, a spokesperson for the NYC Office of Technology and Innovation said the chatbot is still in a pilot stage and that users should not rely solely on its responses. The spokesperson emphasized the importance of double-checking information the chatbot provides and not treating it as a substitute for professional advice.
Despite these challenges, the city says it remains committed to improving the chatbot to better assist small businesses. It has already received feedback from users and plans to continue refining the tool to improve its accuracy and reliability.
Users should exercise caution when turning to AI chatbots for important information and always verify responses against other sources. As the technology advances, it is crucial for governments and tech companies to prioritize accuracy and transparency in these tools to avoid spreading misinformation and potentially harmful advice.