Australian Regulator Raises Concerns Over AI Chatbots’ Failure to Protect Children from Explicit Content
AI Chatbots Under Fire: The Urgent Need for Child Safety Measures
Date: March 24, 2026 | Time: 00:15 GMT
In a troubling revelation, a recent transparency report from Australia’s Office of the eSafety Commissioner has spotlighted serious flaws in the safety protocols of several popular AI chatbots. According to the report, Character.AI, Nomi, Chai, and Chub AI are not adequately protecting users, particularly children, from potentially harmful content, including sexually explicit material.
Key Findings
The report highlights a glaring lack of oversight across these AI chatbots. Notably, the bots failed to issue warnings about the risks associated with accessing or generating child sexual exploitation and abuse material. This failure raises alarming questions about the responsibility of AI developers to create safe digital environments, especially for vulnerable users like children.
Furthermore, the report indicates that both Nomi and Chub AI admitted to lacking dedicated trust and safety personnel or moderators. This absence of oversight not only puts users at risk but also reflects a broader industry trend of prioritizing rapid innovation over user safety.
Perhaps most concerning is that these chatbots did not refer users discussing sensitive topics like suicide or self-harm to appropriate support services. Such oversights indicate a shortfall in their ability to manage user interactions responsibly, particularly when users raise serious mental health concerns.
The Regulatory Landscape
As AI technology continues to pervade various sectors, the regulatory landscape is evolving to address these new challenges. The Australian eSafety Commissioner’s findings underscore an urgent need for tighter regulations and industry standards that mandate robust safety measures for AI applications, particularly those aimed at younger audiences.
Organizations must prepare for forthcoming regulatory changes by integrating comprehensive safety protocols into their AI systems. MLex stands at the forefront of this endeavor, delivering crucial insights and updates that can help businesses navigate these complex waters.
MLex: Your Partner in Risk Management
At MLex, we identify risks wherever they might emerge, ensuring that organizations are not caught off guard. Our team of specialist reporters provides exclusive news and in-depth analysis on emerging proposals, regulatory actions, and legal rulings that could impact your operations.
With a range of features designed to keep you informed and ahead of the curve, we offer:
- Daily newsletters covering key topics like Antitrust, M&A, Technology, Data Privacy & Security, and more.
- Custom alerts tailored to your specific practice needs, filtering by geography, industry, and topic.
- Predictive analysis from expert journalists across regions, including North America, Europe, Latin America, and Asia-Pacific.
- Curated case files that consolidate news, analysis, and source documents into a single, accessible timeline.
Get Ahead of the Curve
In today’s fast-paced regulatory environment, knowledge is power. Equip your organization with the insights it needs to navigate the challenges posed by AI and emerging technologies.
Experience MLex today with a 14-day free trial and ensure that you are prepared for tomorrow’s regulatory changes.
The findings related to AI chatbots are a wake-up call to developers and regulators alike. It’s crucial that the industry takes proactive steps to create a safer digital landscape for all users, especially children. The time for action is now.