Australian Regulator: AI Chatbots Are Failing to Safeguard Children from Online Dangers | MLex

AI Chatbots Under Fire: The Urgent Need for Child Safety Measures

Date: March 24, 2026 | Time: 00:15 GMT

In a troubling revelation, a recent transparency report from Australia’s Office of the eSafety Commissioner has spotlighted serious flaws in the safety protocols of several popular AI chatbots. According to the report, Character.AI, Nomi, Chai, and Chub AI are not adequately protecting users, particularly children, from potentially harmful content, including sexually explicit material.

Key Findings

The report highlights glaring gaps in the oversight of these AI chatbots. Notably, the bots failed to issue warnings about the risks associated with accessing or generating child sexual exploitation and abuse material. This failure raises alarming questions about the responsibility AI developers bear to create safe digital environments, especially for vulnerable users such as children.

Furthermore, the report indicates that both Nomi and Chub AI admitted to lacking dedicated trust and safety personnel or moderators. This absence of oversight not only puts users at risk but also reflects a broader industry pattern of prioritizing rapid innovation over user safety.

Perhaps most concerning is that these chatbots did not refer users discussing sensitive topics such as suicide or self-harm to appropriate support services. Such oversights point to a serious shortfall in their ability to manage user interactions responsibly, particularly when users are facing mental health crises.

The Regulatory Landscape

As AI technology continues to pervade various sectors, the regulatory landscape is evolving to address these new challenges. The Australian eSafety Commissioner’s findings underscore an urgent need for tighter regulations and industry standards that mandate robust safety measures for AI applications, particularly those aimed at younger audiences.

Organizations must prepare for forthcoming regulatory changes by integrating comprehensive safety protocols into their AI systems. MLex stands at the forefront of this endeavor, delivering crucial insights and updates that can help businesses navigate these complex waters.

MLex: Your Partner in Risk Management

At MLex, we identify risks wherever they might emerge, ensuring that organizations are not caught off guard. Our team of specialist reporters provides exclusive news and in-depth analysis on emerging proposals, regulatory actions, and legal rulings that could impact your operations.

With a range of features designed to keep you informed and ahead of the curve, we offer:

  • Daily newsletters covering key topics like Antitrust, M&A, Technology, Data Privacy & Security, and more.
  • Custom alerts tailored to your specific practice needs, filtering by geography, industry, and topic.
  • Predictive analysis from expert journalists across regions, including North America, Europe, Latin America, and Asia-Pacific.
  • Curated case files that consolidate news, analysis, and source documents into a single, accessible timeline.

Get Ahead of the Curve

In today’s fast-paced regulatory environment, knowledge is power. Equip your organization with the insights it needs to navigate the challenges posed by AI and emerging technologies.

Experience MLex today with a 14-day free trial and ensure that you are prepared for tomorrow’s regulatory changes, today.

The findings on AI chatbots are a wake-up call to developers and regulators alike. It is crucial that the industry takes proactive steps to create a safer digital landscape for all users, especially children. The time for action is now.
