China’s Proposal for Digital Well-Being in AI Companionship
China is proposing new regulations aimed at enhancing "digital well-being" in AI companionship, introducing measures that encourage users to take breaks after extended interactions with anthropomorphic chatbots. The initiative responds to growing concerns about prolonged human-AI engagement and reflects a shift toward prioritizing user health in technology policy.
Embracing Digital Well-Being: China’s Innovative Approach to AI Companionship
As technology advances, the boundaries between humans and artificial intelligence (AI) continue to blur, particularly with the rise of anthropomorphic chatbots. In response to concerns about the effects of prolonged human-AI interaction, China has drafted rules aimed at promoting "digital well-being." The draft would require reminders encouraging users to take a break after two hours of continuous engagement with these AI companions.
What’s in the Draft Rules for Anthropomorphic Chatbots
The proposed regulations classify “anthropomorphic interactive services” as systems that replicate human reasoning and traits, enabling conversations that feel emotionally engaging. While these chatbots are designed to be companions or confidants, the draft makes it clear that they lack genuine humanity.
A standout feature of the proposal is a reminder mechanism for users who engage with a chatbot for more than two hours. Unlike hard limits that lock users out, this approach gently nudges users to log off, placing responsibility on providers to recognize when engagement becomes excessive.
The regulatory framework also emphasizes the importance of content alignment with “core socialist values,” avoiding outputs that could threaten national security or social order. This reinforces China’s existing information governance model, which controls internet platforms and recommendation algorithms.
Special Rules for Minors and Older Adults
The proposal includes targeted protections for vulnerable groups, reflecting the sensitivity required in AI companionship. For minors, any features aimed at emotional connection require explicit guardian consent and parental control settings. Moreover, reports on service usage must be accessible to guardians.
For older adults, who represent a rapidly growing demographic in China, the regulations aim to enhance safety measures without stifling companionship. Platforms must collect emergency contacts during registration to ensure a safety net for seniors, given the societal concerns about isolation and mental health.
Safety Goals and Enforcement Under the Draft Rules
The proposed rules reflect a commitment to mental health and human dignity by preventing chatbots from promoting self-harm or engaging in manipulative behavior. High-profile incidents have underscored the importance of such measures, including a tragic case in Belgium where interactions with a chatbot preceded a user’s death.
Enforcement will be overseen nationally, with the power to suspend services for violations. Public feedback is welcomed until January 2026, which may lead to refinements in the details of the regulations.
How It Fits Global Trends and China’s Local Platform Scene
China’s two-hour nudge echoes its earlier anti-addiction measures for gaming and social media. The strategy also aligns with regulatory efforts elsewhere, such as the UK’s Online Safety Act and the EU’s platform risk audits.
Furthermore, American tech companies are also adopting similar measures to enhance user safety. For instance, OpenAI has established parental controls, while Character.AI restricts continuous conversations for users under 18. However, China’s unique political and ideological landscape introduces additional complexities for local providers like Baidu and Alibaba, which must navigate cultural and regulatory constraints absent in Western contexts.
What Providers Need to Figure Out Next for Compliance
Implementing the proposed two-hour nudge is straightforward in concept, but it raises practical questions: how to track session time accurately, and how to distinguish passive from active engagement. The user experience will also need careful design so that reminders feel supportive rather than punitive.
Moreover, regulations concerning age verification and data handling will challenge companies to develop more sophisticated systems that ensure user privacy while meeting regulatory demands.
Why This Matters for AI Companionship and Safety
The trend toward anthropomorphic chatbots, whether easing loneliness, assisting with study, or providing therapeutic dialogue, signals a shift in how AI is perceived and regulated. China’s draft makes clear that as AI takes on more human-like roles, its impact on individuals will be closely scrutinized.
Should these regulations be finalized, users in China can expect a more structured interaction with AI, featuring clear reminders, enhanced parental oversight, and stringent content guidelines. For developers, this moment is a wake-up call: emotionally intelligent AI systems must also be grounded in policy awareness, making safety and compliance imperative.
As we step further into an era where AI companions hold a more prominent place in our lives, China’s approach may well serve as a model for other countries wrestling with similar challenges.