Ensuring Compliance in the Age of Chatbots: Navigating New Regulations
As we approach April 2026, a growing number of states are enacting regulations for chatbots—particularly those designed to simulate platonic, intimate, or romantic relationships with users. These "companion" chatbots have drawn growing scrutiny over mental health risks and the potential harms stemming from user interaction. This post explores what these new laws mean for businesses, the key regulatory trends, and practical steps companies can take to comply.
The Rise of Companion Chatbot Regulations
With chatbots becoming more integrated into our daily lives, from customer service to emotional support, legislative bodies are introducing laws that prioritize transparency and user safety, especially for minors. Let’s take a closer look at some of the notable regulations that have either been enacted or are soon to take effect across various states:
State Chatbot Laws Now in Effect or Newly Enacted
- California SB 243: Mandatory disclosure of non-human status, implementation of mental health protocols, and protections for minors. Effective January 1, 2026.
- Colorado AI Act (SB 24-205): Requires "reasonable care" to prevent algorithmic discrimination in high-risk AI systems. Effective June 30, 2026.
- Idaho (SB 1297): Focuses on conversational AI safety and transparency, modeled after Nebraska's regulations. Effective July 1, 2027.
- Nebraska (LB 525): Introduces comprehensive safety and transparency obligations for conversational AI. Effective July 1, 2027.
- Oregon (SB 1546): Regulates companion chatbots and includes a private right of action for users. Effective January 1, 2027.
- Tennessee (SB 1580): Prohibits AI systems from impersonating licensed mental health professionals. Effective July 1, 2026.
- Washington (HB 2225 / SB 1546): Requires mandatory disclosures for companion chatbots. Effective January 1, 2027.
- New York's AI Companion Models law: Mandates detection of suicidal ideation and clear user disclosures. Effective November 5, 2025.
5 Key Regulatory Trends for Chatbot Operators
1. Private Rights of Action
Several recent state laws, such as Oregon's SB 1546, allow individuals to sue chatbot providers directly for statutory damages, significantly raising litigation exposure for noncompliant operators.
2. Transparency: Non-Human Disclosure
Most new laws require clear disclosures upfront, particularly for interactions with minors or where confusion with human operators may occur—an essential step to safeguard user trust.
3. Minor Safety Protocols
Legislation increasingly mandates safeguards to detect and respond to potentially harmful user behavior, including crisis referral to hotlines and content filtering measures.
4. Professional Licensure Restrictions
Several laws now make it illegal for chatbots to impersonate licensed professionals, directly addressing deception and risks posed by unlicensed practice.
5. Disclosure of Data Sources
California's recent legislation requires transparency about the sources of training data, with implications for both developers and deployers of AI-driven models.
What Should Companies Do?
To navigate this evolving landscape, companies utilizing chatbots must take the following actionable steps:
- Regularly review AI-enabled tools: Map each tool against relevant state laws to identify compliance gaps and regulatory risks.
- Update user interfaces: Make non-human disclosures clear and prominent.
- Test minor safety protocols: Work with internal teams or third-party vendors to confirm that crisis-response features function as intended.
- Audit content and scripts: Avoid suggesting any professional licensure unless properly authorized.
- Review data practices: Align data handling with new transparency obligations.
- Revise vendor agreements: Incorporate compliance responsibilities and clarify liability and indemnification provisions.
- Assess insurance coverage: Consider chatbot-related claims and adjust coverage as necessary.
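To make the first three steps concrete, the disclosure and crisis-referral requirements could be wired into a chat pipeline along these lines. This is a minimal, hypothetical sketch: the function names and keyword list are placeholders, and simple keyword matching alone would not satisfy laws like New York's AI Companion Models law, which expects detection of suicidal ideation; production systems would rely on trained classifiers and clinician-reviewed protocols.

```python
# Hypothetical sketch: non-human disclosure at session start plus a
# basic crisis-referral check before normal message handling.

CRISIS_KEYWORDS = {"suicide", "kill myself", "self-harm", "end my life"}
DISCLOSURE = "You are chatting with an AI assistant, not a human."
CRISIS_REFERRAL = (
    "It sounds like you may be going through something serious. "
    "You can reach the 988 Suicide & Crisis Lifeline by calling or texting 988."
)

def start_session() -> list[str]:
    """Open every session with a clear, prominent non-human disclosure."""
    return [DISCLOSURE]

def handle_message(text: str) -> str:
    """Route crisis signals to a referral message before normal handling."""
    lowered = text.lower()
    if any(keyword in lowered for keyword in CRISIS_KEYWORDS):
        return CRISIS_REFERRAL
    return generate_reply(text)

def generate_reply(text: str) -> str:
    """Placeholder for the actual model call."""
    return "..."
```

Testing the minor-safety step then reduces to asserting, for a suite of crisis-scenario inputs, that `handle_message` returns the referral rather than an ordinary model reply.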
Bottom Line
The regulatory landscape for chatbots is evolving rapidly, and organizations must stay on top of new obligations centered on transparency, minor protection, and user redress. Proactive compliance remains the best defense against enforcement actions and private litigation.
For more information on navigating chatbot laws and ensuring compliance, reach out to Orrick’s team: Meg Hennessey and Caitlin Burke. Staying informed is your best strategy in this fast-changing environment.