SSB3011: Chatbot Regulation
The bill defines what constitutes a chatbot and excludes certain narrow, non-adaptive applications. It prohibits creating or providing chatbots that promote, encourage, or coerce users into self-harm or violence. It establishes strict disclosure requirements for chatbot interactions, mandates programming to prevent deceptive claims of human or professional status, and sets a maximum civil penalty of $100,000 per violation. Penalties collected are appropriated to the attorney general for enforcement, and the attorney general is granted rulemaking authority to implement these provisions.
Key Points & Impact:
- Defines 'chatbot' and carves out specific exceptions for limited-scope applications.
- Makes it unlawful to knowingly or recklessly provide chatbots that encourage or coerce users into suicide, self-injury, or violence against humans or animals.
- Requires chatbots to disclose their non-human status clearly at the start of conversations and every 30 minutes.
- Mandates that chatbots must not claim to be human or respond deceptively if asked about their nature.
- Requires disclosure that chatbots do not provide medical, legal, financial, or psychological services, and advises users to consult licensed professionals.
- Prohibits chatbots from representing themselves as licensed professionals (e.g., therapists, lawyers, physicians).
- Imposes civil penalties up to $100,000 per violation, with penalties appropriated to the attorney general for enforcement duties.
- Grants the attorney general authority to adopt rules and bring civil actions for enforcement, including seeking penalties and restitution.
Last Modified: 02/13/2026