California just drew a bold line in the sand between humans and machines. Under a freshly signed law, AI chatbots must now disclose that they’re, well, not people at all.
It sounds almost comical — like your phone whispering “I’m a robot, don’t get weird” — but this is no joke.
The law, known as Senate Bill 243, was signed by Governor Gavin Newsom on October 13 and takes effect on January 1, 2026.
It requires any AI system that could be mistaken for a human to clearly tell users they’re interacting with artificial intelligence.
For the full details, see the coverage of how California just passed a new law requiring AI to tell you it's AI.
This move is part of a wider wave of tech regulation sweeping across the state, where lawmakers are finally starting to put bumpers around an industry that’s been speeding unchecked.
Alongside this bill, Newsom signed others tackling deepfakes, AI accountability, and online safety.
He also vetoed a few that he felt went too far, saying overregulation could “choke innovation before it breathes.”
That delicate balance is described in coverage of Newsom’s signing spree on new AI laws, which highlights how California is trying to lead the national conversation without scaring away its tech powerhouses.
One of the law's most interesting clauses focuses on mental health. If an AI chatbot is designed for companionship or emotional support, it must regularly remind users that it's not human, and its operator must report annually on how the system handles users who express suicidal thoughts.
That may sound heavy, but given the rise of emotionally immersive AI companions, lawmakers are saying transparency could literally save lives.
It’s a concern echoed in recent discussion about California’s growing scrutiny of chatbots and youth mental health, where advocates argued that young users in particular are vulnerable to forming attachments to bots that feel all too real.
What’s striking here is how narrowly targeted this law is. Some legislators wanted an outright ban on AI chatbots for minors, but Newsom pushed back, calling that a “blunt instrument” approach.
Instead, this version aims for nuance — regulate behavior, not existence.
The tug-of-war between innovation and safety was visible when the governor vetoed an earlier bill that would have heavily restricted AI chatbots for kids, arguing that responsible design is better than blanket bans.
The broader implication? We’re moving toward a world where AI is legally obligated to be honest about itself.
And while that sounds like common sense, it’s a radical shift in digital ethics. Transparency used to be a marketing promise — now it’s a legal requirement.
Other states are already watching closely, and similar bills are brewing in Washington, Oregon, and New York.
Even in Europe, the EU AI Act includes similar transparency provisions requiring that people be told when they're interacting with an AI system, but California's focus on conversation-based systems makes it a first-of-its-kind law in the U.S.
It’s hard not to have mixed feelings about this one. On one hand, you can’t help but applaud the honesty — people deserve to know when they’re talking to code.
On the other, it makes you wonder if this is just the first step in defining what AI citizenship might look like.
When bots start saying “Hi, I’m an AI,” it’s a strange blend of humility and power. You might shrug it off now, but in a few years, you’ll remember when California was the first to make the machines confess.