California’s finally done what the rest of the country’s been tiptoeing around: it told artificial intelligence to keep its hands off the kids.
Earlier this week, Governor Gavin Newsom signed a groundbreaking law requiring that AI companion chatbots clearly tell users, especially minors, that they're talking to a machine.
The decision came right after he vetoed a broader proposal that would’ve banned minors from using AI chatbots entirely.
It’s a bold middle ground, one that tries to protect children without putting tech companies in a legal chokehold.
The new rules, known as Senate Bill 243, are a big shift in how AI companies will operate in the state.
Chatbots that mimic human conversation must now state, upfront and repeatedly, that they’re artificial.
And if they detect risky topics like self-harm, they’ll need to offer crisis support links and hand the conversation off to real human help.
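On the engineering side, that boils down to two checks wrapped around every reply: disclose, and escalate. Here's a minimal sketch of what such a compliance layer could look like. To be clear, this is illustrative only; the class name, the keyword list, and the reminder interval are assumptions, not language from the bill or any company's actual implementation.

```python
# Illustrative sketch only: names, thresholds, and the keyword check are
# assumptions, not the bill's actual requirements or any vendor's API.
import time

CRISIS_KEYWORDS = {"self-harm", "suicide", "hurt myself"}  # placeholder list
CRISIS_RESOURCES = (
    "If you're struggling, you can call or text 988 to reach a crisis counselor."
)
REMINDER_INTERVAL = 3 * 60 * 60  # how often to repeat the "I'm an AI" reminder

class CompliantChatSession:
    def __init__(self, is_minor: bool):
        self.is_minor = is_minor
        self.last_disclosure = 0.0  # timestamp of the last AI disclosure

    def _needs_disclosure(self) -> bool:
        # Disclose up front, then repeat on an interval for minors.
        first_message = self.last_disclosure == 0.0
        overdue = self.is_minor and (time.time() - self.last_disclosure) > REMINDER_INTERVAL
        return first_message or overdue

    def _looks_risky(self, text: str) -> bool:
        # A real system would use a trained classifier, not keyword matching.
        lowered = text.lower()
        return any(keyword in lowered for keyword in CRISIS_KEYWORDS)

    def respond(self, user_message: str, model_reply: str) -> str:
        parts = []
        if self._needs_disclosure():
            parts.append("Reminder: I'm an AI, not a real person.")
            self.last_disclosure = time.time()
        if self._looks_risky(user_message):
            # Surface crisis resources instead of letting the bot improvise.
            parts.append(CRISIS_RESOURCES)
        parts.append(model_reply)
        return "\n\n".join(parts)
```

In practice, the risky-topic check would be a trained safety classifier rather than a keyword list, but the shape of the logic is the same: the disclosure and the escalation sit outside the model, wrapped around whatever it says.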
It’s not about killing innovation — it’s about giving parents some breathing room in a world where an app can sound more empathetic than a teacher or friend.
Newsom’s move might sound cautious, but it’s far from a retreat.
His team made it clear that while the vetoed measure had good intentions, its reach was too broad and could’ve accidentally outlawed beneficial tools like AI tutors or emotional-support assistants.
That concern echoes what several analysts pointed out this week as California's next AI chatbot battle began brewing in the legislature.
The tone in Sacramento right now feels less like panic and more like a parent finally saying, “Alright, enough screen time — but you can still play.”
What’s interesting is how this local policy might ripple across the tech world.
Similar rules in the European Union are already shaping how AI companies design their user interfaces, and some U.S. lawmakers are quietly looking to California as a test case.
The Verge described it perfectly in a recent piece, noting that California’s new AI disclosure rule could set a precedent for national regulation.
If that happens, expect a lot of apps to start their conversations with, “Hi, I’m not a real person — but I’m here to help.”
And here’s where things get messy. Some major platforms are already scrambling to self-regulate before regulators come knocking.
Meta, for example, just rolled out new parental controls for its AI systems that interact with teens, adding transparency features and moderation alerts.
But let’s be honest: filters and fine print only go so far. When a 14-year-old starts chatting with a bot that’s always awake, always kind, and never argues — that relationship feels real, and that’s what worries psychologists most.
As a parent myself, I can’t help but feel torn. Technology isn’t the villain here — we’ve all seen kids thrive when they use AI for learning, coding, or just expressing themselves.
But when the same tech starts playing therapist or friend, the line gets blurry. California's decision forces us to look straight at it and ask: where does companionship end, and where does control begin?
The truth is, this debate’s just heating up. Other states are reportedly drafting their own versions of these laws, and if the federal government picks up the baton, the next generation of chatbots could look very different — less human-like, more transparent, maybe even more honest. It’s weird to say, but perhaps the most human thing AI can do right now… is admit it’s not.


