California officially told chatbots to tell the truth.
Starting in 2026, any conversational AI that could be mistaken for a person will have to clearly disclose that it is not human, thanks to a new law signed this week by Governor Gavin Newsom.
The measure, Senate Bill 243, is the first of its kind in the United States, and some are calling it a milestone in AI transparency.
The law seems simple enough: if your chatbot can fool someone into thinking it's a real person, it must confess. But the details run deep.
It also introduces new safeguards for children, requiring AI systems to remind minors every few hours that they are talking to a machine, not a person.
In addition, companies will have to report annually to the state's Office of Suicide Prevention on how their bots respond to disclosures of self-harm.
It's a sharp turn from the anything-goes AI landscape of just a year ago, and it reflects growing global concern about the emotional impact of AI on users.
You'd think it was inevitable, right? We've finally reached the point where people are developing relationships with chatbots, sometimes even romantic ones.
The difference between an “empathetic assistant” and a “deceptive illusion” has become razor-thin.
That's why the new rule also prohibits bots from posing as doctors or therapists – no more Dr. Phil AI moments.
When signing the bill, the governor's office emphasized that it is part of a broader effort to, among other things, protect Californians from manipulative or deceptive behavior by artificial intelligence, a stance laid out in the state's wider digital safety initiative.
There's another layer that fascinates me: the idea of "truth in interaction." A chatbot that admits "I'm an AI" may seem trivial, but it changes the psychological dynamic.
Suddenly the illusion breaks – and maybe that's the point. It reflects California's broader trend toward accountability.
Earlier this month, lawmakers also passed a provision requiring companies to clearly label AI-generated content, part of a transparency bill aimed at curbing deepfakes and disinformation.
Still, tension lingers beneath the surface. Technology leaders fear a regulatory patchwork—different states, different rules, all requiring different disclosures.
It's easy to imagine developers switching “AI disclosure modes” depending on location.
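To make that "disclosure mode" idea concrete, here is a minimal, hypothetical sketch of how a developer might gate the behavior by jurisdiction. Everything in it is invented for illustration: the DisclosurePolicy class, the disclosure_prefix helper, and the specific intervals are assumptions, not anything drawn from the bill's text.

```python
from dataclasses import dataclass
from datetime import timedelta
from typing import Optional


@dataclass
class DisclosurePolicy:
    require_disclosure: bool                   # must the bot identify itself as non-human?
    minor_reminder_every: Optional[timedelta]  # how often to re-remind minors, if at all


# Illustrative policy table keyed by jurisdiction; the values are placeholders,
# not a reading of any statute.
POLICIES = {
    "US-CA": DisclosurePolicy(require_disclosure=True,
                              minor_reminder_every=timedelta(hours=3)),
    "DEFAULT": DisclosurePolicy(require_disclosure=False,
                                minor_reminder_every=None),
}


def disclosure_prefix(jurisdiction: str, is_minor: bool, is_session_start: bool,
                      minutes_since_last_reminder: float) -> str:
    """Return a disclosure line to prepend to the bot's next reply, or '' if none is due."""
    policy = POLICIES.get(jurisdiction, POLICIES["DEFAULT"])
    if not policy.require_disclosure:
        return ""
    if is_session_start:
        return "Note: you are chatting with an AI assistant, not a human.\n"
    if is_minor and policy.minor_reminder_every is not None:
        if minutes_since_last_reminder >= policy.minor_reminder_every.total_seconds() / 60:
            return "Reminder: you are chatting with an AI, not a person.\n"
    return ""


# Example: a minor in California, 200 minutes after the last reminder.
print(disclosure_prefix("US-CA", is_minor=True, is_session_start=False,
                        minutes_since_last_reminder=200))
```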
Legal experts are already speculating that enforcement could get murky, since the law hinges on whether a "reasonable person" could be misled.
And who defines “reasonable” when artificial intelligence rewrites the norms of human-machine conversation?
The bill's author, Sen. Steve Padilla, insists it's about setting boundaries, not stifling innovation. And to be fair, California is not alone.
Europe's AI Act has long pushed for similar transparency, while India's new framework for labeling AI-generated content suggests that the global momentum is building.
The difference is in tone – California's approach is personal, as if it protects relationships, not just data.
But here's the thing I keep coming back to: this law is both philosophical and technical. It's about honesty in a world where machines have gotten too good at pretending.
Perhaps in the age of perfectly written emails, perfect selfies and tireless AI companions, we actually need a law to remind us what is real and what is simply well-coded.
So yes, California's new rules may seem small at first glance.
But look closer and you'll see the beginning of a social contract between humans and machines. One that says, “If you're going to talk to me, at least tell me who – or what – you are.”