Why is AI chatbot regulation moving faster than anything we’ve seen in tech?
The Trump administration doesn’t care about AI safety in general, nor about children in particular. So how did nearly one hundred chatbot bills end up moving through state legislatures in the past three months?
Tech regulation is famously slow. With AI capabilities doubling every three to four months, how could legislators possibly catch up? And yet they are. Not on everything, but laws are definitely passing. Look at recent AI news and you’ll see a flurry of bills and laws in the past few months, nearly one hundred across both Republican and Democratic states.
That speed is unprecedented in tech policy. Let’s analyze why.
The Trump administration’s position on AI is simple: don’t get in the way.
It revoked Biden-era AI safety requirements, created a litigation task force to sue states that regulate AI too aggressively (can you imagine? ‘Merica did it), and released a national AI framework that doesn’t mention algorithmic bias, doesn’t propose an oversight body, and doesn’t require monitoring AI systems after they’re deployed. The message to the industry is clear: build fast, win the race against China, and Washington won’t get in the way.
And yet, in the middle of all this, AI chatbot safety legislation is passing through state legislatures faster than virtually any tech regulation in recent memory. Bills are clearing chambers with near-unanimous bipartisan votes. Tennessee just signed a law banning AI from impersonating mental health professionals. Washington, Idaho, and Georgia have all passed chatbot safety bills in the last few weeks. Nearly a hundred chatbot-specific bills are now active across the country.
“As state legislative sessions ramp up across the country, policymakers at both the state and federal levels have introduced dozens of bills aimed at chatbots... The Future of Privacy Forum (FPF) is currently tracking 98 chatbot-specific bills across 34 states... In both volume and policy focus, chatbot legislation is emerging as one of the most active areas of AI policymaking.”
— Policy Analysts, Future of Privacy Forum, March 12, 2026, The Chatbot Moment: Mapping the Emerging 2026 U.S. Chatbot Legislative Landscape
Maximum deregulation and record-speed regulation aren’t contradictory. They’re connected.
The administration’s executive order on AI explicitly carves out child safety from federal preemption. Preemption, in simple terms, means a federal law overrides state laws on the same topic. The White House wants to do that for AI broadly, aiming to replace the growing patchwork of state AI rules with a single, lighter national standard. But it said: not for child safety. States can keep those laws.
That carve-out came under pressure from Republican-controlled states that didn’t want the federal government overriding their own consumer protections. It was a political concession, not an act of conviction. But the effect has been enormous.
By leaving one door open, the administration effectively told every state legislator in the country: if you want to regulate AI and you want your law to survive, frame it around protecting children. And that’s exactly what happened. Legislators had bipartisan political cover (nobody votes against child safety). They had legal certainty that their laws wouldn’t be nullified. They had ready-made templates from California and New York. And they had the emotional catalyst that tech regulation almost never has: families whose children died after interacting with AI chatbots, followed by lawsuit settlements that proved the industry couldn’t defend its own products in court. As one tracking firm put it, ‘this isn’t a partisan topic, it’s a parent topic’.
That’s what surprised me. The one area of AI governance that’s actually moving (i.e., the one area where laws are being signed within weeks of introduction) is child safety. Everything else (algorithmic discrimination in hiring, surveillance pricing based on your personal data, bias in healthcare decisions) remains either stalled or vulnerable to being overridden by a future federal standard.
The system moved fast because children died publicly enough to make inaction politically impossible. The slower, quieter harms of AI (being denied a job by an algorithm, being charged more for groceries because of your zip code) don’t generate the same urgency, even though they touch far more lives.





