California’s AI Crackdown: Is Regulation Moving Too Fast?
California’s latest legislative session saw Governor Gavin Newsom sign multiple bills aimed at regulating artificial intelligence. These laws tackle deepfakes, campaign transparency, and protections for performers against AI-generated likenesses. But are lawmakers getting ahead of themselves?
I recently joined Inside the Issues to discuss California’s approach to AI regulation, including the controversial SB 1047, which would require AI companies to certify that their models won’t cause “catastrophic harm.”
Regulating the Unknown
Right now, AI regulation feels a bit like doomsday prepping—legislators are imagining worst-case scenarios and crafting laws before there’s concrete evidence of harm. The problem? This approach assumes risks that may never materialize, potentially stifling innovation in the process.
Take SB 1047, spearheaded by State Senator Scott Wiener. It flips the burden onto AI companies, requiring them to certify that their models won’t cause harm—even though the bill leaves the boundaries of that “harm” poorly defined. It’s a classic case of regulate first, figure it out later.
A Burden on Innovation?
Wiener argues that this bill merely enforces what big AI firms (like OpenAI, Google, and Anthropic) claim they’re already doing. But as I pointed out in the interview, there’s a big difference between voluntary commitments and government-mandated obligations. Once the law steps in, compliance costs skyrocket—meaning companies must pour resources into insurance, legal teams, and bureaucratic oversight instead of innovation.
This isn’t just about tech giants. The idea that only major corporations would be affected is misleading: AI startups today are raising hundreds of millions of dollars, which means many will be swept up in the regulation’s net. That helps explain why figures like Marc Andreessen oppose SB 1047, while others like Elon Musk support it. Strange bedfellows, indeed.
A Patchwork Nightmare
Another major issue? Regulatory fragmentation. If California sets strict AI rules while other states take different approaches, tech companies will face a compliance nightmare. Hawaii, for example, is pushing even more extreme laws that require companies to prove AI is beneficial before they can launch a product.
We’ve already seen how this plays out with privacy law, where state-by-state rules have created a chaotic patchwork of compliance obligations. Without federal action, AI could head down the same road—stifling progress rather than fostering responsible development.
What’s Next?
Newsom hasn’t yet signed SB 1047, and he may prefer to wait and see how the debate unfolds. Politically, that’s the safest move. Substantively, it’s also the right one: rushing into regulation before we fully understand AI’s trajectory could do more harm than good.
As California barrels ahead on AI legislation, the big question remains: Do we regulate based on hypothetical fears, or do we allow innovation to evolve and address real problems as they arise?
Let me know your thoughts in the comments.