New CA AI Regulations and What They Mean for Tech/AI Companies
October 2025
For years, AI startups thrived in the “move fast and break things” era. Now Sacramento is saying: “Move fast, but document it, disclose it, test it for bias, and make sure a human reviews it.”
California’s new AI laws — including SB 53, the AI Transparency Act, and new Automated Decision-Making regulations — are rewriting the rules for how AI companies operate.
If you’re building, selling, or using AI tools, your risk profile just changed. Here’s how, and what to do about it.
What the Laws Actually Do
SB 53 (September 2025) introduces safety, security, and incident reporting requirements for advanced AI models.
The AI Transparency Act (January 2026) requires disclosure and watermarking of AI-generated content.
AB 2013 (September 2024) forces companies to reveal what data they used to train their models.
Automated Decision-Making Rules (October 2025) limit the use of algorithms in hiring and HR unless they’re bias-tested and human-monitored.
CCPA extensions (under administrative review) add new privacy and notice obligations when AI is involved in personal decisions.
So even if you’re “just a SaaS company using a bit of AI,” you may still fall under these laws.
The New Exposures No One Warned You About
Data Disclosure Liability:
If your training data includes copyrighted or personal information, you could face IP or privacy claims.
Algorithmic Bias:
If your AI tool makes an unfair decision (even accidentally), it can lead to discrimination lawsuits or regulatory action.
Transparency Gaps:
Failing to disclose AI use or content creation could violate the new transparency laws, or even your own contracts.
Model Hallucinations:
If your AI product gives false information that causes a client loss, that’s not “bad output”; that’s potential E&O exposure.
Regulatory Investigations:
Expect more agencies asking, “How was this model trained?” and “Who approved this output?”
How Insurance Needs to Catch Up
The problem? Most policies weren’t written with AI in mind, which is why tailored coverage and creative restructuring are a must.
D&O covers executives but may exclude regulatory or internal whistleblower claims tied to algorithmic issues.
E&O / Tech Liability may only cover software defects, not model misjudgments.
Cyber handles breaches, but not misuse of data sets or AI-driven misinformation.
EPL covers discrimination, but not when the bias comes from code.
Insurers are rewriting policy language fast, but until that language catches up, founders need to ask sharper questions.
The Founders’ Cheat Sheet
When reviewing your coverage, ask:
“Does my E&O cover algorithmic or AI-related errors?”
“Are regulatory investigations covered under D&O?”
“Is data training liability included in my cyber policy?”
“What exclusions mention AI, algorithms, or predictive analytics?”
If your broker can’t answer clearly, that’s your answer.
The Smart Move
Don’t wait for a lawsuit or regulator to explain your exposure. Get ahead by:
Documenting your AI process.
Keep logs, bias audits, and version control (a minimal logging sketch follows this list).
Updating your contracts.
Make sure clients and vendors share risk fairly.
Reviewing your coverage.
Make sure it’s not written for 2019 software companies.
Partnering with advisors who get both AI and insurance.
Like Execurisk. Shameless plug.
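What does “documenting your AI process” look like in practice? Here’s a minimal, hypothetical Python sketch of an audit trail for automated decisions. The log_decision helper and its field names are our own illustration, not a format prescribed by any of these laws, so adapt it with counsel:

import json
import hashlib
from datetime import datetime, timezone

# Hypothetical audit record for one automated decision.
# Field names are illustrative; align them with what your
# counsel says the regulations actually require.
def log_decision(model_name, model_version, inputs, output, reviewer=None):
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model_name,
        "model_version": model_version,  # ties the output to a specific model build
        "input_hash": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()
        ).hexdigest(),  # records what the model saw without storing raw PII
        "output": output,
        "human_reviewer": reviewer,  # evidence of human monitoring
    }
    with open("ai_decision_log.jsonl", "a") as f:
        f.write(json.dumps(record) + "\n")
    return record

# Example: a hiring-screen model flags a candidate and a human signs off.
log_decision(
    model_name="resume-screener",
    model_version="2025.10.1",
    inputs={"candidate_id": "c-123", "role": "backend-engineer"},
    output={"recommendation": "advance", "score": 0.87},
    reviewer="jane.doe@example.com",
)

A trail like this is exactly what a regulator, or your insurer’s claims adjuster, will ask to see first.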
The Execurisk Perspective
AI regulation isn’t a problem. It’s a predictable risk, and predictable risks are insurable. What’s not insurable is ignorance.
Our job isn’t to sell policies; it’s to make sure you don’t learn about exclusions the hard way.
Move fast. Stay covered. Built in LA.