6 Changes to CA AI & Privacy Laws, and How to Navigate Them
In January 2026, a number of new laws aimed at strengthening consumer privacy and ensuring AI safety amid continued innovation come into effect. Most businesses in CA, even small ones, will be affected to some degree. Below are the circumstances in which these changing laws will affect CA businesses, and what they can do to prepare.
1. Applies to any business with California users or employee data, including (but not limited to) e-commerce companies, consumer apps, and SaaS and AI companies storing customer/user data
In the event of a data breach:
Current: Businesses are required to provide notification in the most expedient time possible and without unreasonable delay.
New (SB 446): Businesses must provide notice to affected customers within 30 calendar days of the breach and, when more than 500 CA residents are affected, notify the California Attorney General within 15 days of notifying customers.
Why This Matters: This 30-day deadline compresses the timeline for incident response, legal counsel, PR and crisis management, and notification/call-center spending. Businesses will want to make sure their Cyber liability policies not only cover the associated costs, but also actually step in and help arrange the activities the business will need to take post-breach. (A short timeline sketch after the checklist below illustrates how the two deadlines stack.)
What Businesses Can Do:
Review Cyber policies to make sure they include broad breach-response coverage, not low sublimits. Additionally, validate that coverage for regulatory investigations (including under the CCPA) is explicitly included.
Review vendor contracts to make sure your business does not trigger downstream liability. Note that even if your contracts are relatively tight, you’ll still be liable for legal fees if you are merely named in a lawsuit because of a downstream trigger from a third-party vendor, so make sure that exposure is covered under your Cyber policy.
Conduct a data breach simulation so that if/when a breach occurs, leadership is ready and knows how to act, rather than wasting time figuring out what needs to be done and how.
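To make the stacked deadlines concrete, here is a minimal Python sketch of how an incident-response team might track them. The function name and the assumption that the 30-day clock runs from the date the breach is identified are ours for illustration, not statutory language; confirm the exact trigger dates with counsel.

```python
from datetime import date, timedelta
from typing import Optional

def sb446_deadlines(breach_identified: date,
                    customers_notified: Optional[date],
                    ca_residents_affected: int) -> dict:
    """Rough SB 446 timeline tracker (illustrative only, not legal advice).

    Assumes the 30-day customer-notification clock runs from the date the breach
    is identified, and the 15-day Attorney General clock runs from the date
    customers are notified when more than 500 CA residents are affected.
    """
    deadlines = {"notify_customers_by": breach_identified + timedelta(days=30)}
    if ca_residents_affected > 500 and customers_notified is not None:
        deadlines["notify_attorney_general_by"] = customers_notified + timedelta(days=15)
    return deadlines

# Example: breach identified Jan 5, customers notified Jan 20, 2,300 CA residents affected
print(sb446_deadlines(date(2026, 1, 5), date(2026, 1, 20), 2300))
# {'notify_customers_by': datetime.date(2026, 2, 4),
#  'notify_attorney_general_by': datetime.date(2026, 2, 4)}
```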
2. Applies to registered data brokers and businesses whose core model involves selling or sharing consumer data (e.g., data brokers, adtech/martech platforms, some retail/e-commerce, and social platforms)
When a consumer submits a data deletion or opt-out request:
Current: Data brokers must register annually with the CPPA, and they must provide business info and include themselves in the deletion mechanism created by the DELETE Act.
New (SB 361): Registered data brokers that sell or share consumer data must provide more detailed information during their annual registration with the CPPA, including specific details about data collection, data sales, and consumer access and deletion processes.
Why This Matters: Together with the DELETE Act and its implementing regulations, SB 361 means brokers must be ready to reliably process large volumes of deletion requests, document how they do so, and demonstrate compliance in audits (one way to structure that documentation is sketched after the checklist below).
What Businesses Can Do:
Review your Cyber liability policy for regulatory coverage. Not all policies cover this, and some cover it with low sublimits. If your business brokers any kind of user data, you’ll want to confirm you have strong regulatory coverage for any associated costs.
Ensure that your E&O coverage includes data-handling errors and misrepresentation.
Check that your contracts with vendors protect you from any liabilities they may carry. The best way to be sure of this is to run them by an attorney.
Map data flows with a risk professional to highlight risk transfer gaps.
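As a rough illustration of what “document how you do so” can look like in practice, here is a minimal Python sketch of an append-only log for deletion requests. The file format, field names, and status values are illustrative assumptions, not CPPA requirements.

```python
import json
from datetime import datetime, timezone
from pathlib import Path

LOG_FILE = Path("deletion_requests.jsonl")  # append-only audit log (illustrative)

def log_deletion_request(consumer_id: str, source: str, status: str) -> None:
    """Append one deletion-request event so processing can be demonstrated in an audit.

    `source` might be "cppa_delete_mechanism", "email", or "web_form"; `status` might
    be "received", "verified", "completed", or "denied". These values are assumptions.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "consumer_id": consumer_id,
        "source": source,
        "status": status,
    }
    with LOG_FILE.open("a") as f:
        f.write(json.dumps(record) + "\n")

# Example: a request arrives through the statewide deletion mechanism and is completed
log_deletion_request("c-10482", source="cppa_delete_mechanism", status="received")
log_deletion_request("c-10482", source="cppa_delete_mechanism", status="completed")
```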
3. Applies to any company developing or using AI
When AI causes any type of harm:
Current: A general duty of care exists, but the law does not explicitly state that when damage or injury is caused by AI, the company using or procuring it is held liable.
New (AB 316): Businesses that develop, modify, or use AI cannot avoid liability by arguing that an AI system acted autonomously. The traditional duty of care still applies even when AI is involved.
Why This Matters: Companies will be held liable for algorithmic bias or discrimination, and product liability lawsuits will be tied to model errors. Companies will have a broader duty to maintain oversight and documentation of AI usage and implications, while E&O claims will likely increase in frequency as higher litigation exposures for AI-reliant products emerge.
What Businesses Can Do:
Review your Tech E&O for explicit coverage of algorithmic errors, AI-generated recommendations, autonomous decision-making failures, and bias, discrimination, or unfair outcome claims.
Secure endorsements that expand coverage to AI operations.
Confirm that Cyber policies cover AI incidents, including training data mishandling, model poisoning, and model drift failures.
Ensure your business is compliant by implementing an AI governance framework aligned with NIST AI RMF or ISO 42001.
Document model development, testing, and guardrails thoroughly, and establish human review for high-impact decisions.
Keep audit trails for model versions, training data sources, fine-tuning steps, and deployment logs (one way to structure such a record is sketched below).
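One lightweight way to keep such an audit trail is a structured record per model release, as in the Python sketch below. The dataclass fields are illustrative assumptions about what useful documentation contains, not a schema required by AB 316, NIST AI RMF, or ISO 42001.

```python
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class ModelReleaseRecord:
    """One audit-trail entry per deployed model version (illustrative schema)."""
    model_name: str
    model_version: str
    training_data_sources: list[str]
    fine_tuning_steps: list[str]
    evaluation_results: dict[str, float]
    human_review_required: bool  # gate high-impact decisions through a person
    deployed_at: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

    def write(self, path: str = "model_audit_log.jsonl") -> None:
        with open(path, "a") as f:
            f.write(json.dumps(asdict(self)) + "\n")

# Example entry for a hypothetical recommendation model
ModelReleaseRecord(
    model_name="quote-assist",
    model_version="2.3.1",
    training_data_sources=["internal_claims_2019_2024", "licensed_rate_tables"],
    fine_tuning_steps=["base model", "LoRA fine-tune on labeled quotes"],
    evaluation_results={"accuracy": 0.94, "demographic_parity_gap": 0.02},
    human_review_required=True,
).write()
```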
4. Applies to frontier model labs, large AI developers that train frontier models, AI infrastructure companies, and enterprises that build or host powerful models meeting the statutory thresholds. Companies that later modify or fine-tune those models may be indirectly affected via contractual and governance expectations.
When companies develop, train, or substantially modify frontier models:
Current: No explicit law governs public-safety and disclosure requirements for these models.
New (SB 53): Developers of advanced “frontier AI models” must publish catastrophic-risk mitigation plans, report certain safety incidents to CA’s Office of Emergency Services, and demonstrate internal safety oversight.
Why This Matters: Public safety disclosures create litigation exposure, while misalignments between public statements and internal practices create D&O risk. Mandatory reporting to California OES creates regulatory oversight. Failing to report an incident creates potential negligence claims.
What Businesses Can Do:
Review your D&O coverage for regulatory investigations related to AI safety issues, shareholder lawsuits alleging misleading disclosures, and oversight failures.
Review E&O coverage for AI development and safety claims.
Look for exclusions involving “High-risk AI”, “Model-training incidents”, and “Product design”.
Formalize compliance by developing an AI Safety Plan before disclosures are required. That plan should include aligning internal processes with what will be publicly claimed, creating a reporting mechanism for “critical safety incidents,” implementing a cross-functional safety review board, and documenting all safety mitigations and testing procedures.
5. Applies to businesses using any apps where conversational AI could be mistaken for a human, including chatbots
When companies use conversational AI (as a chatbot or otherwise) that could be mistaken for a human:
Current: There is no explicit law requiring AI companions/chatbots to disclose that they are AI, not human.
New (SB 243): Bots must clearly disclose to users that they are AI, not human, when a reasonable user might be misled. Operators of companion-chatbot platforms must implement safety protocols to prevent the production of content encouraging self-harm or suicidal ideation, and must publish such protocols publicly. Enhanced protections apply when a user is a minor: additional disclosures, suitability warnings, and stricter content controls. The law prohibits certain manipulative design features (e.g., unpredictable reward patterns meant to encourage engagement). SB 243 also establishes a private right of action (i.e., individuals harmed by violations can sue), which many existing privacy or content-safety laws did not have in the same form.
Why This Matters: Because prior law regulated platforms generally (not AI-specific chatbots), many operators of generative-AI chatbots were operating without clear legal guidance. Not only must AI chatbots now be explicitly labeled as AI, businesses also have a specific duty to safeguard against self-harm advice, manipulative engagement, and mental-health risks, or face possible legal action. Beyond the operational work compliance will take, liability for harmful or inappropriate AI-generated content will increase, enforcement for failing to disclose AI use will grow, obligations will be higher when minors interact with chatbots, and negligence, emotional distress, and product liability lawsuits will become more common.
What Businesses Can Do:
Review your E&O policy for content liability, failure to warn, improper output generation, and insufficient content moderation.
Ensure Cyber includes wrongful collection of user data from chatbot interactions.
Evaluate any exclusions in either policy (and D&O) related to generative AI, user-generated content, or psychological or emotional harm.
Formalize compliance processes by requiring visible and frequent AI disclosures in the UI and implementing content filtering and monitoring for self-harm content, sexual content, and harmful advice (a minimal sketch of this pattern follows this list).
Add age-warning mechanisms and safeguards for minors, maintain logs of flagged content and moderation actions, and build an appeals or human escalation process for high-risk interactions.
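As a rough illustration of the disclosure-plus-filtering pattern, here is a minimal Python sketch. The keyword screen, disclosure wording, and field names are placeholders we have assumed; a production system would use a trained safety classifier and counsel-reviewed disclosure language.

```python
AI_DISCLOSURE = "You are chatting with an AI assistant, not a human."  # wording is illustrative

# A real deployment would use a trained safety classifier; this keyword list is a placeholder.
SELF_HARM_TERMS = ("suicide", "self-harm", "hurt myself")

def screen_reply(user_message: str, model_reply: str, turn_number: int) -> dict:
    """Attach an AI disclosure and flag exchanges that need human escalation."""
    flagged = any(term in user_message.lower() or term in model_reply.lower()
                  for term in SELF_HARM_TERMS)
    reply = model_reply
    if turn_number == 1 or turn_number % 10 == 0:  # repeat the disclosure periodically
        reply = f"{AI_DISCLOSURE}\n\n{reply}"
    return {
        "reply": reply,
        "escalate_to_human": flagged,  # route to crisis resources / human review
        "log_for_audit": flagged,      # keep records of flagged content and actions taken
    }

# Example: a first-turn message that should be flagged and escalated
result = screen_reply("I feel like hurting myself", "I'm sorry you're feeling this way...", turn_number=1)
```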
6. Applies to businesses using algorithmic optimization of pricing
When companies use pricing algorithms for their products/services:
Current: Under the existing Cartwright Act, price-fixing required proof of an agreement between competitors. Algorithmic coordination was not explicitly addressed, making it harder for plaintiffs to prove an “agreement” when prices were set by shared or similar algorithms rather than direct human communication.
New (AB 325): An “agreement” under the Cartwright Act can now be inferred from the use or distribution of a common pricing algorithm, even without evidence of explicit human coordination.
Why This Matters: Using the same algorithm as competitors may appear collusive. Antitrust suits no longer require explicit human agreement. Vendors become a liability vector if their algorithms coordinate pricing, creating a new layer of third-party liability. And investigations under California’s Cartwright Act are costly.
What Businesses Can Do:
Review your D&O policy for antitrust coverage; many mid-market companies have low or no protection.
Ensure your E&O covers vendor-led pricing mistakes.
Evaluate any exclusions in either policy for antitrust, unfair competition, and deceptive trade practices.
Document independent pricing strategies by maintaining logs that show decision-making independent of vendor recommendations (one possible log format is sketched after this list).
Audit vendors supplying pricing tools by obtaining governance documentation.
Review ML models to ensure they do not incorporate competitor data.
Train product and pricing teams on antitrust risks of algorithmic tools.
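One possible way to document independent pricing decisions is to record, for every price change, what the vendor tool recommended, what price was actually set, and the internal rationale. The Python sketch below uses illustrative field names and is an assumption about useful structure, not a format AB 325 prescribes.

```python
import csv
from datetime import datetime, timezone

LOG_PATH = "pricing_decisions.csv"  # illustrative location

def log_pricing_decision(sku: str, vendor_recommendation: float,
                         price_set: float, rationale: str, decided_by: str) -> None:
    """Record that a human set the price, and why, independently of the vendor tool."""
    with open(LOG_PATH, "a", newline="") as f:
        csv.writer(f).writerow([
            datetime.now(timezone.utc).isoformat(),
            sku,
            vendor_recommendation,
            price_set,
            rationale,
            decided_by,
        ])

# Example: pricing manager overrides the vendor tool based on internal cost data
log_pricing_decision(
    sku="WIDGET-42",
    vendor_recommendation=19.99,
    price_set=18.49,
    rationale="Set to internal margin target; vendor suggestion reviewed but not followed",
    decided_by="pricing_manager@example.com",
)
```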
Next Steps
Book a call with Execurisk to see where your current insurance coverage and risk management processes will leave you exposed come January 1st.