In September 2024, California passed 17 new laws targeting generative AI (GenAI) technologies. The laws address critical issues such as deepfakes, AI watermarking, child and worker protection, and AI-generated misinformation. Despite this progress, Governor Newsom vetoed a high-profile bill, Senate Bill (SB) 1047, which would have mandated safety measures for large AI models. The legislative package makes California the first state in the U.S. to introduce wide-ranging AI regulation and is expected to impact a broad spectrum of technology companies. This post focuses on two key laws, Assembly Bill (AB) 2013 and SB 942, both of which take effect on 1 January 2026, and explores the implications of Newsom's veto of SB 1047.

AB 2013: AI Training Data Transparency

AB 2013 introduces sweeping requirements for AI developers to be transparent about the data they use to train generative AI systems.

Key Requirements of AB 2013

Training Data Disclosure: From 1 January 2026, developers of generative AI systems must publicly post summaries of the datasets used to train their models. This disclosure must include the following (one possible machine-readable format is sketched after the list):

  • The sources of the datasets.
  • The types of data used.
  • Whether the data is protected by copyright or contains personal information.
  • Whether datasets were purchased, licensed, or obtained from the public domain.
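
AB 2013 does not prescribe a format for these summaries. Purely as an illustration, the sketch below models a single disclosure record in Python; every field name here is our own assumption, mapped onto the statutory items above rather than taken from the bill text.

```python
# Hypothetical schema for an AB 2013 dataset summary. All field names are
# illustrative assumptions; the law leaves the format of the public posting
# to the developer.
import json
from dataclasses import asdict, dataclass


@dataclass
class DatasetDisclosure:
    name: str                      # human-readable dataset name
    sources: list[str]             # where the data came from
    data_types: list[str]          # e.g. "text", "images", "audio"
    contains_copyrighted: bool     # is any of the data protected by copyright?
    contains_personal_info: bool   # does it contain personal information?
    provenance: str                # "purchased", "licensed", or "public domain"


summary = DatasetDisclosure(
    name="example-web-corpus",     # hypothetical dataset
    sources=["https://example.com/crawl-2023"],
    data_types=["text"],
    contains_copyrighted=True,
    contains_personal_info=False,
    provenance="licensed",
)
print(json.dumps(asdict(summary), indent=2))
```

A structured record like this is straightforward to publish as part of the required summary and just as easy to audit internally.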

Who Must Comply?

AB 2013 applies to developers of any generative AI system made available to Californians, including free and paid services. Notably, it is also retroactive, applying to AI systems released or substantially modified on or after 1 January 2022, which covers most popular GenAI systems such as ChatGPT and Copilot. Even developers of smaller AI systems are covered, as the law sets no quantitative thresholds.

Implications for Developers

AB 2013 breaks from industry norms by mandating the disclosure of training data, which many developers have traditionally kept proprietary. This shift toward transparency mirrors global efforts, such as the EU’s Artificial Intelligence Act, which also pushes for the disclosure of AI training datasets. To comply, developers should audit their current datasets, especially if they include third-party content or data from public sources.

SB 942: AI Detection and Watermarking for Audiovisual Content

While AB 2013 focuses on training data, SB 942 targets AI-generated output, particularly in the audiovisual realm. This law requires developers of large AI systems to offer tools for detecting AI-generated content and include watermarking for transparency.

Key Requirements of SB 942

AI Detection Tools: Starting in 2026, developers of generative AI systems with more than 1 million monthly users must offer a free tool to detect AI-generated images, videos, and audio content. This tool must:

  • Allow users to determine if the content was created or altered by AI.
  • Provide embedded metadata to trace the content’s origins.
  • Support API access so third-party platforms can integrate the detection tool.

Watermarking: Separately, developers must offer a watermarking option that identifies AI-generated content. The watermark must be clear, conspicuous, and difficult to remove, ensuring transparency in content creation.
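
To make these obligations concrete, here is a minimal sketch of the metadata side, using Pillow's PNG text chunks to attach and later read back a provenance manifest. The chunk name and manifest fields are our assumptions, loosely inspired by C2PA-style provenance records. One caveat matters: plain metadata is easy to strip, so this addresses only the embedded-metadata bullet above, not the "difficult to remove" watermark the law demands, which would require signal-level techniques on top.

```python
# Illustrative only: attach and read back a provenance manifest via PNG text
# chunks. PROVENANCE_KEY and the manifest fields are hypothetical. Metadata
# chunks are trivially removable, so this is embedded metadata, not a robust
# watermark.
import json
from datetime import datetime, timezone

from PIL import Image
from PIL.PngImagePlugin import PngInfo

PROVENANCE_KEY = "ai_provenance"  # hypothetical chunk name


def embed_provenance(in_path: str, out_path: str, system_name: str) -> None:
    """Mark a PNG as AI-generated by embedding a provenance manifest."""
    manifest = {
        "generator": system_name,
        "ai_generated": True,
        "created_utc": datetime.now(timezone.utc).isoformat(),
    }
    info = PngInfo()
    info.add_text(PROVENANCE_KEY, json.dumps(manifest))
    Image.open(in_path).save(out_path, pnginfo=info)


def detect_provenance(path: str) -> dict | None:
    """Return the manifest if present, else None (absence proves nothing)."""
    img = Image.open(path)
    raw = getattr(img, "text", {}).get(PROVENANCE_KEY)  # PNG text chunks
    return json.loads(raw) if raw else None
```

A production detector would combine several signals (metadata checks, invisible watermarks, model-based classifiers), since the absence of a manifest says nothing about how the content was made.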

Who Must Comply?

SB 942 applies to generative AI systems that produce audiovisual content and have over 1 million monthly users. Unlike AB 2013, this law only applies to large-scale providers of AI systems that are available to the public in California.

Implications for Developers

SB 942 aims to prevent the spread of misinformation and disinformation through AI-generated media. Developers must build compliance mechanisms into their systems, including watermarking capabilities and public-facing AI detection tools. This may require significant changes to both user interfaces and back-end systems. Developers should also ensure their third-party licensees adhere to these requirements, as noncompliance could lead to penalties.
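
On the API-access requirement, the statute again leaves the design open. Below is a hypothetical sketch of what such an endpoint might look like, reusing the provenance check from the previous sketch; the route, response shape, and field names are all our assumptions.

```python
# Hypothetical public detection endpoint. SB 942 requires API access but does
# not specify a design; the route and response fields below are assumed.
import io
import json

from flask import Flask, jsonify, request
from PIL import Image

PROVENANCE_KEY = "ai_provenance"  # must match the embedding side

app = Flask(__name__)


@app.post("/v1/detect")
def detect():
    upload = request.files.get("file")
    if upload is None:
        return jsonify(error="no file uploaded"), 400
    img = Image.open(io.BytesIO(upload.read()))
    raw = getattr(img, "text", {}).get(PROVENANCE_KEY)
    # Absence of a manifest is not proof of human authorship.
    return jsonify(ai_generated=raw is not None,
                   manifest=json.loads(raw) if raw else None)


if __name__ == "__main__":
    app.run(port=8080)
```

A third-party platform could then integrate the check with a single call, e.g. `curl -F file=@image.png http://localhost:8080/v1/detect`.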

Newsom’s Veto of SB 1047: What’s Next?

In contrast to his support for AB 2013 and SB 942, Governor Newsom vetoed SB 1047, which would have mandated safety measures for large AI models to guard against catastrophic risks such as weaponisation or attacks on critical infrastructure. Newsom cited concerns over the bill's exclusive focus on the largest models, arguing that regulation should be based on the actual risks posed by AI systems, not just their size. This mirrors the risk-based approach taken by the EU AI Act.

Despite this veto, Newsom emphasised that California will continue to lead in AI regulation. The focus will shift toward solutions that consider the real-world risks of AI deployment, particularly in sensitive sectors like critical infrastructure and decision-making systems.

What Should Developers Do Now?

With the effective date of 1 January 2026, developers must begin preparing to meet the requirements of AB 2013 and SB 942. To stay ahead of compliance, AI companies should:

  • Conduct data audits: review the datasets used to train their AI models, including data licensing agreements and public source information (see the sketch after this list).
  • Build detection and watermarking tools: start developing AI detection tools and watermarking features for audiovisual content, especially for systems with large user bases.
  • Monitor legislative updates: keep an eye on potential changes to these laws before their implementation date, as the AI regulatory landscape continues to evolve.
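
As a starting point for the first item, here is a hedged sketch of an audit pass over dataset manifests. It assumes each dataset ships with a JSON manifest carrying "license" and "provenance" fields, which is our convention for illustration, not an industry standard.

```python
# Hypothetical audit pass: flag dataset manifests missing the licensing or
# provenance records an AB 2013 disclosure will need. Assumes one JSON
# manifest per dataset with "license" and "provenance" keys (our convention).
import json
from pathlib import Path


def audit_manifests(root: str) -> list[str]:
    """Return manifest paths missing license or provenance information."""
    flagged = []
    for manifest_path in Path(root).rglob("*.json"):
        record = json.loads(manifest_path.read_text())
        if not record.get("license") or not record.get("provenance"):
            flagged.append(str(manifest_path))
    return flagged


if __name__ == "__main__":
    for path in audit_manifests("datasets/"):  # hypothetical directory layout
        print(f"needs review: {path}")
```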

Insights: California leading the way in GenAI regulation

California’s new AI laws mark a significant step in balancing innovation with public safety. Governor Newsom has made it clear that the state will continue to set the global standard for regulating GenAI technology, while still encouraging its responsible deployment. As more states and countries follow suit, AI developers everywhere should prepare for a future where transparency and accountability are at the core of AI innovation.