
AI Governance in California: Pioneering New Safety Standards

15 September 2024

California is poised to lead the nation in regulating artificial intelligence, with several major bills awaiting the Governor’s signature. The most prominent of these, SB 1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (“the AI Safety Act”), represents a comprehensive attempt to manage the risks posed by large AI models. Alongside it, the legislature has recently passed additional bills addressing a wide range of AI-related issues, from content curation to deepfake detection, all awaiting the Governor’s approval or veto – a decision expected by the end of this month.

If signed into law, these bills will set new standards for AI use and regulation, making California a pioneer in AI governance. Here’s an overview of the key legislation.


The AI Safety Act

The AI Safety Act places stringent requirements on developers of large-scale AI models. It applies to “covered models” and “covered model derivatives”, defined as AI models trained using more than 10^26 integer or floating-point operations at a development cost exceeding $100 million, or models created by fine-tuning a covered model using at least 3 × 10^25 operations at a cost of over $10 million. These thresholds will apply until 1 January 2027, after which new limits may be set by the Government Operations Agency.
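For illustration, the two-part threshold test can be expressed in a few lines of code. The following minimal sketch (in Python) encodes the thresholds as summarized above; the function and variable names are our own assumptions, and this is an illustration only, not legal advice.

  # Illustrative sketch of the AI Safety Act's "covered model" test.
  # Thresholds follow the bill as summarized above; names are assumptions.
  COVERED_OPS = 1e26                # training compute (integer or FP operations)
  COVERED_COST_USD = 100_000_000    # development cost threshold
  DERIVATIVE_OPS = 3e25             # fine-tuning compute threshold
  DERIVATIVE_COST_USD = 10_000_000  # fine-tuning cost threshold

  def is_covered_model(training_ops, training_cost_usd):
      # A model is "covered" if it exceeds both the compute and cost thresholds.
      return training_ops > COVERED_OPS and training_cost_usd > COVERED_COST_USD

  def is_covered_derivative(base_is_covered, finetune_ops, finetune_cost_usd):
      # A fine-tune of a covered model is itself covered above these thresholds.
      return (base_is_covered
              and finetune_ops >= DERIVATIVE_OPS
              and finetune_cost_usd > DERIVATIVE_COST_USD)

  # Example: a model trained with 2e26 operations at a $150M compute cost is covered.
  assert is_covered_model(2e26, 150_000_000)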

Notably, under the EU AI Act, which entered into force in August 2024, models trained with more than 10^25 floating-point operations are presumed to pose systemic risk and face additional obligations.

If signed into law, the AI Safety Act will become enforceable on 1 January 2026.


Key provisions of the AI Safety Act include:
  • Cybersecurity protection & “kill switch”: Implement safeguards against unauthorized access and maintain the ability to fully shut down the model, taking into account potential disruptions to critical infrastructure.
  • Safety and security protocol: Maintain a separate, written safety and security protocol, subject to annual reviews.
  • Third-party audits: Conduct annual audits to assess compliance, starting in 2026, with publicly shared reports.
  • Incident reporting & compliance: Report safety incidents within 72 hours and submit annual compliance statements.
  • Risk assessments: Conduct pre-release assessments of potential harm the model could cause.
  • Compliance statements: Annually submit signed compliance statements from senior officers to the Attorney General, detailing risk assessments and mitigation efforts.
  • Whistleblower protections: Employees involved in AI development can report concerns without fear of retaliation, and developers must provide clear notices about whistleblower rights.

The AI Safety Act also extends obligations to operators of “computing clusters” (networks of high-performance computers used for AI training). Operators must verify customer identities and business purposes, retain records and access logs, maintain the ability to shut down resources, and implement industry-standard best practices.

Violators can face significant penalties, including civil fines of up to 10% of the AI model’s training cost for a first violation and up to 30% for subsequent violations, as well as injunctive relief, damages, and attorneys’ fees.
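To put that scale in perspective (using assumed figures, not numbers taken from the bill): for a hypothetical model whose training compute cost $200 million, the maximum civil fine would be $20 million for a first violation and $60 million for each subsequent violation.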


Other AI-Related Bills Awaiting Approval

In addition to the AI Safety Act, California lawmakers have passed several other bills targeting specific AI-related risks across various sectors:

  • SB 942: Requires companies to provide AI detection tools to the public at no charge, enabling individuals to determine whether content was created or altered by AI.
  • AB 1836: Requires permission from a deceased person’s estate before using their voice or visual likeness to create an AI-generated “digital replica”.
  • AB 2602: Prohibits the use of digital replicas for the performance of personal or professional services in certain circumstances.
  • SB 976: Aims to protect children from social media addiction and requires social media platforms to turn off algorithmic curation of content for minors unless parents or guardians give permission. By default, minors would see a chronological feed of recent posts from accounts they follow. The bill also restricts notifications during school hours and between midnight and 6 a.m.
  • AB 1831: Criminalizes the creation of child pornography using generative AI.
  • AB 2355: Requires political campaigns to disclose the use of AI in advertising.
  • AB 2839: Targets creators of deceptive AI-generated content related to elections and empowers courts to order the removal of such content or impose damages.
  • AB 2655: Obligates large online platforms to label or block deepfakes during specified periods before and after an election and develop procedures for reporting such content.
  • SB 896: Requires government agencies to assess the risks associated with using generative AI and to disclose when AI technology is in use.
  • SB 892: Establishes standards for government agency contracts involving AI services.


Companies in the AI field should evaluate their exposure to these upcoming regulatory frameworks, as well as to frameworks already enacted in other US states, such as the Colorado AI Act.

Feel free to contact us if you have any questions regarding these new laws and their practical implications.

