In a significant move towards regulating artificial intelligence, the California State Assembly and Senate have passed the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047). This groundbreaking legislation, one of the first of its kind in the United States, now awaits Governor Gavin Newsom’s signature to become law.

Key Provisions of SB 1047

The bill mandates that AI companies operating in California implement several safety measures before training advanced foundation models. These precautions include:

1. Developing capabilities to quickly and fully shut down AI models

2. Ensuring protection against “unsafe post-training modifications”

3. Maintaining testing procedures to evaluate potential “critical harm” risks

Senator Scott Wiener, the bill’s primary author, described SB 1047 as a “highly reasonable” piece of legislation. He emphasized that the bill simply requires large AI labs to follow through on their existing commitments to test their models for catastrophic safety risks.

Industry Response and Amendments

The bill has sparked intense debate within Silicon Valley and beyond. Notable critics include major AI companies like OpenAI and Anthropic, as well as politicians such as Representatives Zoe Lofgren and Nancy Pelosi. The California Chamber of Commerce has also voiced concerns.

Critics argued that the bill’s initial focus on catastrophic harms could unduly burden small, open-source AI developers. In response to these concerns, several amendments were made to the original bill:

– Replacement of potential criminal penalties with civil ones

– Narrowing of enforcement powers granted to California’s attorney general

– Adjustments to requirements for joining the “Board of Frontier Models” created by the bill

Shifting Stances and Ongoing Debate

Despite initial opposition, some companies have softened their stance following the amendments. Anthropic CEO Dario Amodei stated in a letter to Governor Newsom that the bill was “substantially improved” and that its benefits likely outweigh its costs.

OpenAI, however, has maintained its opposition. The company’s chief strategy officer, Jason Kwon, reiterated concerns in a recent letter to Senator Wiener.

Next Steps

The AI safety bill now moves to Governor Newsom’s desk. He has until the end of September to decide whether to sign it into law or veto it.

Implications for the AI Industry

If signed into law, SB 1047 would represent one of the first significant regulations of artificial intelligence in the United States. It could set a precedent for other states and potentially influence federal policy on AI regulation.

The bill’s passage through the California legislature highlights the growing concern among policymakers about the potential risks associated with advanced AI systems. It also underscores the challenge of balancing innovation with safety in the rapidly evolving field of artificial intelligence.

As the AI industry continues to grow and evolve, the outcome of this legislation could have far-reaching implications for how companies develop and deploy advanced AI models, not just in California but potentially across the country.

With the deadline for Governor Newsom’s decision approaching, all eyes are on California as it stands poised to potentially usher in a new era of AI regulation and safety measures.
