What New York Can Learn from California’s AI Safety Bill Failure

In September 2024, California Governor Gavin Newsom vetoed the state's highly controversial AI Safety Bill (SB 1047). The bill was introduced with good intentions: regulate a fast-growing AI industry to ensure public safety is not compromised. But whatever the intentions, the bill's own provisions, coupled with a poor process, led to its downfall.

New York legislators are poised to consider similar legislation in Albany next session, and Tech:NYC will aim to support AI regulation that protects consumers while allowing the industry to continue driving economic growth for the state. To achieve this balance, we can take several lessons from California’s failed bill. 

At its core, California's AI Safety Bill would have required large AI developers to submit safety plans to the state attorney general, who could have held them liable if their AI models caused harm or posed imminent threats to public safety. It also would have required those companies to be able to shut down their AI models if the models started posing a danger (a so-called “kill switch”).

But the California AI Safety Bill failed for two main reasons:

  • The bill’s provisions were flawed:

    • The bill lacked specificity. It used vague language that left too much up to interpretation, such as prohibiting developers from releasing large AI models “if there is an unreasonable risk” that the technology “will cause or materially enable a critical harm.” 

    • The bill also would have discouraged larger AI developers from making their models publicly available, hindering the many startups and smaller AI companies that rely on those open models for their own work.

    • The bill only applied to the largest AI companies, and Gov. Newsom noted that a risk-based approach, in which levels of regulation would vary based on the level of risk involved with a particular application of AI, would be more effective.

    • The bill’s requirement of a kill switch proposes blocking the technology versus a nuanced approach to regulating it. It is also a feature that is extremely burdensome to incorporate, and proves a gap between how government regulators believe technology works, and how it actually works.

  • The bill’s process was flawed.

    • The bill lacked the backing of many of California's most prominent elected officials; Representative Nancy Pelosi and other members of Congress sent letters to Gov. Newsom urging him to veto it.

    • Even one of Gov. Newsom’s closest allies, San Francisco Mayor London Breed, opposed the bill, arguing it would undermine the city’s economy.

What can New York learn from this? 

As access to AI tools has grown in New York, lawmakers have introduced bills aimed at regulating the industry. For example, in 2021, the City Council passed Local Law 144, which requires disclosures of and bias audits for AI tools used in hiring processes. A similar proposal also exists today in the state legislature (A9315/S762).
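For context, the core calculation in a Local Law 144 bias audit is straightforward: the audit reports selection rates by demographic category and the resulting impact ratios (each category's selection rate divided by the highest category's selection rate). Here is a minimal sketch of that arithmetic in Python, using hypothetical category names and numbers:

    # Selection rates and impact ratios, the core figures a Local Law 144
    # bias audit reports. All category names and numbers are hypothetical.
    hypothetical_outcomes = {
        # category: (candidates_selected, candidates_assessed)
        "Category A": (48, 120),
        "Category B": (30, 100),
        "Category C": (18, 80),
    }

    selection_rates = {
        category: selected / assessed
        for category, (selected, assessed) in hypothetical_outcomes.items()
    }
    highest_rate = max(selection_rates.values())

    for category, rate in selection_rates.items():
        # Impact ratio: this category's rate relative to the most-selected category.
        print(f"{category}: selection rate {rate:.2f}, impact ratio {rate / highest_rate:.2f}")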

As New York considers other bills aimed at regulating the AI industry, such as the LOADinG Act, we find a few ingredients to be of particular importance:

  • A risk-based approach to AI legislation.

    • As Tech:NYC mentioned in our testimony at an Assembly hearing in Albany in September, countless businesses use AI in ways that either do not directly impact consumers or that actively benefit everyday New Yorkers. 

    • A risk-based approach varies levels of regulation based on the risk level of decisions involving AI. An AI-powered chatbot that answers customer questions about returning a sweater, for example, should not have to face the same regulations as an AI product that suggests treatment options for a serious disease.

  • A process for introducing laws that encourages collaboration and more open lines of communication between the private and public sectors and other stakeholders.

    • It’s imperative that elected officials work with the tech industry to create these laws, so that they can realistically be implemented and New York’s tech economy can continue to thrive.

    • Hearings like the September session in which Tech:NYC participated are a great first step in gathering feedback on these proposals.

  • The use of specific language to set clear definitions for terms such as “artificial intelligence,” “algorithmic discrimination,” and “consequential decision.”

To be clear, Tech:NYC and our members support AI regulation. We want regulation that spurs innovation, protects consumers, and helps the tech industry continue to drive economic growth. 

Other states are exploring bills that Tech:NYC is watching with particular interest. For example, there are ideas around:

  • Mitigating the risks associated with high-risk AI systems used in areas like education, employment, finance, and healthcare.

  • Regulating AI systems by ensuring transparency, accountability, and protections against algorithmic discrimination, while setting standards for developers and deployers of such systems.

It’s clear that New York is turning into a global leader in artificial intelligence. More than 1,000 AI-related companies are based in NYC (including 35 AI unicorns that have raised a total of $17 billion), and the City is home to over 40,000 AI professionals. Empire AI, a $400 million private-public investment to make New York the national leader in AI research and development, will continue to drive this growth. 

New York is in prime position to lead the AI race. Let’s keep the momentum going, learn from the mistakes of other state lawmakers, and move forward with more targeted, specific laws that advance our AI progress.
