California Governor Vetoes Controversial AI Safety Bill, Calls For Science-Based Solutions
California Governor Gavin Newsom vetoed an artificial intelligence safety bill following objections from the tech industry, which claimed the legislation could significantly hinder innovation.
The governor has sought the expertise of the world's leading GenAI specialists to help develop guardrails for deploying GenAI in California, grounded in a science-based analysis of frontier models and their risks. Newsom said he will work with the legislature on the issue in the upcoming session.
The bill, known as SB 1047, would have required developers of large artificial intelligence models, and the companies that provide the computing power behind them, to establish safeguards to prevent serious harm. It also proposed the creation of a state board to oversee the development of these models.
While AI has transformed the way text, photos, and videos are created from open-ended prompts, sparking excitement, it has also raised concerns about job losses, election interference, and the possibility that the technology could overpower humans in catastrophic ways.
According to supporters of the safety proposals, Newsom's decision is a setback for regulating a fast-evolving industry that currently lacks oversight. They argue that the bill, had it been signed, would have been a first-of-its-kind regulation of large-scale models and could have inspired further AI safety legislation across the country, the Associated Press reported.
Newsom said the bill focused solely on the largest and most expensive models and created a regulatory framework that could give the public a false sense of security about managing this rapidly evolving technology. He added that smaller models could prove just as dangerous, and that the bill risked hindering innovation.
"While well-intentioned, SB 1047 does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data," Newsom said in a statement. "Instead, the bill applies stringent standards to even the most basic functions — so long as a large system deploys it. I do not believe this is the best approach to protecting the public from real threats posed by the technology."
The bill's author, Democratic State Senator Scott Wiener, criticized the veto, saying it leaves California vulnerable as the companies building "extremely powerful technology face no binding restrictions." He added that "voluntary commitments from the industry are not enforceable and rarely work out well for the public," Reuters reported.
Wiener emphasized the importance of the legislation, arguing that it would have protected the public before advances in artificial intelligence render the technology uncontrollable.
Though Newsom agreed that the state should not "wait for a major catastrophe to occur before taking action to protect the public," he countered that he could not settle "for a solution that is not informed by an empirical trajectory analysis of AI systems and capabilities."
The bill had sparked a debate within the tech community concerning whether the legislation would drive AI innovation out of California or reduce threats posed by its unchecked advancement. Recently, more than a hundred Hollywood stars, including Shonda Rhimes and Mark Hamill, voiced their support for the AI safety bill.
In a YouGov poll conducted earlier this month, nearly 80% of voters said they supported the California AI safety bill.
OpenAI, the developer of ChatGPT, argued that legislation on this topic should be enacted at the federal level, suggesting that states should refrain from creating their own laws to prevent a patchwork of regulations.
Elon Musk, the billionaire tech entrepreneur and owner of xAI, which develops the Grok chatbot, backed the bill. He described supporting it as a difficult decision but maintained that any technology posing a risk to the public should be regulated, and that AI is no exception.
© Copyright IBTimes 2024. All rights reserved.