A controversial bill to enforce safety standards for large-scale artificial intelligence models has now passed the California State Assembly by a vote of 45 to 11. Following a 32-1 Senate vote in May, SB-1047 faces just one more procedural vote in the Senate before heading to Governor Gavin Newsom's desk.
As we've previously explored in detail, SB-1047 requires developers of AI models to implement a “kill switch” that can be activated if a model introduces “novel threats to public safety,” particularly if it operates “with limited human oversight, intervention, or monitoring.” Some have criticized the bill for focusing on outlandish risks from an imagined future AI rather than the real, present-day harms of AI use cases like deepfakes or misinformation.
In announcing the bill's passage on Wednesday, bill sponsor and state Senator Scott Wiener cited support from AI pioneers such as Geoffrey Hinton and Yoshua Bengio (both of whom also signed a statement last year warning of the “risk of extinction” posed by rapidly evolving AI technology).
In a recent Fortune editorial, Bengio said the bill “contains the bare minimum for effective regulation of groundbreaking AI models” and that, by focusing on large models (those costing over $100 million to train), it would avoid burdening smaller startups.
“We cannot allow companies to grade their own homework and simply make nice-sounding assurances,” Bengio wrote. “We do not accept this in other technologies such as pharmaceuticals, aerospace and food safety. Why should AI be treated any differently?”
But in a separate editorial in Fortune magazine earlier this month, Fei-Fei Li, a Stanford computer science professor and AI expert, argued that the “well-intentioned” legislation “will have significant unintended consequences, not just for California but for the entire country.”
Because the bill holds a model's original developer liable even for modified versions of that model, it will “force developers to back off and act defensively,” Li argued. That, she wrote, will chill the open-source sharing of AI weights and models, with significant consequences for academic research.
What will Newsom do?
A group of California business leaders urged Newsom in an open letter Wednesday to veto the “fundamentally flawed” bill, which they said “regulates model development, not abuse.” The bill would “impose burdensome compliance costs” and “slow down investment and innovation through regulatory ambiguity,” the group wrote.
If the Senate approves the Assembly's version, as expected, Newsom will have until Sept. 30 to decide whether to sign the bill into law. If he vetoes it, the Legislature could override him with a two-thirds vote in each chamber (a real possibility, given the bill's overwhelming margins of passage).
At a symposium at UC Berkeley in May, Newsom expressed his concern: “If we regulate too much, if we indulge too much, if we chase a shiny object, we could put ourselves in a dangerous position.”
At the same time, Newsom said those worries about overregulation are balanced by the concerns he's hearing from leaders in the AI industry. “When the inventors of this technology, the godmothers and fathers, say, 'Help, you need to regulate us,' that's a very different environment,” he said at the symposium. “When they rush to educate people and basically say, 'We don't really know what we've done, but you need to do something about it,' that's an interesting environment.”