California’s latest attempt to rein in artificial intelligence is dividing the researchers and businesses building the technology, while exposing deeper fears about its potential risks.
At the heart of the dispute is a measure from state Sen. Scott Wiener, a San Francisco Democrat and potential successor to Nancy Pelosi. The bill would require companies to test their largest AI models for safety threats, such as cyberattacks or the creation of bioweapons, and would authorize the attorney general to sue them for unspecified damages if their products cause harm.
The proposal is one of hundreds of AI measures circulating in state legislatures this year as a deeply divided Congress stays on the sidelines. Despite Sacramento’s growing reputation as a national leader in tech regulation, it is splitting the state’s most prominent tech industry figures.
Businesses, meanwhile, are jockeying to shape legislation that could determine their success or failure as the AI market balloons into a multibillion-dollar industry.
Lobbyists for major tech companies like Google and Meta have argued the bill would be too cumbersome and would slow innovation, reflecting a broader aversion to government control. Others share the views of prominent venture investor Marc Andreessen, whose “effective accelerationist” agenda champions unfettered technological progress.
On the other side is a motley crew of nonprofits, venture capital firms, and smaller AI developers, several of them funded by billionaire ex-Facebook executive Dustin Moskovitz. Some, like Moskovitz, belong to the so-called effective altruism movement, which seeks to maximize the public good through data and reason, and which regards AI as a threat as grave as nuclear war.
The split exemplifies the tech industry’s deep divisions over the potential dangers and future of artificial intelligence.
“There are various opinions within the tech community — some quite responsible and some insane,” said Chris Larsen, a cryptocurrency tycoon who has spent heavily in recent years to influence San Francisco’s government. Larsen signed an open letter last year, alongside prominent figures including Elon Musk, calling for a pause on large-scale AI experiments over safety concerns. “It saddens me that there is so much resistance to any AI safety measures.”
Critics of Wiener’s measure say it would hobble a nascent industry, especially startups that are just getting off the ground and lack the resources to navigate such red tape.
“Regardless of the intention, the end result will be to make other companies struggle to compete with OpenAI,” said Brian Chau, executive director of the Alliance for the Future, a new Washington group working to stop lawmakers from overregulating the technology. Chau said his organization’s ties to the effective accelerationists position it to help quell “the escalating panic around AI.”
Wiener and his allies reject that claim, citing backing from smaller companies such as Imbue, an AI startup that recently received $200 million to give computers what it calls “human values.”
Supporters argue the bill is a reasonable precaution against AI’s gravest dangers, such as hostile attacks on power grids or a rogue algorithm bent on wiping humanity off the face of the earth. Companies could put profit before public welfare, they say, so the government should step in now to constrain them.
“Right now, we’re constructing an industry. This is not an app that will deliver food,” said Matt Boulos, head of policy and safety at Imbue. “In order to prevent the risks associated with AI from coming to fruition, we must establish incentives that discourage their development.”
If the bill reaches Gov. Gavin Newsom’s desk, he will have to weigh these opposing viewpoints against his past sympathy for tech industry concerns about overregulation. As is his practice with pending legislation, Newsom’s office declined to comment.
Major corporations like Google, which have built internal safeguards while refining their products, argue that such self-regulation is the better course. Lawmakers like Wiener counter that the state should step in and strike a prudent balance, one that encourages AI’s advancement while preventing its unchecked proliferation.
There will always be an array of ideas about how to manage a problem, said Wiener, who has dealt with the tech sector for many years as a policymaker. Pointing to leaders like OpenAI’s Sam Altman, he added that it is a bad look to oppose safety legislation, especially when so many prominent voices in the AI space have been vocal in inviting regulation.
Several of the groups most vocal in supporting the bill have also played key roles in shaping it. One of them, the nonprofit Center for AI Safety, pushed a public, one-sentence call to action last year: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
Among the signatories were Democratic lawmakers like Rep. Ted Lieu of California and business magnates like Bill Gates and Moskovitz, whose philanthropic organization Open Philanthropy has funded AI safety groups such as the Center for AI Safety.
“The letter basically solved a bit of a collective action problem of: a lot of people believe this but they’re afraid to say it,” said CAIS Director Dan Hendrycks. It turned out that more people were worried about the issue than anyone had anticipated.