Rebecca Bauer-Kahan, a member of the California State Assembly, thinks it’s time for the Golden State to take the lead in regulating AI.
The most extensive government effort to date to regulate the rapidly developing technology was signed this week by President Joe Biden. The sweeping executive order was applauded by both businesses and consumer groups as a necessary first step, but its effects will be muted unless Congress takes more decisive action.
Enter California – home of the globe’s most powerful tech corporations and the Democrats who have become increasingly eager to go up against them.
Bauer-Kahan, a Democrat whose Bay Area district includes major tech hubs, is championing an effort to prohibit “algorithmic discrimination” — regulating automated decision tools that make a determination that may have a significant effect on a person’s life, such as in hiring, medical decisions, or parole rulings.
Although her bill did not pass this session, it drew widespread attention from tech firms and trade associations. We met with the assemblymember immediately after she returned from her trip to D.C. for Biden’s announcement to discuss the future of AI in California and the next steps for her legislation.
You recently returned from a trip to Washington, D.C., where you met with officials working on AI regulation at the White House. What do you think California’s place in this arena is after that trip and after Biden’s sweeping executive order?
I think [Biden’s directive] is highly complementary to what we’re doing. I’m confident that our efforts will help pave the way for a market for trustworthy AI.
Does Washington see California as a leader on this issue?
I don’t think Washington is taking its cues from California. They no doubt want to be the ones leading on this, but I don’t believe they can.
In my opinion, they would welcome states stepping in. It does make it harder for companies to operate across state lines because of the regulatory patchwork that results. I agree that a unified regulatory framework would be better, but no one has faith that Washington, D.C., will get there soon enough. AI is advancing at an astounding pace.
The scope of AI is enormous. Why did you choose to focus Assembly Bill 331 on the issue of algorithmic bias?
I have been on the privacy committee for my entire tenure, which is now five years, and I watched our former chair, Assemblymember [Edwin] Chau, bring forward his proposals, and every single one of them failed over the question of “What is AI?” and how to define it.
Since we can hopefully all agree that we shouldn’t be discriminating in these highly significant areas, I viewed this as the low-hanging fruit and the most pressing matter to address first.
I think a much broader effort on AI is much harder, because we don’t really know where it’s heading at this point.
However, your bill did not pass, despite its limited scope and obvious benefits. What does that mean for the future of rulemaking, if anything?
It was the appropriations committee that stopped it, which I consider a whole different animal. The bill received more support in policy committee than I had expected: all the Democrats supported it, including some moderates, and the Republicans serving on the relevant panels offered mostly positive remarks without actually endorsing it.
I still have faith that we can reach an agreement on this.
Tell us what you learned from this round’s failure and how you plan to apply it to the next.
It was a pretty terrible budget year. And if we want anything we do in this area to have real impact or be enforced effectively, we’ll need to invest in training and equipping our agencies to do work they aren’t already doing.
ChatGPT hadn’t even been released a year ago when we first started drafting this legislation.
A year later, I believe the public has a much clearer idea of how pervasive this issue is and that the government is falling behind.
Some would tell me, “This is premature,” but I would respond, “Fifty percent of these decisions are being made by AI today.” And that was a year ago. I would guess it’s even higher now. So, it’s imperative that we keep up with the pace of technological development.
Some of California’s tech elite have criticized Biden’s directive as an attempt to stifle innovation in a state with a well-deserved reputation as a tech hotbed. What is your reaction to that?
At the outset of my career, I worked as a regulatory attorney, and that experience taught me that it is industry’s responsibility to foster creativity and innovation, build sustainable businesses, and provide good jobs for Californians. And it is the government’s responsibility to safeguard people’s safety. I am convinced that we can achieve both of those goals by being strategic and methodical in our approach.
What happens if California doesn’t regulate AI?
My goal in introducing AB 331 was to encourage other states to follow California’s lead and adopt similar legislation, avoiding the current system of “patchwork regulation.”
But I worry that if we let other states go first, they will establish a norm that isn’t up to California’s standards or isn’t as flexible in permitting innovation as California’s.
Could it take ballot-initiative pressure, like the kind that produced California’s privacy regulations, to establish AI legislation in the state?
I said on a panel a week ago that I hope it doesn’t come to that. But I believe that if we do not act, it will, because I recently saw a poll showing that 90 percent of Californians want us to regulate AI.
If the electorate is there, they will take action even if we don’t. I have faith in that.
What will you do to regulate artificial intelligence in 2024?
I can assure you that AB 331 will be introduced again, but it won’t be the same bill.
We’re looking a lot at what is federally preempted and what isn’t.
I anticipate there will be a great deal of activity in this space.
You make an excellent point about the need for a healthy balance between innovation and regulation. I’ll tell you, the other day I had to help my son with his math, and I used ChatGPT to help me learn how to do it.
How successful was it?
It did, but it… it was not advanced math.