The writer is a general partner at Andreessen Horowitz
Artificial intelligence has given the US technology sector a shot of adrenaline and the world a jolt of excitement. Every day, we’re interacting with tools that would’ve looked like science fiction only a few years ago. Generative AI can do everything from providing your child with a personalised tutor to designing novel medicines.
Unfortunately, all this is at risk thanks to a new bill in California known as SB-1047, which threatens to stifle AI development. If it passes, it will have a chilling effect not only on AI investment but also on the entrepreneurship that drives technological advancement around the world.
More than 600 AI bills have been proposed by state legislators in the US this year, but SB-1047 goes further than most. It requires developers to certify that their AI models cannot be used to cause harm. This is an unattainable requirement.
AI models can be modified endlessly. Open-source models allow the public to access and build on their source code, meaning developers could bear responsibility for changes made by third parties. It is impossible to guarantee that no version of an AI model could ever cause harm. The bill rests on a fundamental misunderstanding of the technology.
The bill also claims to target only large tech companies. However, it uses a $100mn “training cost” threshold to determine a company’s size. AI development costs run to billions, meaning this relatively low bar could ensnare start-ups. There is no clear definition of training costs, either. This is especially problematic as we are still in the early research phase of AI, when terms such as pre-training and post-training lack universal definitions.
The California state senate has already approved a version of the bill. In August, this poorly considered, deeply disruptive proposal could move to the desk of Governor Gavin Newsom, who could sign it into law.
The AI community has tried to sound the alarm. More than 100 AI leaders have signed an open letter opposing the bill.
Yann LeCun, Meta’s chief AI scientist, has warned that the bill’s “cascading liability clauses would make it very risky to open-source AI platforms . . . Meta will be fine, but AI start-ups will just die.”
As an AI investor, I have already witnessed the ill effects of the bill first hand. Promising open-source start-ups are considering relocating overseas. We risk a brain drain, with top talent fleeing to more accommodating jurisdictions.
The global implications are stark. While China’s policymakers collaborate closely with AI researchers, US legislators, however well intentioned, are writing laws without serious input from AI’s expert developers, investors or researchers. It is akin to writing medical regulations without consulting doctors.
The consequences could be severe and widespread. As US AI development stalls, global competitors will seize the advantage and groundbreaking ideas will be stifled before they have a chance to develop.
We need rational AI regulation. We should focus on misuses of AI models by enforcing greater penalties for AI-enabled crimes, such as non-consensual deepfakes. We could also establish industry-wide standards for transparency around how large AI models are trained, and fund security and safety research at public universities.
These measures would promote responsible development while preserving the flexibility needed for breakthroughs.
California’s well-intentioned but misinformed anti-AI bill threatens to undermine the US tech industry just as the future of the technology is at a crossroads. We need our leaders to recognise that the time for informed, united action on AI regulation is now.