California State Senator Scott Wiener is once again at the center of the fight over how to regulate artificial intelligence. After his high-profile but ultimately vetoed bill SB 1047 last year, Wiener has returned with a new proposal, Senate Bill 53 — aimed at compelling the largest AI companies to publicly disclose safety risks. Unlike its predecessor, SB 53 has drawn less fire from Silicon Valley and now awaits Governor Gavin Newsom’s signature or veto in the coming weeks.
SB 53 would require AI giants such as OpenAI, Anthropic, Google, and xAI — companies earning more than $500 million annually — to publish safety reports on their most powerful models. These reports would cover how systems are tested for catastrophic risks, such as aiding the creation of bioweapons or enabling large-scale cyberattacks. The measure also sets up secure channels for employees to raise safety concerns directly with state officials and launches a state-operated computing cluster, CalCompute, to give researchers access to AI infrastructure outside of Big Tech.
This time around, some major players are cautiously supportive. Anthropic endorsed the bill outright, and a Meta spokesperson called SB 53 “a step in the right direction” toward balancing guardrails with innovation. Even critics like OpenAI and venture firm Andreessen Horowitz have stopped short of the scorched-earth campaigns that helped sink SB 1047, though they continue to argue that AI regulation should be handled at the federal level.
Wiener frames the new legislation as a pragmatic way to ensure “safe innovation” without choking off California’s tech engine. He says years of watching Big Tech lobby against federal oversight convinced him that states must lead. “I’m not anti-tech,” he told TechCrunch, “but this is an industry we shouldn’t trust to regulate itself.” By focusing narrowly on the gravest risks rather than broad liability, SB 53 reflects lessons learned from his earlier effort.
Governor Newsom’s decision could set a national precedent. If signed, SB 53 would be among the first state laws to impose transparency requirements on cutting-edge AI labs — a move that could influence how the rest of the country approaches AI safety. For Wiener, representing San Francisco at the epicenter of AI innovation, it’s about striking a delicate balance: keeping California at the forefront of technology while making sure powerful AI systems don’t put the public at risk.
Source: TechCrunch
