As October drew to a close, California State Senator Scott Wiener hosted his annual pumpkin carving event, where he discussed his legislative achievements for 2025. This year's gathering drew both constituents and anti-transgender protesters, underscoring the contentious atmosphere surrounding his political activities.
Among his notable accomplishments was the passage of SB 53, the Transparency in Frontier Artificial Intelligence Act, which was signed into law by Governor Gavin Newsom just a month prior. This legislation makes California the first state to regulate frontier AI, a category that includes some of the most advanced AI models, such as ChatGPT and Claude.
Because California is home to nearly all major AI companies, the law functions as a de facto national standard for AI regulation in the United States, and it represents the closest the country has come to federal-level rules on frontier AI.
The law mandates that companies like OpenAI, Anthropic, and Google DeepMind must publish their safety frameworks, report critical incidents, and ensure protections for whistleblowers who raise concerns about potential catastrophic risks. As Wiener embarks on a campaign for Congress, questions arise within the AI community: will the architect of California’s AI law advocate for similar regulations in Washington, D.C.?
Wiener acknowledges the challenges of expanding his AI regulation efforts on a national level, expressing hope that SB 53 could serve as a federal standard. “Congress has struggled with strong comprehensive technology regulation,” he stated. “I hope that changes. I hope to be a part of that change.”
However, AI regulation is not explicitly listed among the priorities on his campaign website, which instead emphasizes issues like democracy, housing, healthcare, public transportation, and clean energy.
The Evolution of AI Regulation in California
The journey to SB 53 was not without its difficulties. Originally, the bill was introduced as SB 1047, known as the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act. According to Seve Christian, then-legislative director in Wiener’s office, SB 1047 contained more stringent requirements for AI companies.
Christian noted, “SB 53 is just a transparency measure to say, ‘We are going to believe you when you say that you are doing your homework.'” The original bill garnered support from various AI safety advocates, including notable figures like Geoffrey Hinton, Stuart Russell, Yoshua Bengio, and Elon Musk, who argued that it would introduce clear safety standards.
However, significant opposition arose from major tech companies such as OpenAI, Meta, and Microsoft, who feared that strict regulations could stifle U.S. innovation in AI, especially in the competitive landscape against China. Ultimately, SB 1047 was passed but vetoed by Newsom in 2024.
Key Differences Between SB 1047 and SB 53
| SB 1047 | SB 53 |
| --- | --- |
| Applied to companies whose models cost over $100 million to train. | Applies to companies with over $500 million in annual revenue. |
| Required developers to build "kill-switches" into their AI models. | Excludes the "kill-switch" requirement. |
| Imposed liability on companies responsible for "mass casualty events" causing over $500 million in damages. | Removes the liability provision. |
| Included third-party audits. | Excludes third-party audits. |
| Included pre-deployment testing. | Excludes pre-deployment testing. |
The transition from SB 1047 to SB 53 illustrates the significant compromise required even in a state like California, where many AI companies are based. If California struggles to pass even transparency requirements, the prospects for comprehensive federal regulation appear dim.
According to Riana Pfefferkorn, a policy fellow at Stanford’s Institute for Human-Centered Artificial Intelligence, the landscape for federal AI regulation is complex. “On one hand, there are attempts to regulate specific applications of AI; on the other, there is a push for broader, comprehensive regulations,” she commented.
Pfefferkorn also noted that the political climate complicates matters, with many lawmakers fearing that regulation might hinder innovation and national security. As the debate continues, the effectiveness of SB 53 may serve as a litmus test for future AI legislation on a national level.
As Wiener moves forward in his political career, the AI community will be closely watching whether his influence will extend from California to the halls of Congress.