
California’s Newsom Orders Review of AI Contract Rules Amid Anthropic Controversy

California Governor Gavin Newsom signs an executive order governing state AI usage, strengthening ethical guidelines and tool vetting amid a federal challenge to Anthropic's contracts.

This story was originally published by CalMatters.

California Governor Gavin Newsom signed an executive order on Monday establishing a framework for the use of artificial intelligence (AI) by state agencies. The order comes after the U.S. Department of Defense designated San Francisco-based AI company Anthropic a "supply-chain risk," a move that followed a dispute over contract clauses restricting the military's use of Anthropic systems for domestic mass surveillance and fully autonomous weaponry. The designation effectively limits the company's ability to compete for certain federal contracts, though a recent court ruling has temporarily blocked it.

Newsom's order reflects growing concern about the implications of AI technology: it seeks to guide state employees in deploying AI tools ethically while promoting their accelerated use. California, home to many leading AI firms, is already a frontrunner in AI regulation.

The order directs state agencies to develop standards related to AI’s potential to generate child sexual abuse material, infringe upon civil liberties, and violate discrimination laws. It aims to ensure that state employees have access to “vetted GenAI tools” and requires an update to the State Digital Strategy to explore ways generative AI can enhance government transparency and accessibility of services for Californians. Additionally, the order mandates guidance on watermarking AI-generated imagery and videos.

The initiative comes as more than 20 state departments work on Poppy, a generative AI assistant for state employees, while half a dozen other agencies experiment with AI to support social initiatives such as homelessness assistance and business support. Newsom's office has said the Trump administration has rolled back federal protective measures related to AI, prompting California to take a more proactive stance.

“Unlike the Trump administration, California remains committed to ensuring that AI solutions adopted and deployed by the state cannot be misused by bad actors,” the governor’s office stated in a press release detailing the executive order.

At the federal level, the Trump administration has issued executive orders that discourage state-level AI regulation and encourage federal agencies to use AI to streamline processes and reduce regulatory burdens. The White House also recently sent Congress an AI policy framework advocating a relaxed regulatory approach that does not address issues such as bias and discrimination.

This is not Newsom's first foray into AI governance: in 2023, he signed an executive order focused specifically on generative AI, the technology behind applications like ChatGPT and Midjourney. That order similarly called for increased AI use by state agencies alongside appropriate safeguards.

The governor’s approach to AI regulation is closely monitored by both union leaders and technology donors, especially as he faces increasing pressure for worker protection measures related to AI technology ahead of the upcoming midterm elections. Union representatives have stated that they will not support Newsom’s potential presidential run without strong commitments to worker rights in the face of advancing technology.

As California continues to shape its AI regulatory landscape, the implications of such measures could reverberate beyond state lines, influencing national discussions on AI governance and ethical practices within the rapidly evolving tech sector.

Written By: The AiPressa Staff

© 2025 AIPressa · Part of Buzzora Media · All rights reserved.