California Governor Gavin Newsom has issued an executive order that sets limits on artificial intelligence (AI) while promoting its use across state agencies. The order comes in response to recent federal actions against California-based AI startup Anthropic, which the Department of Defense (DoD) labeled a supply-chain risk. That designation effectively barred the company from competing for certain military contracts amid ongoing disputes over contract terms that restrict military use of its systems for domestic mass surveillance and fully autonomous weaponry.
The executive order empowers California to conduct its own reviews of such federal designations, allowing the state to make independent decisions about its business relationships with companies like Anthropic. The measure follows a judge's temporary injunction blocking the DoD's supply-chain risk designation, underscoring the contentious nature of federal oversight in the burgeoning AI sector.
Newsom's order seeks to establish guardrails for AI deployment by state employees while encouraging the integration of AI technologies. Among its mandates, the order requires state agencies to develop standards addressing AI's potential to generate content that could violate civil liberties or civil rights laws. Agencies are also tasked with updating California's Digital Strategy to identify how generative AI can enhance government transparency and accessibility for residents. Additionally, employees will receive guidance on watermarking AI-generated content, such as images and videos.
To further bolster the state's AI capabilities, the executive order encourages the development of tools to assist state employees; more than 20 departments are already working with a generative AI assistant called Poppy. The initiative comes as various state agencies explore AI for tasks ranging from employee assistance to supporting homeless individuals and businesses.
California has positioned itself as a leader in AI regulation, and many of the world's largest AI companies are based in the state. In a press release accompanying the executive order, Newsom's office criticized the Trump administration's approach to AI regulation, asserting that California aims to ensure AI technologies are implemented responsibly and do not fall into the hands of "bad actors."
In contrast, the Trump administration has sought to discourage state-level AI regulation, promoting a more lenient federal framework focused on accelerating bureaucratic processes, including decisions regarding Medicare. That framework has faced criticism for lacking provisions to address bias, discrimination, and civil rights concerns.
This executive order marks Newsom's second action specifically addressing AI, following a 2023 order focused on generative AI technologies like ChatGPT and Midjourney. As the artificial intelligence landscape continues to evolve, Newsom's actions are under scrutiny from both labor unions and technology sector stakeholders. Union leaders have indicated they may not support his presidential aspirations without clearer commitments to protecting workers from potential job displacement caused by AI.
As California navigates these complex issues, the state’s approach to regulating AI may serve as a model for other jurisdictions grappling with similar challenges. With the increasing integration of AI into both public and private sectors, the outcomes of these initiatives could significantly influence the future trajectory of technological governance across the nation.
See also
OpenAI’s Rogue AI Safeguards: Decoding the 2025 Safety Revolution
US AI Developments in 2025 Set Stage for 2026 Compliance Challenges and Strategies
Trump Drafts Executive Order to Block State AI Regulations, Centralizing Authority Under Federal Control
California Court Rules AI Misuse Heightens Lawyer’s Responsibilities in Noland Case
Policymakers Urged to Establish Comprehensive Regulations for AI in Mental Health