California is poised to reassess federal supply-chain-risk designations before doing business with affected companies, a move sparked by the Department of Defense's recent classification of San Francisco-based AI firm Anthropic. Governor Gavin Newsom signed an executive order on Monday mandating state review of such federal designations before any decision is made about business relations with the companies involved.
The initiative follows a dispute between Anthropic and the Defense Department over contract terms that prohibit military use of the company's AI systems for domestic mass surveillance and fully autonomous weaponry. By labeling Anthropic a supply-chain risk, the Defense Department effectively barred the company from competing for certain military contracts and subcontracts. A court recently issued a temporary injunction halting enforcement of the designation.
Newsom’s executive order aims to provide guidelines on the utilization of AI technologies by state employees while simultaneously promoting their adoption across various departments. California, home to many of the largest AI companies and a leading state in AI regulatory measures, is taking strides to ensure responsible AI deployment.
The order lays out several mandates for state agencies and coincides with the development of Poppy, a generative AI assistant built for state employees. More than 20 California departments and agencies are involved in that project, and several others are already deploying AI to assist employees, support services for homeless individuals, and serve businesses. With state courts and city governments increasingly turning to AI, the timing of the executive order is significant.
Newsom’s office criticized the Trump administration for rolling back protections against potential AI harms. “Unlike the Trump administration, California remains committed to ensuring that AI solutions adopted and deployed by California… cannot be misused by bad actors,” the governor’s office said in a press release announcing the order.
At the federal level, Trump has signed executive orders aimed at discouraging state-level AI regulation while pushing federal agencies to adopt the technology to streamline processes, including those related to Medicare. Last month, the White House unveiled an AI policy framework the president hopes Congress will take up. The proposal advocates a light regulatory touch and does not address issues such as bias, discrimination, and civil rights.
This latest executive order is the second Newsom has signed concerning AI. In 2023, he signed an order focused specifically on generative AI, the technology behind platforms like ChatGPT and Midjourney. That order likewise called for greater AI integration within state agencies while instituting safeguards.
As California moves forward with its AI initiatives, union leaders and tech industry stakeholders are watching closely. Union representatives have said they will not endorse a Newsom presidential candidacy without additional worker protections around AI. Meanwhile, major tech investors are actively seeking to influence California's political landscape ahead of the midterm elections.
As the state navigates the complexities of AI regulation and deployment, the interplay between federal and state policies will be crucial in shaping the future landscape of artificial intelligence in the United States.




















































