California Governor Gavin Newsom signed executive order N-5-26 on March 30, aimed at regulating the use of artificial intelligence (AI) in government contracts. The order emphasizes public safety and addresses concerns about the potential misuse of AI technologies.
The executive order mandates that entities seeking to do business with the California government disclose their AI usage and policies to mitigate risks associated with the distribution of illegal content, violations of civil rights, discrimination, and harmful bias. By focusing on procurement, the state can control which AI models may be purchased or contracted, vetting them for safety before they are deployed.
As outlined in the order, the initiative aims to prevent the use of technologies that could disseminate illegal content, including “child sexual abuse material and non-consensual intimate imagery.” The order also requires that AI systems be monitored continuously for errors and biases, even after state approval, to ensure long-term accuracy and reliability.
In a statement issued by Newsom’s office, it was highlighted that California is home to 33 of the top 50 private AI companies globally and leads the nation in AI job opportunities and funding. This executive action comes on the heels of Newsom’s February 2025 launch of Engaged California, the nation’s first digital democracy platform designed to foster constructive public dialogue on significant issues. Initially established as a pilot program in response to the Eaton and Palisades wildfires in Los Angeles, the platform has evolved into a community space for residents to engage with government services and policies.
“After years of development, I am excited to launch this new pilot program to help create a town hall for the modern era –– where Californians share their perspectives, concerns and ideas geared toward finding real solutions,” Newsom stated during the platform’s launch.
The executive order specifies that Engaged California will serve as a tool to assess statewide responses to AI, enabling legislators to gauge public sentiment regarding AI’s impact in various sectors, such as the workforce. This system allows Californians to provide direct feedback on their experiences and input regarding AI usage in government operations.
Moreover, the order calls for the development of an AI-powered website or application pilot, designed to streamline access to organized government services based on life events, including “disaster relief, starting a business, and finding a job.”
To combat misinformation, the order also requires government departments and agencies to watermark AI-generated videos and images. This measure aims to increase public awareness and understanding of AI-generated content and the capabilities of generative AI technologies.
“California leads in AI, and we’re going to use every tool we have to ensure companies protect people’s rights, not exploit them or put them in harm’s way,” Newsom added. “While others in Washington are designing policy and creating contracts in the shadow of misuse, we’re focused on doing this the right way.”
As California continues to spearhead advancements in AI, the implications of this executive order extend beyond state contracts. By establishing a framework for responsible AI usage, Newsom’s administration aims to foster a safer digital landscape, balancing innovation with ethical considerations. This proactive approach may set a precedent for other states looking to navigate the complexities of AI in governance.