The General Services Administration (GSA) has proposed new terms and conditions for artificial intelligence (AI) systems, aiming to strengthen oversight and procurement rules for AI technologies sold to the federal government. The initiative is part of a broader conversation about how government entities acquire and deploy AI: a draft guidance issued by GSA’s Federal Acquisition Service outlines contract provisions for AI models, services, and related tools acquired through federal channels. Comments on the draft are due by March 20.
The proposal emerges amid ongoing tensions surrounding the use of Anthropic’s Claude AI, which the Trump administration ordered federal agencies to stop using due to restrictions imposed by the Department of War. The GSA has also terminated Anthropic’s OneGov agreement. Federal Acquisition Service Commissioner Josh Gruenbaum, a previous Wash100 awardee, stated that the decision effectively ends the company’s availability to the executive, legislative, and judicial branches through GSA’s pre-negotiated contracts, as reported by Reuters.
Under the proposed terms, contractors would be required to grant the U.S. government an irrevocable license to use AI systems delivered through federal contracts. The license would allow government agencies to use the technology for any lawful purpose, preventing vendors from imposing contractual or technical restrictions on legitimate federal usage. This provision is intended to ensure that federal agencies retain the flexibility to deploy AI capabilities across missions and programs once they acquire the technology.
Another significant aspect of the proposal is the establishment of neutrality requirements for AI outputs. Contractors would be mandated to ensure that their systems do not embed partisan or ideological judgments in the generated outputs, thus producing objective responses in government contexts. This requirement reflects an emphasis on maintaining impartiality in AI-generated information, particularly in sensitive governmental applications.
The draft guidance also outlines transparency and disclosure requirements for AI vendors seeking federal contracts. Contractors would need to provide detailed information on model training methodologies, system limitations, and any modifications made to comply with regulatory frameworks outside the U.S. The proposal also calls for safeguards to protect government data, barring vendors from using federal data for model training without prior authorization. These measures are part of GSA’s effort to reinforce oversight of AI technologies employed across federal agencies.
As AI technology continues to evolve, the GSA’s proposed framework seeks to address both the practical and ethical challenges associated with its deployment in government settings. The upcoming 2026 Artificial Intelligence Summit, scheduled for March 18, will gather experts to discuss the changing landscape of AI and the implications of such procurement policies.
In parallel, the Senate recently confirmed Lt. Gen. Joshua Rudd as the new director of the National Security Agency and commander of U.S. Cyber Command in a 71-29 vote. The leadership transition underscores the growing emphasis on cybersecurity strategy and national defense priorities, and the need for collaborative forums to address emerging cyberthreats. As government and industry leaders navigate these challenges, discussions at events like the Cyber Summit will remain critical for shaping effective responses.
In a related development, Mangala Kuppa has been appointed as the chief information officer of the Department of Labor, further illustrating the government’s commitment to technological modernization and the adoption of emerging technologies, including AI. With over 25 years of experience in both public and private sectors, Kuppa has a track record of leading complex technology initiatives and enhancing cybersecurity resilience.
As the federal government refines its approach to AI procurement and usage, the proposed regulations could significantly influence how agencies interact with AI technologies going forward. Ensuring transparency, safeguarding data, and maintaining neutrality in AI outputs will be paramount as the government seeks to leverage AI responsibly in fulfilling its missions.