
GSA Proposes Draft AI Contract Terms Granting Broad Usage Rights to Federal Agencies

GSA proposes new AI contract terms, mandating irrevocable usage rights for federal agencies and neutrality in outputs, amid scrutiny of Anthropic’s Claude AI.

The General Services Administration (GSA) has proposed new terms and conditions for artificial intelligence (AI) systems, aiming to enhance the oversight and procurement rules for AI technologies sold to the federal government. This initiative is part of a broader conversation about how government entities acquire and deploy AI, with a draft guidance issued by GSA’s Federal Acquisition Service outlining contract provisions for AI models, services, and related tools acquired through federal channels. Comments on the draft are due by March 20.

This proposal emerges amid ongoing tensions surrounding the use of Anthropic’s Claude AI, which the Trump administration ordered federal agencies to stop using following restrictions imposed by the Department of War. The GSA has also terminated Anthropic’s OneGov agreement. Federal Acquisition Service Commissioner Josh Gruenbaum, a previous Wash100 awardee, said the decision effectively ends the availability of the company’s products to the executive, legislative, and judicial branches through GSA’s pre-negotiated contracts, as reported by Reuters.

Under the proposed terms, contractors would be required to grant the U.S. government an irrevocable license to utilize AI systems delivered through federal contracts. This license would allow government agencies to use the technology for any lawful purpose, thereby preventing vendors from imposing contractual or technical restrictions on legitimate federal usage. This provision is intended to ensure that federal agencies maintain the flexibility to deploy AI capabilities across various missions and programs once they acquire the technology.

Another significant aspect of the proposal is the establishment of neutrality requirements for AI outputs. Contractors would be mandated to ensure that their systems do not embed partisan or ideological judgments in the generated outputs, thus producing objective responses in government contexts. This requirement reflects an emphasis on maintaining impartiality in AI-generated information, particularly in sensitive governmental applications.

The draft guidance also outlines transparency and disclosure requirements for AI vendors seeking federal contracts. Contractors would need to provide detailed information on model training methodologies, system limitations, and any modifications made to comply with regulatory frameworks outside the U.S. Additionally, the proposal calls for safeguards to protect government data, barring vendors from using federal data for model training without prior authorization. These measures are part of GSA’s efforts to reinforce oversight of AI technologies employed across federal agencies.

As AI technology continues to evolve, the GSA’s proposed framework seeks to address both the practical and ethical challenges associated with its deployment in government settings. The upcoming 2026 Artificial Intelligence Summit, scheduled for March 18, will gather experts to discuss the changing landscape of AI and the implications of such procurement policies.

In parallel, the Senate recently confirmed Lt. Gen. Joshua Rudd as the new director of the National Security Agency and commander of U.S. Cyber Command in a 71-29 vote. The leadership transition underscores the growing emphasis on cybersecurity strategy and national defense priorities, and discussions at forums such as the Cyber Summit will remain important venues for shaping responses to emerging cyberthreats as government and industry leaders navigate these challenges.

In a related development, Mangala Kuppa has been appointed as the chief information officer of the Department of Labor, further illustrating the government’s commitment to technological modernization and the adoption of emerging technologies, including AI. With over 25 years of experience in both public and private sectors, Kuppa has a track record of leading complex technology initiatives and enhancing cybersecurity resilience.

As the federal government refines its approach to AI procurement and usage, the proposed regulations could significantly influence how agencies interact with AI technologies going forward. Ensuring transparency, safeguarding data, and maintaining neutrality in AI outputs will be paramount as the government seeks to leverage AI responsibly in fulfilling its missions.
