
US Government Unveils Stricter AI Contract Rules Requiring “Any Lawful Use” Access

US government mandates AI firms like Anthropic grant irrevocable “any lawful use” licenses for federal contracts amid rising scrutiny and procurement standards.

The United States government is poised to implement stricter regulations for companies looking to secure civilian artificial intelligence (AI) contracts, mandating that these firms allow “any lawful” use of their models by federal agencies. This initiative emerges amid a growing conflict between the Pentagon and AI company Anthropic concerning limitations on the deployment of its technology.

According to a report by the Financial Times, the proposed guidelines require AI providers vying for federal contracts to grant the government an irrevocable license to use their systems for all legal purposes. This draft policy, developed by the General Services Administration (GSA), is part of a broader effort to standardize and enhance the procurement of AI services across federal agencies.

This development follows the Pentagon’s designation of Anthropic as a “supply-chain risk,” which effectively bars government contractors from using the company’s AI models for military-related projects. The designation caps months of disagreement between the Defense Department and Anthropic, which has advocated for safeguards limiting specific uses of its technology.

Josh Gruenbaum, commissioner of the Federal Acquisition Service, a unit within the GSA responsible for software procurement, confirmed that measures have already been enacted against Anthropic. Gruenbaum stated, “It would be irresponsible to the American people and dangerous to our nation for GSA to maintain a business relationship with Anthropic.” This decision effectively removes Anthropic’s AI tools from a procurement framework utilized by the executive, legislative, and judicial branches of the US government.

While the GSA’s draft framework primarily applies to civilian contracts, the Financial Times report indicated that this approach mirrors similar restrictions the Pentagon is considering for military procurement. The policy shift underscores the government’s intention to guarantee that AI systems procured with taxpayer money remain entirely accessible for official use.

In addition to access rights, the draft guidelines introduce requirements concerning neutrality and transparency in AI systems. Contractors would be required to ensure that their models do not intentionally incorporate partisan or ideological biases in their outputs, a provision that reflects the government’s effort to balance growing reliance on AI with accountability.

Moreover, companies may be required to disclose any modifications made to their AI systems to comply with non-US regulatory frameworks or commercial compliance standards. This measure aims to enhance transparency regarding the development and governance of AI models, addressing concerns about how external influences may shape their functionality.

The backdrop to these regulatory changes is the ongoing scrutiny of AI technologies in both civilian and military contexts. With growing public and governmental attention on the ethical implications of AI, the government’s new guidelines signal a proactive stance in ensuring that AI deployment aligns with national security interests and public trust.

As the landscape of AI procurement evolves, the implications for the tech sector are significant. Demand for AI solutions continues to rise, yet companies must navigate an increasingly complex web of regulatory requirements while maintaining their competitive edge. Striking that balance will be crucial as the government solidifies its approach to AI, reinforcing the importance of transparency and accountability in the field.

First published on March 9, 2026, 15:04:11 IST.

Written by AiPressa Staff

The AiPressa Staff team brings you comprehensive coverage of the artificial intelligence industry, including breaking news, research developments, business trends, and policy updates. Our mission is to keep you informed about the rapidly evolving world of AI technology.


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.