
US Implements Strict AI Contract Guidelines, Limiting Anthropic’s Military Role

Trump administration enforces strict AI contract rules, barring Anthropic from military projects and mandating irrevocable licenses for lawful use of models.

March 6 (Reuters) – The Trump administration has introduced stringent regulations for civilian artificial intelligence contracts, compelling companies to permit “any lawful” usage of their models. This move comes amid a standoff between the Pentagon and the AI firm Anthropic, as reported by the Financial Times on Friday.

On Thursday, the Pentagon classified Anthropic as a “supply-chain risk,” effectively prohibiting government contractors from utilizing the company’s technology for U.S. military projects. This decision followed a prolonged dispute over the firm’s insistence on implementing safeguards that the Defense Department deemed excessive.

A draft of the new guidelines reviewed by the FT stipulates that AI companies seeking contracts with the government must grant the U.S. an irrevocable license to use their systems for all lawful purposes. This guidance from the General Services Administration (GSA) applies to civilian contracts and is part of a broader initiative aimed at strengthening AI services procurement across the government. The report suggests that similar measures are under consideration for military contracts.

“It would be irresponsible to the American people and dangerous to our nation for GSA to maintain a business relationship with Anthropic,” said Josh Gruenbaum, commissioner of the Federal Acquisition Service, a GSA subsidiary focused on federal software procurement, in a statement to Reuters via email. He further noted that as directed by the President, the GSA has terminated Anthropic’s OneGov deal, thus ending its availability to the Executive, Legislative, and Judicial branches through pre-negotiated contracts.

The White House has yet to respond to requests for comment regarding the matter. The GSA draft also mandates that contractors “must not intentionally encode partisan or ideological judgments into the AI systems data outputs,” a guideline aimed at ensuring neutrality in AI applications.

Companies are also required to disclose whether their models have been modified or configured to comply with any non-U.S.-government or commercial compliance or regulatory framework, a provision that reflects growing concern about how AI systems are aligned with outside regulatory standards.

The escalating friction between the Pentagon and Anthropic highlights the broader challenge the U.S. government faces as it integrates advanced AI into its operations: balancing innovation against security in both military and civilian applications.

The new guidelines may serve as a template for future dealings between technology companies and federal agencies, and AI vendors are likely to adjust their licensing terms and safeguard policies in response as the technology is deployed across the public sector.

Written By: AiPressa Staff


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.