March 6 (Reuters) – The Trump administration has introduced stringent regulations for civilian artificial intelligence contracts, compelling companies to permit “any lawful” usage of their models. This move comes amid a standoff between the Pentagon and the AI firm Anthropic, as reported by the Financial Times on Friday.
On Thursday, the Pentagon classified Anthropic as a “supply-chain risk,” effectively prohibiting government contractors from utilizing the company’s technology for U.S. military projects. This decision followed a prolonged dispute over the firm’s insistence on implementing safeguards that the Defense Department deemed excessive.
A draft of the new guidelines reviewed by the FT stipulates that AI companies seeking contracts with the government must grant the U.S. an irrevocable license to use their systems for all lawful purposes. This guidance from the General Services Administration (GSA) applies to civilian contracts and is part of a broader initiative aimed at strengthening AI services procurement across the government. The report suggests that similar measures are under consideration for military contracts.
“It would be irresponsible to the American people and dangerous to our nation for GSA to maintain a business relationship with Anthropic,” said Josh Gruenbaum, commissioner of the Federal Acquisition Service, a GSA subsidiary focused on federal software procurement, in a statement to Reuters via email. He further noted that as directed by the President, the GSA has terminated Anthropic’s OneGov deal, thus ending its availability to the Executive, Legislative, and Judicial branches through pre-negotiated contracts.
The White House has yet to respond to requests for comment regarding the matter. The GSA draft also mandates that contractors “must not intentionally encode partisan or ideological judgments into the AI systems’ data outputs,” a guideline aimed at ensuring neutrality in AI applications.
Moreover, companies are required to disclose whether their models have been modified or configured to comply with any non-U.S. federal government or commercial compliance or regulatory frameworks. This requirement reflects a growing concern about the potential implications of AI technologies and their alignment with existing regulatory standards.
The escalating friction between the Pentagon and Anthropic highlights the broader challenges facing the U.S. government as it seeks to integrate advanced AI technologies into its operations. The conflict underscores the complexities associated with ensuring both innovation and security in military and civilian applications of AI.
As the landscape of artificial intelligence continues to evolve, the government’s stringent guidelines may serve as a blueprint for future interactions between technology companies and federal agencies. The industry is likely to adapt in response to these regulatory pressures, shaping the way AI technologies are developed and deployed in the public sector.