
Anthropic Sues U.S. Government Over Defense Supply Chain Ban Amid AI Safety Dispute

Anthropic sues the U.S. government, claiming retaliation over its AI model Claude, after being labeled a national security risk for refusing military demands.

Anthropic filed a lawsuit against the U.S. government on March 6, escalating tensions surrounding its AI model, Claude. The Amazon-backed company claims the government retaliated after it refused to remove safety limits on Claude, which has become integral to military operations. Anthropic also filed a separate challenge in the U.S. Court of Appeals for the D.C. Circuit, contesting a distinct legal authority invoked by the government.

According to Anthropic's allegations, the company has spent years developing Claude into a leading frontier AI model for government use, including Claude Gov, a specialized version for military applications. The conflict reportedly began in the fall of 2025, during negotiations over the Pentagon's GenAI.mil platform, when the Department of Defense demanded that Anthropic permit Claude to be used for "all lawful uses," abandoning its existing usage policy.

While Anthropic expressed a willingness to work with the military, it resisted two core demands: the use of Claude for lethal autonomous warfare without human oversight and for mass surveillance of American citizens. The company argued that Claude has not been tested for these uses and cannot perform them safely. Anthropic also offered to assist in transitioning the work to another provider if an agreement could not be reached.

The Pentagon’s narrative diverges sharply from Anthropic’s. The department’s chief technology officer indicated that tensions escalated following a U.S. raid in Venezuela, during which an Anthropic executive allegedly inquired whether Claude had been utilized in the operation. This account is not included in Anthropic’s lawsuit.

The situation intensified when Secretary of Defense Pete Hegseth met with Anthropic CEO Dario Amodei on February 24, presenting an ultimatum: comply with the Pentagon’s demands within four days or face repercussions such as compulsion under the Defense Production Act or expulsion from the defense supply chain as a “national security risk.” Amodei publicly rejected the demand on February 26.

Within hours, President Donald Trump posted a directive on Truth Social ordering all federal agencies to immediately cease using Anthropic's technology, branding the company a "RADICAL LEFT, WOKE COMPANY." Hegseth then publicly classified Anthropic as a "Supply-Chain Risk to National Security," prompting swift action across the government. The General Services Administration terminated Anthropic's government-wide contract, while the Treasury, State Department, and Federal Housing Finance Agency also severed ties. Anthropic's lawsuit alleges that the Pentagon launched a major airstrike on Iran using its tools shortly after the ban was imposed.

In defense of its actions, the White House stated that it would not allow a company to compromise national security by dictating military operations. A spokesperson emphasized that U.S. forces would follow the Constitution and not the terms set by what they labeled a “woke AI company.”

Anthropic counters that the supply chain designation lacks factual basis, citing its FedRAMP authorization, active security clearances, and positive feedback from the government. At the meeting on February 24, Hegseth himself praised Claude’s capabilities, describing them as “exquisite.” Subsequent statements from two senior Pentagon officials indicated that there was “no evidence of supply-chain risk,” suggesting the designation was ideologically motivated.

The lawsuit raises five legal claims, arguing that the government’s actions violated the Administrative Procedure Act, the First Amendment, the Fifth Amendment, the president’s statutory authority, and prohibitions against unauthorized agency sanctions. As the case unfolds, the implications for both Anthropic and the broader landscape of AI regulation and military technology could prove significant, potentially reshaping the boundaries of collaboration between tech companies and government entities.

Written By AiPressa Staff


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.