
Anthropic Sues U.S. Government Over Defense Supply Chain Ban Amid AI Safety Dispute

Anthropic sues the U.S. government, claiming retaliation over its AI model Claude, after being labeled a national security risk for refusing military demands.

Anthropic filed a lawsuit against the U.S. government on March 6, escalating tensions surrounding its AI model, Claude. The Amazon-backed company claims the government retaliated after it refused to remove safety limits on Claude, a model that has become integral to military operations. Anthropic also filed a separate challenge in the U.S. Court of Appeals for the D.C. Circuit contesting a distinct legal authority invoked by the government.

According to allegations made by Anthropic, the company has dedicated years to developing Claude into a leading frontier AI model for the government, including a specialized version for military applications called Claude Gov. The conflict reportedly began during negotiations over the Pentagon’s GenAI.mil platform in the fall of 2025, when the Department of Defense demanded that Anthropic allow Claude to be used for “all lawful uses,” abandoning its existing usage policy.

While Anthropic expressed a willingness to work with the military, it resisted two core demands: the use of Claude for lethal autonomous warfare without human oversight and for mass surveillance of American citizens. The company argued that Claude has not been tested for these uses and cannot perform them safely. Anthropic also offered to assist in transitioning the work to another provider if an agreement could not be reached.

The Pentagon’s narrative diverges sharply from Anthropic’s. The department’s chief technology officer indicated that tensions escalated following a U.S. raid in Venezuela, during which an Anthropic executive allegedly inquired whether Claude had been utilized in the operation. This account is not included in Anthropic’s lawsuit.

The situation intensified when Secretary of Defense Pete Hegseth met with Anthropic CEO Dario Amodei on February 24, presenting an ultimatum: comply with the Pentagon’s demands within four days or face repercussions such as compulsion under the Defense Production Act or expulsion from the defense supply chain as a “national security risk.” Amodei publicly rejected the demand on February 26.

Within hours, President Donald Trump posted a directive on Truth Social ordering all federal agencies to stop using Anthropic's technology immediately, branding the company a "RADICAL LEFT, WOKE COMPANY." Hegseth then publicly classified Anthropic as a "Supply-Chain Risk to National Security," prompting swift action across the federal government. The General Services Administration terminated Anthropic's government-wide contract, while the Treasury, State, and Federal Housing Finance Agency also severed ties. Anthropic's lawsuit alleges that the Pentagon carried out a major airstrike on Iran using its tools shortly after the ban was imposed.

In defense of its actions, the White House stated that it would not allow a company to compromise national security by dictating military operations. A spokesperson emphasized that U.S. forces would follow the Constitution and not the terms set by what they labeled a “woke AI company.”

Anthropic counters that the supply chain designation lacks factual basis, citing its FedRAMP authorization, active security clearances, and positive feedback from the government. At the meeting on February 24, Hegseth himself praised Claude’s capabilities, describing them as “exquisite.” Subsequent statements from two senior Pentagon officials indicated that there was “no evidence of supply-chain risk,” suggesting the designation was ideologically motivated.

The lawsuit raises five legal claims, arguing that the government’s actions violated the Administrative Procedure Act, the First Amendment, the Fifth Amendment, the president’s statutory authority, and prohibitions against unauthorized agency sanctions. As the case unfolds, the implications for both Anthropic and the broader landscape of AI regulation and military technology could prove significant, potentially reshaping the boundaries of collaboration between tech companies and government entities.

Written By: AiPressa Staff


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.