Anthropic filed a lawsuit against the U.S. government on March 6, escalating tensions surrounding its AI model, Claude. The Amazon-backed company claims the government retaliated after it refused to remove safety limits on Claude, a model that has become integral to military operations. Anthropic has also filed a challenge to a separate legal authority invoked by the government in the U.S. Court of Appeals for the D.C. Circuit.
According to allegations made by Anthropic, the company has dedicated years to developing Claude into a leading frontier AI model for the government, including a specialized version for military applications called Claude Gov. The conflict reportedly began during negotiations over the Pentagon’s GenAI.mil platform in the fall of 2025, when the Department of Defense demanded that Anthropic allow Claude to be used for “all lawful uses,” abandoning its existing usage policy.
While Anthropic expressed a willingness to work with the military, it resisted two core demands: the use of Claude for lethal autonomous warfare without human oversight and for mass surveillance of American citizens. The company argued that Claude has not been tested for these uses and cannot perform them safely. Anthropic also offered to assist in transitioning the work to another provider if an agreement could not be reached.
The Pentagon’s narrative diverges sharply from Anthropic’s. The department’s chief technology officer indicated that tensions escalated following a U.S. raid in Venezuela, during which an Anthropic executive allegedly inquired whether Claude had been utilized in the operation. This account is not included in Anthropic’s lawsuit.
The situation intensified when Secretary of Defense Pete Hegseth met with Anthropic CEO Dario Amodei on February 24, presenting an ultimatum: comply with the Pentagon’s demands within four days or face repercussions such as compulsion under the Defense Production Act or expulsion from the defense supply chain as a “national security risk.” Amodei publicly rejected the demand on February 26.
Within hours, President Donald Trump posted a directive on Truth Social ordering all federal agencies to cease using Anthropic’s technology immediately, branding the company a “RADICAL LEFT, WOKE COMPANY.” Hegseth then publicly classified Anthropic as a “Supply-Chain Risk to National Security,” prompting swift action across the government. The General Services Administration terminated Anthropic’s government-wide contract, while the Treasury, State Department, and Federal Housing Finance Agency also severed ties. Anthropic’s lawsuit alleges that the Pentagon launched a major airstrike on Iran using its tools shortly after the ban was imposed.
In defense of its actions, the White House stated that it would not allow a company to compromise national security by dictating military operations. A spokesperson emphasized that U.S. forces would follow the Constitution and not the terms set by what they labeled a “woke AI company.”
Anthropic counters that the supply chain designation lacks factual basis, citing its FedRAMP authorization, active security clearances, and positive feedback from the government. At the meeting on February 24, Hegseth himself praised Claude’s capabilities, describing them as “exquisite.” Subsequent statements from two senior Pentagon officials indicated that there was “no evidence of supply-chain risk,” suggesting the designation was ideologically motivated.
The lawsuit raises five legal claims, arguing that the government’s actions violated the Administrative Procedure Act, the First Amendment, and the Fifth Amendment, exceeded the president’s statutory authority, and ran afoul of prohibitions against unauthorized agency sanctions. As the case unfolds, the implications for both Anthropic and the broader landscape of AI regulation and military technology could prove significant, potentially reshaping the boundaries of collaboration between tech companies and government entities.



















































