Anthropic has sued the United States Department of Defense, challenging a recent government designation that labels the company a national security supply-chain risk. The suit, filed in federal court in California, escalates an ongoing dispute over the use of artificial intelligence in military operations.
The lawsuit contends that the government’s classification is unlawful and infringes upon the company’s constitutional rights, including freedom of speech and due process. Anthropic is seeking to have the designation overturned and to prevent federal agencies from enforcing restrictions related to it. In its filing, the company stated, “These actions are unprecedented and unlawful. The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech.”
The tensions escalated after the Pentagon formally designated Anthropic as a supply-chain risk when the company declined to remove certain safeguards embedded in its AI systems. According to officials, this decision was authorized by US Defense Secretary Pete Hegseth, following Anthropic’s refusal to eliminate restrictions on the use of its AI models in fully autonomous weapons or for domestic surveillance of Americans.
The conflict arose after months of negotiations over the deployment of Anthropic’s AI tools in defense projects. Soon after the designation, US President Donald Trump posted on social media urging government agencies to stop using Anthropic’s AI model, Claude. There are also indications that the White House may be considering an executive order to remove the company’s technology from federal systems.
The legal challenge underscores a broader struggle over the governance of artificial intelligence technologies, as questions emerge over whether decisions on their use should lie with government authorities or the companies that develop them. Anthropic, which has previously collaborated closely with US national security agencies, does not inherently oppose military applications of AI. However, CEO Dario Amodei argues that current AI models lack the reliability needed for fully autonomous weapons systems and should not be employed for domestic surveillance.
The Pentagon has asserted that national security decisions must adhere to US law rather than corporate policies, insisting on retaining the flexibility to deploy AI for “any lawful use.” Anthropic executives have warned that the blacklist could have severe repercussions for their business, particularly concerning government and enterprise contracts.
The company projected that the designation could potentially reduce its revenue by several billion dollars by 2026 and harm its reputation among corporate clients. In court filings, Anthropic noted that one of its partners, previously engaged in a multi-million-dollar contract, has already shifted from using Claude to another generative AI model, resulting in the loss of an expected pipeline exceeding $100 million. Additionally, negotiations with financial institutions, valued at around $180 million, have reportedly stalled.
The controversy surrounding the Pentagon’s designation has garnered attention from across the technology sector. A coalition of 37 AI researchers and engineers from OpenAI and Google submitted a legal brief in support of Anthropic, articulating concerns that government actions could stifle open discourse on the risks and benefits associated with artificial intelligence. Jeff Dean, one of the signatories, warned that restrictions on debate could ultimately hinder innovation in the field.
The Pentagon’s designation and Anthropic’s legal challenge could set a significant precedent for the AI industry, especially as companies increasingly collaborate with governments on defense and security technologies. Over recent years, the Defense Department has entered into agreements worth as much as $200 million each with several AI firms, including Anthropic, OpenAI, and Google.
While discussions between Anthropic and the government are currently stalled, the company has indicated that the lawsuit does not preclude future negotiations aimed at resolving this dispute. The outcome of this case may impact how AI developers approach limitations on military applications of their technology and the extent of government influence over private AI systems employed in national security.
First published on March 10, 2026, 15:01:08 IST.