
Anthropic Sues Pentagon Over AI Blacklist, Claims Unlawful Designation Threatens Business

Anthropic is suing the Pentagon over a national security designation it says could cost the company billions of dollars by 2026, a case with broad implications for AI governance.

Anthropic has sued the United States Department of Defense to challenge a recent government designation labeling the company a national security supply-chain risk. The suit, filed in a federal court in California, intensifies an ongoing dispute over the use of artificial intelligence in military operations.

The lawsuit contends that the government’s classification is unlawful and infringes upon the company’s constitutional rights, including freedom of speech and due process. Anthropic is seeking to have the designation overturned and to prevent federal agencies from enforcing restrictions related to it. In its filing, the company stated, “These actions are unprecedented and unlawful. The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech.”

Tensions escalated after the Pentagon formally designated Anthropic a supply-chain risk when the company declined to remove certain safeguards embedded in its AI systems. According to officials, the decision was authorized by US Defense Secretary Pete Hegseth, following Anthropic’s refusal to eliminate restrictions on the use of its AI models in fully autonomous weapons or for domestic surveillance of Americans.

The conflict followed months of negotiations over the deployment of Anthropic’s AI tools in defense projects. Soon after the designation, US President Donald Trump posted on social media urging government agencies to stop using Anthropic’s AI model, Claude. There are also indications that the White House may be weighing an executive order to remove the company’s technology from federal systems.

The legal challenge underscores a broader struggle over the governance of artificial intelligence technologies, as questions emerge over whether decisions on their use should lie with government authorities or the companies that develop them. Anthropic, which has previously collaborated closely with US national security agencies, does not inherently oppose military applications of AI. However, CEO Dario Amodei argues that current AI models lack the reliability needed for fully autonomous weapons systems and should not be employed for domestic surveillance.

The Pentagon has asserted that national security decisions must adhere to US law rather than corporate policies, insisting on retaining the flexibility to deploy AI for “any lawful use.” Anthropic executives have warned that the blacklist could have severe repercussions for their business, particularly concerning government and enterprise contracts.

The company projected that the designation could reduce its revenue by several billion dollars by 2026 and harm its reputation among corporate clients. In court filings, Anthropic noted that one of its partners, previously engaged in a multi-million-dollar contract, has already shifted from Claude to another generative AI model, resulting in the loss of an expected pipeline exceeding $100 million. Additionally, negotiations with financial institutions, valued at around $180 million, have reportedly stalled.

The controversy surrounding the Pentagon’s designation has garnered attention from across the technology sector. A coalition of 37 AI researchers and engineers from OpenAI and Google submitted a legal brief in support of Anthropic, articulating concerns that government actions could stifle open discourse on the risks and benefits associated with artificial intelligence. Jeff Dean, one of the signatories, warned that restrictions on debate could ultimately hinder innovation in the field.

The Pentagon’s designation and Anthropic’s legal challenge could set a significant precedent for the AI industry, especially as companies increasingly collaborate with governments on defense and security technologies. Over recent years, the Defense Department has entered into agreements worth as much as $200 million each with several AI firms, including Anthropic, OpenAI, and Google.

While discussions between Anthropic and the government are currently stalled, the company has indicated that the lawsuit does not preclude future negotiations aimed at resolving this dispute. The outcome of this case may impact how AI developers approach limitations on military applications of their technology and the extent of government influence over private AI systems employed in national security.

First published on March 10, 2026, 15:01:08 IST.

Written By

The AiPressa Staff team brings you comprehensive coverage of the artificial intelligence industry, including breaking news, research developments, business trends, and policy updates. Our mission is to keep you informed about the rapidly evolving world of AI technology.

© 2025 AIPressa · Part of Buzzora Media · All rights reserved.