
Anthropic Sues Pentagon Over AI Blacklist, Claims Unlawful Designation Threatens Business

Anthropic sues the Pentagon over a national security designation it says could cost the company billions of dollars by 2026, challenging the ruling's implications for AI governance.

Anthropic has sued the United States Department of Defense to challenge a recent government designation that labels the company a national security supply-chain risk. The legal action, filed in a federal court in California, intensifies an ongoing dispute over the use of artificial intelligence in military operations.

The lawsuit contends that the government’s classification is unlawful and infringes upon the company’s constitutional rights, including freedom of speech and due process. Anthropic is seeking to have the designation overturned and to prevent federal agencies from enforcing restrictions related to it. In its filing, the company stated, “These actions are unprecedented and unlawful. The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech.”

Tensions escalated after the Pentagon formally designated Anthropic a supply-chain risk when the company declined to remove certain safeguards embedded in its AI systems. According to officials, the decision was authorized by US Defense Secretary Pete Hegseth, following Anthropic's refusal to lift restrictions on the use of its AI models in fully autonomous weapons or for domestic surveillance of Americans.

The conflict arose after months of negotiations over the deployment of Anthropic's AI tools in defense projects. Soon after the designation, US President Donald Trump posted on social media urging government agencies to stop using Anthropic's AI model, Claude. There are also indications that the White House may be weighing an executive order to remove the company's technology from federal systems.

The legal challenge underscores a broader struggle over the governance of artificial intelligence technologies, as questions emerge over whether decisions on their use should lie with government authorities or the companies that develop them. Anthropic, which has previously collaborated closely with US national security agencies, does not inherently oppose military applications of AI. However, CEO Dario Amodei argues that current AI models lack the reliability needed for fully autonomous weapons systems and should not be employed for domestic surveillance.

The Pentagon has asserted that national security decisions must adhere to US law rather than corporate policies, insisting on retaining the flexibility to deploy AI for “any lawful use.” Anthropic executives have warned that the blacklist could have severe repercussions for their business, particularly concerning government and enterprise contracts.

The company projected that the designation could potentially reduce its revenue by several billion dollars by 2026 and harm its reputation among corporate clients. In court filings, Anthropic noted that one of its partners, previously engaged in a multi-million-dollar contract, has already shifted from using Claude to another generative AI model, resulting in the loss of an expected pipeline exceeding $100 million. Additionally, negotiations with financial institutions, valued at around $180 million, have reportedly stalled.

The controversy surrounding the Pentagon’s designation has garnered attention from across the technology sector. A coalition of 37 AI researchers and engineers from OpenAI and Google submitted a legal brief in support of Anthropic, articulating concerns that government actions could stifle open discourse on the risks and benefits associated with artificial intelligence. Jeff Dean, one of the signatories, warned that restrictions on debate could ultimately hinder innovation in the field.

The Pentagon’s designation and Anthropic’s legal challenge could set a significant precedent for the AI industry, especially as companies increasingly collaborate with governments on defense and security technologies. Over recent years, the Defense Department has entered into agreements worth as much as $200 million each with several AI firms, including Anthropic, OpenAI, and Google.

While discussions between Anthropic and the government are currently stalled, the company has indicated that the lawsuit does not preclude future negotiations aimed at resolving this dispute. The outcome of this case may impact how AI developers approach limitations on military applications of their technology and the extent of government influence over private AI systems employed in national security.

First published on March 10, 2026, 15:01:08 IST.

Written by The AiPressa Staff.


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.