
AI Regulation

Pentagon Bans Anthropic Over AI Ethics Rules, Firm Plans Legal Challenge

Pentagon bans Anthropic as a defense contractor over AI ethics rules, prompting CEO Dario Amodei to announce plans for a legal challenge against the designation.

Defense Secretary Pete Hegseth announced on Friday that the Pentagon has officially designated Anthropic as a supply chain risk, stating that “effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.” Following the announcement, the Pentagon issued a statement confirming this designation, which it said had been communicated to Anthropic leadership.

In response, Anthropic CEO Dario Amodei declared that the company plans to challenge the decision in court, arguing that it lacks a legal basis. According to Amodei, the supply chain risk authority under 10 U.S.C. 3252 applies specifically to Department of Defense contracts and does not extend to how contractors use the company's Claude software for non-defense customers.

The conflict arises from protracted negotiations in which Anthropic sought specific exceptions within its contracts to prohibit domestic surveillance and the use of autonomous weapons. The Pentagon, however, insisted on unrestricted access for “all lawful uses” of its technology.

“The military will not allow a vendor to insert itself into the chain of command by restricting the lawful use of a critical capability,” stated a Pentagon official.

Hours after Anthropic’s designation, rival OpenAI announced it had secured a contract with the Department of Defense to deploy its AI models within classified environments. OpenAI CEO Sam Altman indicated that the agreement includes stipulations against domestic mass surveillance and mandates human accountability for the use of force. Altman later described the company’s initial agreement as “opportunistic and sloppy,” suggesting the terms had to be amended to include those safeguards.

Major defense contractors are already reacting to the new guidelines. Lockheed Martin expressed its intent to “follow the President’s and the Department of War’s direction,” indicating a shift towards engaging other large language model providers. Meanwhile, Microsoft stated that legal assessments confirmed they could continue collaborating with Anthropic on projects unrelated to defense.

The Pentagon’s move has stirred considerable unease in Silicon Valley. Former White House AI policy adviser Dean Ball termed the designation “the most shocking, damaging, and overreaching thing I have ever seen the United States government do.” OpenAI researcher Boaz Barak echoed this sentiment, stating that “kneecapping one of our leading AI companies is right about the worst own goal we can do.”

Despite losing access to defense partnerships, Anthropic is experiencing a surge in consumer interest, with more than one million downloads of the Claude application occurring daily this week, according to company data. The spike has made Anthropic the top AI app in Apple's App Store in more than 20 countries, attributed in part to the company’s public stance against unrestricted military applications of its technology.

Critics of the Pentagon’s designation, including Senator Kirsten Gillibrand, have labeled the action “shortsighted, self-destructive, and a gift to our adversaries,” arguing that such tools were initially intended to combat foreign threats rather than penalize American firms over ethical disagreements. A group of former defense officials, including ex-CIA director Michael Hayden, called the designation “a profound departure from its intended purpose” and warned of the dangerous precedent it sets.

Notably, President Donald Trump had previously directed federal agencies to halt their use of Anthropic technology following public criticism from members of his administration. Some within the administration reportedly viewed Amodei unfavorably, perceiving that he had neither contributed financially nor vocally supported Trump as many other tech leaders had.

As the situation unfolds, the implications for Anthropic and its competitors could reshape the landscape of AI development, particularly concerning ethical practices in military applications. The recent events highlight the growing tension between technological advancement and regulatory frameworks, raising questions about the future of AI partnerships with government entities.

Written by AiPressa Staff


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.