Anthropic Rejects $200M Pentagon Contract Changes, Citing AI Safety Concerns

Anthropic rejects the Pentagon’s proposed changes to a $200M AI contract, prioritizing safeguards against misuse for surveillance and autonomous weapons.
Anthropic has declined the Pentagon’s latest attempt to modify a significant artificial intelligence contract, citing concerns that the proposed changes would undermine safeguards against the misuse of its technology for mass surveillance and fully autonomous weapons. The decision comes amid increasing scrutiny of the company for potentially relaxing its own AI safety protocols.

In a recent communication, Defense Secretary Pete Hegseth conveyed to Anthropic CEO Dario Amodei that the company must permit the use of its AI system, Claude, “for all lawful purposes,” or risk termination of a contract valued at approximately $200 million that governs Claude’s deployment on the U.S. military’s classified networks. Hegseth warned that failing to comply would classify Anthropic as a “supply chain risk,” a designation typically associated with companies linked to foreign adversaries, according to reports.

The Pentagon’s proposed revisions to the contract were framed as a compromise aimed at addressing Anthropic’s concerns. However, the company asserted that the language contained numerous legal ambiguities that could be exploited to bypass existing protections. In a statement, Anthropic emphasized its unwillingness to accept terms that might facilitate extensive civilian surveillance or allow AI systems to operate weapons autonomously without human oversight.

Amodei maintained that the firm would not alter its principles under governmental pressure, saying the Pentagon’s threats “do not” change Anthropic’s position, and reiterated that the company cannot “in good conscience” acquiesce to the Defense Department’s demands.

The disagreement centers on the extent to which military users can utilize Anthropic’s technology. While the Pentagon seeks the flexibility to employ Claude in a wide array of classified operations, provided they adhere to U.S. and international law, Anthropic has delineated strict boundaries regarding applications such as targeting, autonomous weapon control, and large-scale monitoring of U.S. citizens. The company argues that current AI systems lack the reliability necessary for such uses and that existing regulations do not adequately govern AI-driven surveillance.

Insiders reveal that these fundamental disagreements have prevented progress in negotiations for months, and the collapse of discussions increases the likelihood that Anthropic may lose both this contract and future opportunities with the Pentagon. This conflict has also brought to the forefront broader questions regarding how technology firms can establish ethical boundaries while engaging with national security agencies.

The current contract dispute unfolds alongside Anthropic’s own revisions to its internal safety framework, which critics argue could undermine its public commitments to AI safety. The company recently overhauled its two-year-old Responsible Scaling Policy, which mandated a pause on the training of more advanced models should their capabilities surpass the company’s safety management capacity. This explicit pause requirement has now been eliminated.

Anthropic defends this shift by positing that halting development while more aggressive competitors continue could ultimately render the world “less safe” by allowing less responsible actors to dominate the AI landscape. Rather than a rigid framework enforcing a halt, the new guidelines are articulated as a flexible and nonbinding approach that can adapt as the field of AI progresses.

A company spokesperson asserted that these updated rules aim to enhance transparency and accountability, with Anthropic committing to publish regular, detailed reports outlining risk-mitigation strategies and the capabilities of all its systems. They argue that the rapidly evolving nature of AI necessitates frequent adjustments to their safety protocols.

The timing of the policy revision—shortly after Hegseth cautioned Anthropic to relax its safety limits or face potential blacklisting—has drawn criticism from advocacy groups and industry observers alike. Critics contend that the removal of a firm commitment to pause potentially dangerous models underscores the inadequacy of voluntary safeguards, urging governments to codify AI safety standards into law.

While Anthropic asserts that its standoff with the Pentagon illustrates its readiness to forgo lucrative government contracts to uphold core safety tenets regarding weapons and surveillance, the company’s newly flexible internal policy raises questions about the durability of those principles amid intensifying competition for AI supremacy.

Written By: AiPressa Staff

© 2025 AIPressa · Part of Buzzora Media · All rights reserved.