Anthropic has rejected the Pentagon’s latest proposed modifications to a significant artificial intelligence contract, citing concerns that the changes would undermine safeguards against the misuse of its technology for mass surveillance and fully autonomous weapons. The decision comes amid increasing scrutiny of the company for potentially relaxing its own AI safety protocols.
In a recent communication, Defense Secretary Pete Hegseth conveyed to Anthropic CEO Dario Amodei that the company must permit the use of its AI system, Claude, “for all lawful purposes,” or risk termination of a contract valued at approximately $200 million that governs Claude’s deployment on the U.S. military’s classified networks. Hegseth warned that failing to comply would classify Anthropic as a “supply chain risk,” a designation typically associated with companies linked to foreign adversaries, according to reports.
The Pentagon’s proposed revisions to the contract were framed as a compromise aimed at addressing Anthropic’s apprehensions. However, the company asserted that the language contained numerous legal ambiguities that could potentially be exploited to bypass existing protections. In a statement, Anthropic emphasized its unwillingness to accept terms that might facilitate extensive civilian surveillance or allow AI systems to operate weapons autonomously without human oversight.
Amodei maintained that the firm would not alter its principles under governmental pressure, stating that the Pentagon’s threats “do not” change Anthropic’s position. He reiterated that the company cannot “in good conscience” acquiesce to the Defense Department’s demands.
The disagreement centers on the extent to which military users can utilize Anthropic’s technology. While the Pentagon seeks the flexibility to employ Claude in a wide array of classified operations, provided they adhere to U.S. and international law, Anthropic has delineated strict boundaries regarding applications such as targeting, autonomous weapon control, and large-scale monitoring of U.S. citizens. The company argues that current AI systems lack the reliability necessary for such uses and that existing regulations do not adequately govern AI-driven surveillance.
Insiders reveal that these fundamental disagreements have stalled negotiations for months, and the collapse of discussions increases the likelihood that Anthropic will lose both this contract and future opportunities with the Pentagon. The conflict has also brought to the forefront broader questions about how technology firms can establish ethical boundaries while engaging with national security agencies.
The current contract dispute unfolds alongside Anthropic’s own revisions to its internal safety framework, which critics argue could undermine its public commitments to AI safety. The company recently overhauled its two-year-old Responsible Scaling Policy, which mandated a pause on the training of more advanced models should their capabilities surpass the company’s safety management capacity. This explicit pause requirement has now been eliminated.
Anthropic defends this shift by positing that halting development while more aggressive competitors continue could ultimately render the world “less safe” by allowing less responsible actors to dominate the AI landscape. Rather than a rigid framework enforcing a halt, the new guidelines are articulated as a flexible and nonbinding approach that can adapt as the field of AI progresses.
A company spokesperson asserted that these updated rules aim to enhance transparency and accountability, with Anthropic committing to publish regular, detailed reports outlining risk-mitigation strategies and the capabilities of all its systems. They argue that the rapidly evolving nature of AI necessitates frequent adjustments to their safety protocols.
The timing of the policy revision—shortly after Hegseth cautioned Anthropic to relax its safety limits or face potential blacklisting—has drawn criticism from advocacy groups and industry observers alike. Critics contend that the removal of a firm commitment to pause potentially dangerous models underscores the inadequacy of voluntary safeguards, urging governments to codify AI safety standards into law.
While Anthropic asserts that its standoff with the Pentagon illustrates its readiness to forgo lucrative government contracts to uphold core safety tenets regarding weapons and surveillance, the company’s newly flexible internal policy raises questions about the durability of those principles amid intensifying competition for AI supremacy.