
Anthropic Rejects $200M Pentagon Contract Changes, Citing AI Safety Concerns

Anthropic rejects the Pentagon’s proposed changes to a $200M AI contract, prioritizing safeguards against misuse for surveillance and autonomous weapons.


Anthropic has declined the Pentagon’s latest attempt to modify a significant artificial intelligence contract, citing concerns that the proposed changes would undermine safeguards against the misuse of its technology for mass surveillance and fully autonomous weapons. The decision comes amid increasing scrutiny of the company for potentially relaxing its own AI safety protocols.

In a recent communication, Defense Secretary Pete Hegseth conveyed to Anthropic CEO Dario Amodei that the company must permit the use of its AI system, Claude, “for all lawful purposes,” or risk termination of a contract valued at approximately $200 million that governs Claude’s deployment on the U.S. military’s classified networks. Hegseth warned that failing to comply would classify Anthropic as a “supply chain risk,” a designation typically associated with companies linked to foreign adversaries, according to reports.

The Pentagon’s proposed revisions to the contract were framed as a compromise aimed at addressing Anthropic’s concerns. However, the company asserted that the language contained numerous legal ambiguities that could be exploited to bypass existing protections. In a statement, Anthropic emphasized its unwillingness to accept terms that might facilitate extensive civilian surveillance or allow AI systems to operate weapons autonomously without human oversight.

Amodei maintained that the firm would not alter its principles under governmental pressure, stating that the Pentagon’s threats “do not” change Anthropic’s position and that the company cannot “in good conscience” acquiesce to the Defense Department’s demands.

The crux of the disagreement centers on the extent to which military users can utilize Anthropic’s technology. While the Pentagon seeks the flexibility to employ Claude in a wide array of classified operations, provided they adhere to U.S. and international law, Anthropic has delineated strict boundaries regarding applications such as targeting, autonomous weapon control, and large-scale monitoring of U.S. citizens. The company argues that current AI systems lack the reliability necessary for such uses and that existing regulations do not adequately govern AI-driven surveillance.

Insiders reveal that these fundamental disagreements have prevented progress in negotiations for months, and the collapse of discussions increases the likelihood that Anthropic may lose both this contract and future opportunities with the Pentagon. This conflict has also brought to the forefront broader questions regarding how technology firms can establish ethical boundaries while engaging with national security agencies.

The current contract dispute unfolds alongside Anthropic’s own revisions to its internal safety framework, which critics argue could undermine its public commitments to AI safety. The company recently overhauled its two-year-old Responsible Scaling Policy, which mandated a pause on the training of more advanced models should their capabilities surpass the company’s safety management capacity. This explicit pause requirement has now been eliminated.

Anthropic defends this shift by positing that halting development while more aggressive competitors continue could ultimately render the world “less safe” by allowing less responsible actors to dominate the AI landscape. Rather than a rigid framework enforcing a halt, the new guidelines are articulated as a flexible and nonbinding approach that can adapt as the field of AI progresses.

A company spokesperson asserted that these updated rules aim to enhance transparency and accountability, with Anthropic committing to publish regular, detailed reports outlining risk-mitigation strategies and the capabilities of all its systems. They argue that the rapidly evolving nature of AI necessitates frequent adjustments to their safety protocols.

The timing of the policy revision—shortly after Hegseth cautioned Anthropic to relax its safety limits or face potential blacklisting—has drawn criticism from advocacy groups and industry observers alike. Critics contend that the removal of a firm commitment to pause potentially dangerous models underscores the inadequacy of voluntary safeguards, urging governments to codify AI safety standards into law.

While Anthropic asserts that its standoff with the Pentagon illustrates its readiness to forgo lucrative government contracts to uphold core safety tenets regarding weapons and surveillance, the company’s newly flexible internal policy raises questions about the durability of those principles amid intensifying competition for AI supremacy.

Written By: The AiPressa Staff


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.