
Pentagon Considers Supply Chain Risk Designation for Anthropic Amid Tensions

The Pentagon is reportedly preparing to designate Anthropic a “supply chain risk,” a move that would force military contractors to certify they do not use Claude, the AI model Anthropic says is deployed at eight of the ten largest U.S. companies.

Defense Secretary Pete Hegseth is reportedly furious with AI company **Anthropic**, as the **Pentagon** nears a decision to sever business ties and designate the firm a “supply chain risk.” According to a report from Axios citing a senior Pentagon official, the designation would require any company wishing to contract with the U.S. military to cut its ties with Anthropic. The implications are significant: the label is typically reserved for foreign adversaries and would compel numerous companies to certify that they do not use Anthropic’s AI model, **Claude**, in their operations. With Anthropic claiming that eight of the ten largest U.S. companies use Claude, the ramifications could be extensive.

The collapse of negotiations comes after months of contentious discussions over the terms for military use of Claude. Senior defense officials have expressed growing frustration with the company, particularly following CEO **Dario Amodei**’s lengthy post detailing his concerns about the potential risks of AI technology. Sources familiar with the situation indicate that Pentagon officials seized the opportunity to make their discontent public. Notably, Claude is currently the only AI model operating within the U.S. military’s classified systems and has been recognized for its capabilities across a range of business applications.

In response to the ongoing tensions, Pentagon spokesman **Sean Parnell** stated in an email to Axios that the Department of War’s relationship with Anthropic is under review. “Our nation requires that our partners be willing to help our warfighters win in any fight. Ultimately, this is about our troops and the safety of the American people,” he added. A senior Pentagon official described the situation as one that would create significant challenges, stating, “It will be an enormous pain in the ass to disentangle, and we are going to make sure they pay a price for forcing our hand like this.”

On the other hand, an Anthropic spokesperson claimed that the company remains in discussions with the Pentagon. “We are having productive conversations, in good faith, with DoW on how to continue that work and get these new and complex issues right,” the spokesperson said. They reiterated the firm’s commitment to leveraging **frontier AI** for national security, emphasizing that Claude was the first AI model to be integrated into classified networks.

Another Anthropic official pointed to the existing legal limits on domestic mass surveillance, noting that although the law has not yet caught up with the capabilities of current AI technologies, the Department of War is legally allowed to collect publicly available information, including data from social media and online forums. The official explained that this practice had traditionally been constrained by human analysis capacity, but that AI expands those capabilities significantly.

Last year, Anthropic secured a two-year agreement with the Pentagon that covered a prototype of Anthropic’s Claude Gov models as well as Claude for Enterprise. Analysts suggest that the dynamics of these negotiations may play a pivotal role in shaping future discussions between the Pentagon and other AI firms, including **OpenAI**, **Google**, and **xAI**, which have not yet entered classified work. The Pentagon is reportedly in talks with these firms about extending their services into classified domains, on the condition that their models be available for “all lawful purposes” across both classified and unclassified activities.

The ongoing friction between the Pentagon and Anthropic underscores the strategic importance of AI technologies within national defense. As the U.S. military grapples with the complexities of integrating advanced AI systems like Claude, the outcome of these discussions may have lasting implications for how defense contractors engage with emerging technologies. The episode also highlights the need for regulatory frameworks that can keep pace with rapid advances in AI.
