Defense Secretary Pete Hegseth announced on Friday that the Pentagon has officially designated Anthropic a supply chain risk, stating that “effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.” The Pentagon followed with a statement confirming the designation, which it said had been communicated to Anthropic leadership.
In response, Anthropic CEO Dario Amodei said the company plans to challenge the decision in court, arguing that it lacks a legal basis. According to Amodei, supply chain risk designations under 10 U.S.C. § 3252 apply specifically to Department of Defense contracts and do not extend to contractors’ commercial use of the company’s Claude software for non-defense customers.
The conflict grew out of protracted negotiations in which Anthropic sought contractual carve-outs prohibiting the use of its technology for domestic surveillance and autonomous weapons. The Pentagon, however, insisted on unrestricted access to the technology for “all lawful uses.”
“The military will not allow a vendor to insert itself into the chain of command by restricting the lawful use of a critical capability,” stated a Pentagon official.
Hours after Anthropic’s designation, rival OpenAI announced it had secured a contract with the Department of Defense to deploy its AI models within classified environments. OpenAI CEO Sam Altman said the agreement includes stipulations against domestic mass surveillance and requires human accountability for any use of force. Altman later described the initial agreement as “opportunistic and sloppy,” suggesting its terms had to be amended to include those safeguards.
Major defense contractors are already reacting to the new guidelines. Lockheed Martin said it would “follow the President’s and the Department of War’s direction,” signaling a shift toward other large language model providers. Microsoft, meanwhile, said its legal assessments confirmed it could continue collaborating with Anthropic on projects unrelated to defense.
The Pentagon’s move has stirred considerable unease in Silicon Valley. Former White House AI policy adviser Dean Ball termed the designation “the most shocking, damaging, and overreaching thing I have ever seen the United States government do.” OpenAI researcher Boaz Barak echoed this sentiment, stating that “kneecapping one of our leading AI companies is right about the worst own goal we can do.”
Despite losing its defense partnerships, Anthropic is seeing a surge in consumer interest: the Claude app has been downloaded more than one million times a day this week, according to company data. The spike has made Claude the top AI app in Apple’s App Store in more than 20 countries, a rise attributed in part to the company’s public stance against unrestricted military use of its technology.
Critics of the Pentagon’s designation, including Senator Kirsten Gillibrand, have labeled the action “shortsighted, self-destructive, and a gift to our adversaries,” arguing that the supply chain authority was created to counter foreign threats, not to penalize American firms over ethical disagreements. A group of former defense officials, including former CIA director Michael Hayden, called the designation “a profound departure from its intended purpose” and warned of the dangerous precedent it sets.
Notably, President Donald Trump had previously directed federal agencies to halt their use of Anthropic technology after public criticism from members of his administration. Some officials within the administration reportedly viewed Amodei unfavorably, perceiving that he had neither donated to Trump nor vocally supported him, as many other tech leaders had.
As the dispute unfolds, its outcome could reshape AI development, particularly the ethical limits companies place on military use of their models. The episode highlights the growing tension between AI developers’ usage policies and the government’s demand for unrestricted access, and it raises questions about the future of AI partnerships with federal agencies.