The US military used Claude, an AI model developed by Anthropic, during its recent operations against Iran, even though President Donald Trump had ordered federal agencies to stop using the tool just hours earlier. Claude's reported use in the ongoing joint bombardment with Israel, which began on Saturday, highlights how difficult it is to withdraw advanced AI technologies once they are deeply integrated into military operations.
According to reports from the Wall Street Journal and Axios, US military command deployed Claude for intelligence-gathering, target selection, and battlefield simulations. The situation underscores the challenges faced by the Pentagon in disentangling military systems from AI tools already embedded in their operations.
On the day of the assault, Trump publicly criticized Anthropic on his social media platform, Truth Social, labeling the company a “Radical Left AI company run by people who have no idea what the real World is all about.” Trump’s decision to sever ties with the company followed a contentious episode in January, when the military used Claude in an operation to capture Venezuelan President Nicolás Maduro. Anthropic had objected, citing its terms of use that prohibit the application of its AI for violent purposes, weapon development, or surveillance.
The fallout from that incident has led to a deterioration of relations among Trump, the Pentagon, and Anthropic. In a lengthy post on social media platform X, Secretary of Defense Pete Hegseth accused the company of “arrogance and betrayal,” asserting that “America’s warfighters will never be held hostage by the ideological whims of Big Tech.” He called for full and unrestricted access to all of Anthropic’s AI models for lawful military purposes.
Despite the rift, Hegseth acknowledged the practical difficulties in quickly discontinuing the use of Claude, given its widespread application within the military. He indicated that Anthropic would continue to provide its services for a transitional period of no more than six months to facilitate a shift to a different AI provider.
In the wake of the separation from Anthropic, OpenAI, a competing AI company, has stepped in to fill the void. CEO Sam Altman announced that he had reached an agreement with the Pentagon for the use of OpenAI’s tools, including ChatGPT, in its classified networks. This development marks a significant shift in the military’s AI strategy and raises questions about the future role of AI in defense operations.
The tension surrounding the military's use of AI reflects broader concerns about deploying advanced technologies in conflict. As military operations increasingly rely on AI for strategic advantage, the ethics and governance of such tools will remain central to debate in both the tech and defense sectors. The unfolding situation not only illustrates the complexities of integrating AI into military contexts but also sets the stage for potential shifts in the regulatory and operational frameworks shaping the US relationship with emerging technologies.