
US Military Uses Anthropic’s Claude in Iran Strikes Despite Trump’s Ban

US military defies Trump’s ban, deploying Anthropic’s Claude AI for intelligence in Iran strikes while planning a shift to OpenAI’s tools amid rising tensions.

The US military used Claude, an AI model developed by Anthropic, during its recent operations against Iran, despite President Donald Trump having ordered federal agencies to stop using the tool just hours earlier. The reported use of Claude in the ongoing joint bombardment with Israel began on Saturday, highlighting how difficult it is to withdraw advanced AI technologies that are deeply integrated into military operations.

According to reports from the Wall Street Journal and Axios, US military command deployed Claude for intelligence-gathering, target selection, and battlefield simulations. The situation underscores the challenges faced by the Pentagon in disentangling military systems from AI tools already embedded in their operations.

On the day of the assault, Trump publicly criticized Anthropic on his social media platform, Truth Social, labeling the company a “Radical Left AI company run by people who have no idea what the real World is all about.” Trump’s decision to sever ties with the company followed a contentious episode in January, when the military used Claude in an operation to capture Venezuelan President Nicolás Maduro. Anthropic had objected, citing its terms of use that prohibit the application of its AI for violent purposes, weapon development, or surveillance.

The fallout from that incident has led to a deterioration of relations among Trump, the Pentagon, and Anthropic. In a lengthy post on social media platform X, Secretary of Defense Pete Hegseth accused the company of “arrogance and betrayal,” asserting that “America’s warfighters will never be held hostage by the ideological whims of Big Tech.” He called for full and unrestricted access to all of Anthropic’s AI models for lawful military purposes.

Despite the rift, Hegseth acknowledged the practical difficulties in quickly discontinuing the use of Claude, given its widespread application within the military. He indicated that Anthropic would continue to provide its services for a transitional period of no more than six months to facilitate a shift to a different AI provider.

In the wake of the separation from Anthropic, OpenAI, a competing AI company, has stepped in to fill the void. CEO Sam Altman announced that he had reached an agreement with the Pentagon for the use of OpenAI’s tools, including ChatGPT, in its classified networks. This development marks a significant shift in the military’s AI strategy and raises questions about the future role of AI in defense operations.

The tension surrounding the military's use of AI reflects broader concerns about deploying advanced technologies in conflict. As military operations increasingly lean on AI for strategic advantage, the ethics and governance of such tools will remain at the forefront of discussion in both the tech and defense sectors. The unfolding situation illustrates the complexity of integrating AI into military contexts and sets the stage for shifts in regulatory and operational frameworks as the US navigates its relationship with emerging technologies.

Written By: AiPressa Staff

The AiPressa Staff team brings you comprehensive coverage of the artificial intelligence industry, including breaking news, research developments, business trends, and policy updates. Our mission is to keep you informed about the rapidly evolving world of AI technology.


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.