AI Regulation

Pentagon Bans Anthropic After AI Ethics Dispute Ahead of Iran Strikes

Pentagon bans Anthropic after ethical AI dispute, while OpenAI secures a deal for military use without restrictions, raising concerns over AI governance.

Against a tense backdrop of escalating military action, the U.S. Department of Defense was engaged in last-minute negotiations with the artificial intelligence (AI) firm Anthropic over the use of its technology. As U.S. and Israeli forces prepared to strike Iran, Anthropic sought assurances that its Claude AI systems would not be employed for domestic surveillance or in fully autonomous weaponry. On Friday, U.S. President Donald Trump responded by ordering all federal agencies to stop using Anthropic’s technology, stating he would “never allow a radical left, woke company to dictate how our great military fights and wins wars.”

Hours later, in a swift turnaround, rival AI firm OpenAI, the maker of ChatGPT, announced it had finalized a deal with the Department of Defense. The key differentiator in its agreement was OpenAI’s allowance of “all lawful uses” of its technologies, without imposing any ethical boundaries. This raises critical questions about the future of military AI and whether the concept of “ethical AI” in warfare is being abandoned.

These events unfolded amid growing concerns over AI ethics, particularly in military contexts. The Trump administration previously prohibited states from regulating AI, arguing that such oversight threatens innovation. Many AI firms have aligned themselves with the administration, with executives like OpenAI’s Sam Altman making substantial donations to Trump’s inauguration fund, while Anthropic has taken a more cautious approach, emphasizing the potential for AI to undermine democratic values.

Internationally, there had been a nascent consensus regarding the military applications of AI, particularly concerning lethal autonomous weapon systems capable of selecting and engaging targets without human intervention. Just a few years ago, in February 2020, the U.S. Department of Defense announced guiding principles for AI use, emphasizing the need for responsibility, equity, traceability, reliability, and governance. Similar principles were echoed by NATO in 2021 and the United Kingdom in 2022, signaling to global counterparts—such as Russia, China, Brazil, and India—the norms the U.S. and its allies believed should govern military AI.

The reliance on private sector innovation for military AI continues to grow, with companies like Anthropic and OpenAI at the forefront of this technological evolution. Projects like Project Maven, initiated in 2017, spotlight the military’s dependence on commercial tech firms to enhance machine learning and data integration in military intelligence. The U.S. Defense Innovation Board has noted that critical data and expertise in AI are predominantly held by private enterprises, a reality that persists.

The political landscape surrounding Silicon Valley shifted dramatically after Trump’s re-election in 2024, with many in the tech community expressing relief at the prospect of reduced regulation. Influential figures like billionaire venture capitalist Marc Andreessen voiced optimism about the new administration’s impact on innovation, while OpenAI’s president, Greg Brockman, contributed $25 million to a pro-Trump organization. This evolution starkly contrasts with the ethical discussions of 2019 and 2020.

The concept of ethical AI is often framed around democratic principles, suggesting that transparency and clear decision-making processes are vital for ethical military applications. Yet when AI is deployed by autocratic regimes, the relevance of transparency diminishes, as the public has no formal say in governmental actions. The discourse surrounding ethical AI assumes a well-informed citizenry, which is essential for a democracy to function effectively. Healthy democratic deliberation often values constructive disagreement and conflict as signs of a robust society.

Anthropic’s insistence on deliberating ethical boundaries with the government exemplifies democratic engagement. However, the Trump administration labeled the company a “supply chain risk,” a designation typically reserved for foreign entities. Secretary of Defense Pete Hegseth announced that effective immediately, no U.S. military contractors could engage in business with Anthropic. The company plans to contest this designation in court, citing the potential for severe economic and reputational repercussions.

In contrast, OpenAI appears committed to operating without ethical limitations, adhering only to legal constraints. This aligns the company with government interests, though it may face reputational backlash from consumers increasingly concerned about ethical considerations in AI technology.

The implications for ethical military AI are profound. The prevailing conclusion is that for military AI to be used ethically—following transparent rules and laws—robust democratic norms must be upheld. However, as the established international order faces challenges, these norms are increasingly at risk. The rapid developments illustrate a shift in military strategy, with reports indicating that U.S. strikes on Iran were coordinated in part using Anthropic software mere hours after Trump’s condemnation, underscoring the complex interplay between technology, ethics, and geopolitics.

Written By AiPressa Staff


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.