
OpenAI Secures $200M Defense AI Contract with Pentagon Amid Anthropic Controversy

OpenAI secures a $200M contract with the Pentagon to deploy AI systems in defense under strict safeguards, amid rising tensions with Anthropic.

OpenAI has entered a significant agreement with the U.S. Department of War to implement its artificial intelligence systems in classified defense scenarios. This development comes amid rising tensions between the Pentagon and AI startup Anthropic, which has faced scrutiny over its operations in defense contracting.

The deal, announced on February 28, 2026, marks a milestone for OpenAI, which has confirmed the integration of advanced AI systems into military frameworks. The announcement followed a directive from President Donald Trump to restrict collaboration with Anthropic, which the Pentagon has labeled a supply chain risk.

OpenAI’s CEO, Sam Altman, underscored the safety measures embedded within the agreement, stating that the contract outlines three fundamental red lines. These stipulations prohibit the use of OpenAI technology for mass domestic surveillance, for directing autonomous weapons systems, and for making high-stakes automated decisions, particularly in contexts like social credit systems.

According to Altman, the Department of War has shown a profound commitment to safety and a collaborative approach, paving the way for a responsible integration of AI within military operations. This engagement is fortified by a layered safety system, which ensures that the deployment of AI is closely monitored and controlled, mitigating the potential for misuse.

The contract allows for the use of OpenAI’s systems for all lawful purposes but explicitly forbids independent direction of autonomous weapons where human oversight is necessary. OpenAI has confirmed compliance with existing legal frameworks, including the Fourth Amendment and the National Security Act of 1947, ensuring that its technology cannot facilitate unrestricted surveillance of U.S. citizens or support domestic law enforcement activities beyond legal boundaries.

OpenAI has also retained the right to terminate the agreement if the government violates its terms, though the company expressed confidence that such a scenario is unlikely.

This agreement arrives during a tumultuous period for Anthropic, which was co-founded by Dario Amodei, former research head at OpenAI. Anthropic has been in the spotlight for its refusal to permit unrestricted military applications of its AI tools, particularly in autonomous weapons and surveillance contexts. In response to the Pentagon’s designation of Anthropic as a supply chain risk, Trump criticized the company as a “radical left, woke” entity. Anthropic has signaled intentions to contest this label through legal means.

Notably, OpenAI has made it clear that it does not endorse the classification of Anthropic as a risk, advocating instead for de-escalation and collaboration across the industry. The firm has urged the Department of War to extend the same contractual protections to all AI companies to foster an environment of responsible technological development.

Altman reiterated OpenAI’s commitment to democracy and the belief that responsible collaboration is essential as AI becomes increasingly intertwined with national security. This call for industry-wide terms reflects an understanding of the complexities surrounding the deployment of AI technologies in sensitive environments.

The agreement with the Department of War represents a pivotal moment for AI in national defense. OpenAI is navigating a delicate balance of supporting military needs while upholding its ethical standards. By establishing clear boundaries and incorporating robust safeguards, the company aims to demonstrate that advanced AI can enhance national security without compromising ethical considerations.

As the standoff with Anthropic illustrates, the intersection of governmental interests and AI capabilities is becoming more contentious. The ongoing negotiations reveal defense agencies' demand for flexibility even as AI firms push for stringent ethical guardrails. How that tension between security and safety is resolved will significantly influence the trajectory of AI in defense policy, underscoring the technology's newly central role in national security strategy.

Written By AiPressa Staff

The AiPressa Staff team brings you comprehensive coverage of the artificial intelligence industry, including breaking news, research developments, business trends, and policy updates. Our mission is to keep you informed about the rapidly evolving world of AI technology.


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.