
OpenAI Secures Pentagon Deal for AI Deployment, Phasing Out Anthropic Technology

OpenAI finalizes a Pentagon deal to deploy AI models on military networks, amid Trump’s mandate to phase out Anthropic’s technology for national security.

OpenAI CEO Sam Altman announced on Friday that the company has finalized an agreement with the Department of War to deploy its artificial intelligence models on classified military networks. This development comes shortly after President Donald Trump ordered federal agencies to phase out the use of technology from rival firm Anthropic, escalating tensions regarding the utilization of AI in military operations.

In a post on X, Altman described the discussions with the Pentagon as respectful, saying the department “displayed a deep respect for safety and a desire to partner to achieve the best possible outcome.” He emphasized that AI safety and the equitable distribution of benefits are central to OpenAI’s mission. “Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems. The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement,” Altman said.

The agreement with the Pentagon follows Trump’s directive that all federal agencies discontinue using Anthropic’s technology. This order has intensified a standoff over how artificial intelligence should be integrated into military strategy and operations. In a statement on Truth Social, Trump warned that agencies, including the Department of War, would have a six-month period to phase out Anthropic’s services. “Anthropic better get their act together, and be helpful during this phase out period, or I will use the Full Power of the Presidency to make them comply,” he wrote, noting that there could be significant civil and criminal repercussions for non-compliance.

Secretary of War Pete Hegseth announced that the department would designate Anthropic as a “supply-chain risk to National Security.” He stated, “Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.” Hegseth added that Anthropic would still be allowed to provide its services for a limited time to ensure a smooth transition to what he called a “better and more patriotic service.”

In contrast, Anthropic CEO Dario Amodei has resisted earlier demands from the Department of War to allow its AI model, Claude, to be used for “all lawful purposes,” citing concerns about “mass domestic surveillance” and “fully autonomous weapons.” After being designated a supply-chain risk, Anthropic said negotiations had reached an impasse over two exceptions it had requested to the lawful use of its AI technology, maintaining that these exceptions “have not affected a single government mission to date.”

Anthropic characterized the supply-chain risk designation as an “unprecedented action — one historically reserved for US adversaries, never before publicly applied to an American company.” The company added that it has received no direct communication from the Department of War or the White House regarding the status of negotiations, and reiterated its commitment to supporting all lawful national security uses of AI, except for the contested exceptions.

Against this backdrop, Altman underscored OpenAI’s commitment to additional safeguards to ensure its models “behave as they should,” with a focus on operating solely on cloud networks. “We are asking the DoW to offer these same terms to all AI companies, which in our opinion we think everyone should be willing to accept,” he said, adding that he hoped to de-escalate the situation away from legal and governmental actions toward more reasonable agreements.

As the landscape of AI in defense continues to evolve, the agreements and directives emerging from the White House and the Department of War could set critical precedents for the future use of AI technologies in military operations, impacting both national security and the broader AI market.

Written By Marcus Chen

At AIPressa, my work focuses on analyzing how artificial intelligence is redefining business strategies and traditional business models. I've covered everything from AI adoption in Fortune 500 companies to disruptive startups that are changing the rules of the game. My approach: understanding the real impact of AI on profitability, operational efficiency, and competitive advantage, beyond corporate hype. When I'm not writing about digital transformation, I'm probably analyzing financial reports or studying AI implementation cases that truly moved the needle in business.
