In a significant shift, the Pentagon abruptly terminated its contract with **Anthropic** on Friday, citing the company’s refusal to loosen safety protocols regarding the deployment of its artificial intelligence in surveillance or fully autonomous weapons systems. Hours later, the Department of Defense (DoD) signed a deal with **OpenAI**, just as U.S. military strikes commenced in Tehran. This rapid sequence of events has sparked considerable public backlash, with many accusing OpenAI of capitulating to the demands of the Trump administration.
Following the signing, OpenAI's app rose to prominence on the App Store even as numerous users called for a boycott of the company. In response to the criticism, OpenAI asserted that its technologies would not be employed for mass domestic surveillance or direct control of autonomous weaponry. Specific details regarding the contract and the measures to enforce these limitations were not disclosed, although OpenAI executives provided some insights during a forum hosted on X over the weekend.
Katrina Mulligan, OpenAI’s head of national security partnerships, explained that the contract permits the Pentagon to utilize its technology for “all lawful purposes, consistent with applicable law, operational requirements, and well-established safety and oversight protocols.” Mulligan clarified that “applicable law” refers to the legal framework in place at the time of the contract’s signing, emphasizing that the agreement applies exclusively to defense operations and will not extend to domestic law enforcement.
In a candid acknowledgment, OpenAI CEO Sam Altman admitted that the deal was “rushed” and expressed concerns about the negative optics surrounding it. “I have accepted that the U.S. military is going to do some amount of surveillance on foreigners, and I know foreign governments try to do it to us, but I still don’t like it,” Altman stated on X. He further articulated his belief that the decision ultimately lies within the democratic process, asserting that it is not solely up to him to dictate the terms of such contracts.
Despite the reassurances from OpenAI leadership, skepticism remains about whether the government will honor legal constraints. Mulligan noted that U.S. law already mitigates some of the worst potential outcomes, and Altman remarked on the government’s commitment to following law and policy. However, past mass surveillance scandals suggest the government has at times worked around legal confines when it deemed doing so necessary. The controversy surrounding military actions, such as the disputed boat strikes in the Caribbean last year, further illustrates how such operations can strain international norms.
OpenAI executives contend that their deal differs from the one offered to Anthropic, with Altman suggesting that Anthropic sought greater operational control than OpenAI was willing to accept. He expressed apprehension about the ethical implications of allowing a private company to define ethical boundaries in critical areas. Instead, OpenAI plans to deploy engineers to monitor the Pentagon’s use of its technology, with Mulligan asserting that technical controls often provide more reliability than contractual clauses.
However, an anonymous source revealed to **The Verge** that the effectiveness of these safeguards might be limited. Additionally, **Sarah Shoker**, a former geopolitics researcher at OpenAI, indicated that there is a lack of consensus in the defense industry regarding what constitutes adequate human oversight in autonomous weapons, a point of contention that may have distinguished Anthropic’s position from that of OpenAI.
While OpenAI executives actively defended their decision on social media, dissent is evident within the organization. Before the announcement about the Pentagon deal, 96 employees signed an open letter urging company leadership to reject the military’s demands for the use of their models in domestic mass surveillance and the development of weapons without human oversight. Many OpenAI staff members, including senior leaders, have also called for the Pentagon to rescind its supply chain risk designation for Anthropic.
Expressing personal discontent, OpenAI research scientist **Aidan McLaughlin** stated on X that he believed the deal was not worth it and mentioned an “overwhelming” internal discussion regarding the controversial decision. As the situation evolves, the implications of this contract between OpenAI and the Pentagon raise critical questions about the ethical boundaries of AI technology in military applications and the responsibility of tech companies in navigating these complex issues.