
OpenAI Secures Pentagon Deal Amid Employee Backlash Over AI Safety Concerns

OpenAI secures a controversial Pentagon contract for AI despite protests from 96 employees over the ethics and safety of military applications.

In a significant shift, the Pentagon abruptly terminated its contract with **Anthropic** on Friday, citing the company’s refusal to loosen safety protocols regarding the deployment of its artificial intelligence in surveillance or fully autonomous weapons systems. Hours later, the Department of Defense (DoD) signed a deal with **OpenAI**, just as U.S. military strikes commenced in Tehran. This rapid sequence of events has sparked considerable public backlash, with many accusing OpenAI of capitulating to the demands of the Trump administration.

Following the signing, **OpenAI**'s app rose to prominence on the App Store even as numerous users called for a boycott of the company. In response to the criticism, OpenAI asserted that its technologies would not be employed for mass domestic surveillance or direct control of autonomous weaponry. Specific details regarding the contract and the measures to enforce these limitations were not disclosed, although OpenAI executives provided some insights during a forum hosted on X over the weekend.

Katrina Mulligan, OpenAI’s head of national security partnerships, explained that the contract permits the Pentagon to utilize its technology for “all lawful purposes, consistent with applicable law, operational requirements, and well-established safety and oversight protocols.” Mulligan clarified that “applicable law” refers to the legal framework in place at the time of the contract’s signing, emphasizing that the agreement applies exclusively to defense operations and will not extend to domestic law enforcement.

In a candid acknowledgment, OpenAI CEO Sam Altman admitted that the deal was “rushed” and expressed concerns about the negative optics surrounding it. “I have accepted that the U.S. military is going to do some amount of surveillance on foreigners, and I know foreign governments try to do it to us, but I still don’t like it,” Altman stated on X. He further articulated his belief that the decision ultimately lies within the democratic process, asserting that it is not solely up to him to dictate the terms of such contracts.

Despite the reassurances from OpenAI leadership, skepticism remains about the government’s adherence to legal constraints. Mulligan noted that U.S. law already mitigates some of the worst potential outcomes, while Altman remarked on the government’s commitment to following law and policy. However, historical mass surveillance scandals raise doubts about the government’s willingness to stay within legal confines when it finds them inconvenient. The controversy surrounding the disputed boat strikes in the Caribbean last year further illustrates how military actions can run afoul of international norms.

OpenAI executives contend that their deal differs from the one offered to Anthropic, with Altman suggesting that Anthropic sought greater operational control than OpenAI was willing to accept. He expressed apprehension about allowing a private company to define the ethical boundaries in such critical areas. Instead, OpenAI plans to deploy engineers to monitor the Pentagon’s use of its technology, with Mulligan asserting that technical controls often provide more reliability than contractual clauses.

However, an anonymous source revealed to **The Verge** that the effectiveness of these safeguards might be limited. Additionally, **Sarah Shoker**, a former geopolitics researcher at OpenAI, indicated that there is a lack of consensus in the defense industry regarding what constitutes adequate human oversight in autonomous weapons, a point of contention that may have distinguished Anthropic’s position from that of OpenAI.

While OpenAI executives actively defended their decision on social media, dissent is evident within the organization. Before the announcement about the Pentagon deal, 96 employees signed an open letter urging company leadership to reject the military’s demands for the use of their models in domestic mass surveillance and the development of weapons without human oversight. Many OpenAI staff members, including senior leaders, have also called for the Pentagon to rescind its supply chain risk designation for Anthropic.

Expressing personal discontent, OpenAI research scientist **Aidan McLaughlin** stated on X that he believed the deal was not worth it and mentioned an “overwhelming” internal discussion regarding the controversial decision. As the situation evolves, the implications of this contract between OpenAI and the Pentagon raise critical questions about the ethical boundaries of AI technology in military applications and the responsibility of tech companies in navigating these complex issues.

Written By: The AiPressa Staff


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.