
AI Regulation

OpenAI Signs Controversial Deal with Pentagon, Raises Concerns Over Surveillance Use

OpenAI’s new contract with the Pentagon raises alarms over potential surveillance use of its technology, igniting protests and calls for ethical accountability.

Outside OpenAI’s headquarters, a small group of protesters gathered on Monday, using colorful chalk to convey their messages on the sidewalk. Their statements included “Stand for liberty” and “Please no legal mass surveillance,” reflecting concerns over a recent contract OpenAI signed with the Department of Defense (DOD). The deal, which comes after the Pentagon’s fallout with Anthropic, will see OpenAI’s technology utilized in classified military settings, raising alarms among legal experts about potential governmental overreach.

Niki Dupuis, an AI startup founder and one of the protesters, expressed her desire for OpenAI to take a strong ethical stance. “I would just really like to see OpenAI do the right thing and stand up for something, anything,” Dupuis said. In an internal memo obtained by AIPressa, OpenAI CEO Sam Altman mentioned the need for “red lines” to prevent the Pentagon from deploying OpenAI’s products for mass surveillance or autonomous lethal weapons—limits that Anthropic had attempted to set, which led to its exclusion from military contracts.

However, experts scrutinizing the publicly available sections of the contract suggest these boundaries are vague. Many indicated that the Pentagon might still leverage OpenAI’s technology—including models that power ChatGPT—for mass surveillance of U.S. citizens. Reports allege that Anthropic’s AI products have already been utilized for military targeting, though the company resisted their use in fully autonomous weapons.

A spokesperson for OpenAI reiterated that the DOD has agreed not to employ its AI systems for domestic surveillance, yet the specifics of the contract remain largely undisclosed. Legal analysts, including Charlie Bullock from the Institute for Law & AI, highlighted the precarious position the public finds itself in: “The public is in an awkward position where we have to choose between trusting OpenAI or not,” he said. In contrast, Brad Carson, former undersecretary of the Army, criticized OpenAI’s approach, positing that the company appears “okay with using ChatGPT for what ordinary people think of as mass surveillance.”

The past week has been tumultuous for OpenAI, marked by a series of announcements about its engagement with the DOD. On Friday, shortly after news broke of Anthropic’s severed ties with the Pentagon, Altman revealed the agreement, emphasizing that it prohibits domestic mass surveillance and requires accountability for the use of force. Skepticism arose, however, because the language appeared to leave loopholes that could permit surveillance under certain circumstances.

OpenAI later attempted to clarify its stance in a blog post, asserting that its red lines against domestic surveillance and autonomous weaponry were firm. The agreement was framed as an effort to “de-escalate things” between the Pentagon and other AI firms, with hopes that similar terms would be offered to Anthropic. However, legal experts noted that the contract segments OpenAI shared merely require compliance with existing laws governing U.S. intelligence activities, including the Foreign Intelligence Surveillance Act (FISA), which permits extensive data collection.

While OpenAI maintains that it has implemented safeguards against misuse of its technology, concerns linger that the contract could be interpreted to permit surveillance characterized as “incidental.” Carson dismissed the modifications to the contract as “vaporous things that seem good”: cosmetic changes lacking substantive guarantees.

OpenAI announced that it will deploy a technical “safety stack” to monitor how the DOD uses its models, which it claims will help verify compliance with the terms of the agreement. The defining question remains, however: what recourse does OpenAI have if it believes the Pentagon has violated the agreement? OpenAI said it could terminate the contract in the event of violations, although it has not disclosed the specifics of that process.

The situation has underscored the precarious nature of government contracts for tech firms, particularly in the context of national security. Altman expressed concern over the DOD’s blacklisting of Anthropic, noting it sets a “scary precedent.” The ongoing discourse emphasizes a critical takeaway for AI companies aspiring to partner with the DOD: comply with government demands or jeopardize their position. OpenAI has made its choice, but the implications for civil liberties and ethical AI use continue to provoke public debate.

Written By

The AIPressa Staff


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.