Outside OpenAI’s headquarters on Monday, a small group of protesters wrote their messages on the sidewalk in colorful chalk. The statements, including “Stand for liberty” and “Please no legal mass surveillance,” reflected concerns over a contract OpenAI recently signed with the Department of Defense (DOD). The deal, which comes after the Pentagon’s fallout with Anthropic, will put OpenAI’s technology to use in classified military settings, raising alarms among legal experts about potential governmental overreach.
Niki Dupuis, an AI startup founder and one of the protesters, expressed her desire for OpenAI to take a strong ethical stance. “I would just really like to see OpenAI do the right thing and stand up for something, anything,” Dupuis said. In an internal memo obtained by AIPressa, OpenAI CEO Sam Altman mentioned the need for “red lines” to prevent the Pentagon from deploying OpenAI’s products for mass surveillance or autonomous lethal weapons—limits that Anthropic had attempted to set, which led to its exclusion from military contracts.
However, experts scrutinizing the publicly available sections of the contract suggest these boundaries are vague. Many indicated that the Pentagon might still leverage OpenAI’s technology—including models that power ChatGPT—for mass surveillance of U.S. citizens. Reports allege that Anthropic’s AI products have already been utilized for military targeting, though the company resisted their use in fully autonomous weapons.
A spokesperson for OpenAI reiterated that the DOD has agreed not to employ its AI systems for domestic surveillance, yet the specifics of the contract remain largely undisclosed. Legal analysts, including Charlie Bullock from the Institute for Law & AI, highlighted the precarious position the public finds itself in: “The public is in an awkward position where we have to choose between trusting OpenAI or not,” he said. In contrast, Brad Carson, former undersecretary of the Army, criticized OpenAI’s approach, positing that the company appears “okay with using ChatGPT for what ordinary people think of as mass surveillance.”
The past week has been tumultuous for OpenAI, marked by several announcements regarding its engagement with the DOD. On Friday, shortly after news broke of Anthropic’s severed ties with the Pentagon, Altman revealed the agreement, emphasizing that it includes prohibitions on domestic mass surveillance and insists on accountability for the use of force. Skepticism arose, however, as the language used seemed to allow for potential loopholes that could enable surveillance under certain circumstances.
OpenAI later attempted to clarify its stance through a blog post, asserting that its red lines against domestic surveillance and autonomous weaponry were firm. The company framed the agreement as an effort to “de-escalate things” between the Pentagon and other AI firms, with hopes that similar terms would be offered to Anthropic. However, legal experts noted that the portions of the contract OpenAI shared merely require compliance with existing laws governing U.S. intelligence activities, including the Foreign Intelligence Surveillance Act (FISA), which already permits extensive data collection.
While OpenAI maintains that it has implemented protective measures against misuse of its technology, concerns linger regarding the potential for interpretation that could enable surveillance deemed “incidental.” Carson described the modifications to the contract as “vaporous things that seem good”—essentially, cosmetic changes lacking substantive guarantees.
OpenAI announced its intention to deploy a technical “safety stack” to monitor how its models are used in collaboration with the DOD, which it claims will help verify compliance with the terms of the agreement. Nevertheless, the defining question remains: what recourse does OpenAI have if it believes the Pentagon has violated the agreement? OpenAI stated that it could terminate the contract if violations occur, although the specifics of that process have not been disclosed.
The situation has underscored the precarious nature of government contracts for tech firms, particularly in the context of national security. Altman expressed concern over the DOD’s blacklisting of Anthropic, noting it sets a “scary precedent.” The ongoing discourse emphasizes a critical takeaway for AI companies aspiring to partner with the DOD: comply with government demands or jeopardize their position. OpenAI has made its choice, but the implications for civil liberties and ethical AI use continue to provoke public debate.
See also
OpenAI’s Rogue AI Safeguards: Decoding the 2025 Safety Revolution
US AI Developments in 2025 Set Stage for 2026 Compliance Challenges and Strategies
Trump Drafts Executive Order to Block State AI Regulations, Centralizing Authority Under Federal Control
California Court Rules AI Misuse Heightens Lawyer’s Responsibilities in Noland Case
Policymakers Urged to Establish Comprehensive Regulations for AI in Mental Health