OpenAI CEO Sam Altman acknowledged that the company rushed its recent agreement with the U.S. Department of War (DOW), describing the move as “opportunistic and sloppy.” In an internal memo shared on X, Altman indicated that OpenAI is now amending the contract to supply the military with AI technology, a decision that has not alleviated concerns among its civilian user base.
“[W]e shouldn’t have rushed to get this out on Friday,” Altman wrote in a post on Monday. “The issues are super complex, and demand clear communication. We were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy.”
The partnership, announced late last week, followed President Donald Trump’s directive for federal agencies to cease using competitor Anthropic. Anthropic’s CEO, Dario Amodei, stated that the severance occurred because the DOW insisted on the removal of safeguards against the use of AI for mass domestic surveillance and fully autonomous weapons. Instead, the DOW sought to utilize Anthropic’s AI for “any lawful use.”
OpenAI’s swift agreement with the DOW faced immediate backlash from users concerned about the implications for privacy and ethics. Although OpenAI claimed that its contract includes more safeguards than Anthropic’s original agreement, critics noted that it still permitted mass surveillance and AI-controlled weaponry as long as such uses comply with existing laws.
In response to the backlash, OpenAI stated that it is working with the DOW to introduce new language in the contract that directly addresses concerns regarding domestic surveillance use of its technology. “Throughout our discussions, the Department [of War] made clear it shares our commitment to ensuring our tools will not be used for domestic surveillance,” OpenAI remarked in an update issued Monday.
Nevertheless, the amendments OpenAI has proposed continue to hinge on legality as the primary constraint against mass surveillance, leaving open the potential for such practices if legislation changes. Additionally, the revisions did not tackle the issue of autonomous weapons.
“Consistent with applicable laws… the AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals,” the newly introduced sections state. However, social media users expressed skepticism, arguing that prohibiting only “intentional” surveillance creates significant loopholes. Political researcher Tyson Brody remarked, “Hard not to read as admitting to an AI dragnet… so Americans will be swept up in this data, but the government can claim ‘incidental collection’ and thus legal.”
Another user, @Andy_Bloch, commented, “‘Not intentionally used’ isn’t a real safeguard in an autonomous AI system.” Concerns persist that AI systems might inadvertently engage in surveillance due to their training data and subsequent applications.
Altman has previously indicated that OpenAI would restrict the use of its AI tools only as the law requires, rather than on ethical grounds. During a Q&A shortly after the DOW deal was made public, he expressed a preference for deferring to government directives over making independent ethical judgments. This stance has drawn criticism, and Altman addressed it again in his memo by emphasizing a commitment to “democratic processes.”
“It should be the government making the key decisions about society,” Altman wrote. “We want to have a voice, and a seat at the table where we can share our expertise and to fight for principles of liberty. But we are clear on how the system works…” He added that the DOW’s intelligence agencies, including the National Security Agency (NSA), would not use OpenAI’s technology without a contract amendment — though, given Altman’s stated deference to legal authority, it seems unlikely that OpenAI would reject a lawful request for such an amendment on ethical grounds.
The controversy surrounding the DOW agreement has led many OpenAI customers to cancel their ChatGPT subscriptions, with reports indicating a 295 percent increase in uninstalls following the announcement. Meanwhile, Anthropic’s AI chatbot, Claude, has surpassed ChatGPT as the most downloaded free app in the U.S. Apple App Store.
The implications of OpenAI’s military partnership highlight ongoing tensions between technological advancement and ethical responsibility, as the discourse surrounding AI governance continues to evolve in the face of societal scrutiny.