OpenAI CEO Sam Altman announced plans to amend the company’s agreement with the Department of Defense to explicitly prohibit the use of its AI systems for mass surveillance against U.S. citizens. In a memo published on X, Altman outlined the company’s commitment to ensuring that its technology will not be employed for domestic surveillance, in line with U.S. laws such as the Fourth Amendment and the National Security Act of 1947.
The revised agreement will include clear language stating, “Consistent with applicable laws, including the Fourth Amendment…the AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals.” This language aims to prevent deliberate tracking or monitoring of U.S. citizens, including through the acquisition of personal data.
In the memo, Altman emphasized that the Department of Defense confirmed its services would not be utilized by intelligence agencies, such as the NSA, without modifications to the agreement. He further stated that he would prefer to face legal consequences rather than comply with any unconstitutional directives.
Altman also acknowledged that the company had rushed to finalize the agreement, admitting that the complexities surrounding the issues required clearer communication. He noted that OpenAI was attempting to “de-escalate things” but recognized that the swift announcement appeared opportunistic. This decision came in the wake of President Trump’s directive, which required all U.S. government agencies to cease using services from Claude and other products of its competitor, Anthropic.
The Defense Department had been applying pressure on Anthropic to remove restrictions on its AI’s use, suggesting it could be employed for all “lawful” purposes, including mass surveillance and the development of fully autonomous weapons. Anthropic resisted these demands, asserting that “no amount of intimidation or punishment” would alter its stance on these issues. This confrontation led to Trump’s order against Anthropic, which was also facing designation as a “supply chain risk,” a classification typically reserved for companies perceived to have connections with the Chinese government.
In discussions with U.S. officials, Altman expressed that Anthropic should not receive the supply chain risk designation and hoped the Department of Defense would extend a similar agreement to them as OpenAI had secured. However, during an AMA session on X, he clarified that he was unaware of the specifics of Anthropic’s contractual terms and how they might differ from OpenAI’s.
Following OpenAI’s announcement, Anthropic’s Claude app rose to the top of the App Store’s list of free apps, surpassing both ChatGPT and Google Gemini. The company seized on this momentum by introducing a memory import tool designed to ease the transition from other chatbots. In contrast, uninstalls of ChatGPT surged by 295 percent day-over-day, according to data from Sensor Tower.
The evolving landscape surrounding AI and its applications continues to raise questions about ethical use, particularly regarding surveillance and military applications. As OpenAI and Anthropic navigate the complexities of government contracts and public perception, the discourse around responsible AI deployment will likely grow increasingly crucial in the tech industry and beyond.