Last week marked a significant turning point for Anthropic, the artificial intelligence company that has drawn a line in the sand over its ethical principles. The firm refused to allow the U.S. government to use its products for surveillance of the American public or to direct autonomous weapons without human oversight. In response, the Department of Defense (DOD) terminated a $200 million contract with the company. The fallout included strong criticism from former President Donald Trump, who labeled Anthropic as “leftwing nut jobs” on Truth Social and urged federal agencies to cease using its products. Further escalating tensions, Defense Secretary Pete Hegseth designated Anthropic a “Supply-Chain Risk to National Security,” while its main competitor, OpenAI, quickly secured its own contract with the Pentagon.
Despite these challenges, early indications suggest that Anthropic’s principled stance could yield unexpected benefits. Following its confrontation with the DOD, the company’s AI model, Claude, gained significant traction, reportedly becoming one of the 10 most-downloaded free apps in the United States on Apple’s charts. The day after Hegseth’s announcement, Claude reached the number one spot, a position it has held as of this writing, with downloads exceeding 1 million per day. Anthropic’s chief product officer confirmed that the company has broken its own daily sign-up record every day over the past week, in every country where the app is available.
Downloads of Claude are not the only metric showing success; users are actively abandoning OpenAI’s flagship app, ChatGPT. On February 28, uninstalls of ChatGPT surged by 295 percent as details of OpenAI’s contract with the Pentagon emerged. One-star reviews for ChatGPT spiked nearly 800 percent, while five-star reviews halved. The shift points to growing dissatisfaction with OpenAI amid concerns over its partnership with the government.
Anthropic’s principled stance has also garnered respect within the tech community. Letters of support for the firm are circulating among employees at competing companies, with one letter reportedly gathering 850 signatures by Monday. Many of these workers are demanding their employers align with Anthropic’s ethical boundaries, with some even threatening to resign if their companies do not comply.
The company has found allies beyond Silicon Valley as well. Before the dispute with the DOD, former Republican Representative Denver Riggleman, now leading a cybersecurity firm, was examining various AI partners. Following Anthropic’s ethical stand, Riggleman decided to collaborate exclusively with the company for future projects, stating that Anthropic “had its nonnegotiables,” and “we have ours.”
Riggleman believes that Hegseth’s designation of Anthropic as a supply-chain risk is likely to be overturned in court. Historically, the U.S. government has applied this label to foreign companies, such as Huawei, and never to an American firm. He noted that the label appears to be a retaliatory move against a company that declined contract terms, which he believes lacks a solid legal foundation. “To say it rests on shaky legal ground,” Riggleman remarked, “would be generous.”
Reflecting on the government’s role in regulating technology, Riggleman expressed concern over the current state of oversight, stating, “These days, the government is no longer creating those safeguards—it’s destroying them.” He emphasized that society may not yet fully grasp what it means to have private firms controlling vast amounts of information about citizens.
The DOD has defended its contract with Anthropic, claiming it included adequate safeguards that limited AI usage to “all lawful purposes.” However, Anthropic contends that this language could be reinterpreted through executive orders or changes in statute, raising concerns that they would be complicit in actions leading to potential harm. Anthropic CEO Dario Amodei articulated this concern, stating, “We don’t want to sell something that could get our own people killed, or that could get innocent people killed.”
In contrast, OpenAI has asserted that its subsequent contract with the DOD is more secure than Anthropic’s. However, it too retains the “all lawful purposes” language, making its prohibitions reliant on the DOD’s adherence to legal norms. OpenAI CEO Sam Altman conceded that the deal was “definitely rushed” and that the optics surrounding it were not favorable. On Monday, OpenAI announced it had added restrictions regarding surveillance to its contract, although critics remain skeptical about their enforceability.
The developments of the past week evoke a broader reflection on the ethical responsibilities of technology companies. Anthropic, by standing firm against government pressures, may have secured something far more valuable than a contract: a foundational trust that could reshape its standing in an industry increasingly wary of compromising ethical principles.