
Anthropic Gains Trust Amid $200M DOD Contract Loss, Surpassing 1M Daily App Downloads

Anthropic, after losing a $200M DOD contract, sees a surge in success with over 1M daily downloads of its Claude app, emphasizing ethical AI principles.

Last week marked a significant turning point for Anthropic, the artificial intelligence company that has drawn a line in the sand over its ethical principles. The firm refused to allow the U.S. government to use its products for surveillance of the American public or to direct autonomous weapons without human oversight. In response, the Department of Defense (DOD) terminated a $200 million contract with the company. The fallout included strong criticism from former President Donald Trump, who labeled Anthropic as “leftwing nut jobs” on Truth Social and urged federal agencies to cease using its products. Further escalating tensions, Defense Secretary Pete Hegseth designated Anthropic a “Supply-Chain Risk to National Security,” while its main competitor, OpenAI, quickly secured its own contract with the Pentagon.

Despite these challenges, early indications suggest that Anthropic’s principled stance could yield unexpected benefits. Following its confrontation with the DOD, the company’s AI model, Claude, gained significant traction, reportedly becoming one of the top 10 most-downloaded free apps in the United States on Apple’s charts. The day after Hegseth’s announcement, Claude reached the number one spot, a position it has held as of this writing, with downloads exceeding 1 million daily. Anthropic’s chief product officer confirmed that the company has broken its own daily sign-up record every day for the past week, across every country where the app is available.

Downloads of Claude are not the only metric showing success; users are actively abandoning OpenAI’s flagship app, ChatGPT. On February 28, as details of OpenAI’s contract with the Pentagon emerged, uninstalls of ChatGPT surged by 295 percent. One-star reviews for ChatGPT also spiked nearly 800 percent, while five-star reviews halved. The shift illustrates growing dissatisfaction with OpenAI amid concerns over its partnership with the government.

Anthropic’s principled stance has also garnered respect within the tech community. Letters of support for the firm are circulating among employees at competing companies, with one letter reportedly gathering 850 signatures by Monday. Many of these workers are demanding their employers align with Anthropic’s ethical boundaries, with some even threatening to resign if their companies do not comply.

The company has found allies beyond Silicon Valley as well. Before the dispute with the DOD, former Republican Representative Denver Riggleman, now leading a cybersecurity firm, was examining various AI partners. Following Anthropic’s ethical stand, Riggleman decided to collaborate exclusively with the company for future projects, stating that Anthropic “had its nonnegotiables,” and “we have ours.”

Riggleman believes that Hegseth’s designation of Anthropic as a supply-chain risk is likely to be overturned in court. Historically, the U.S. government has applied this label to foreign companies, such as Huawei, and never to an American firm. He noted that the label appears to be a retaliatory move against a company that declined contract terms, which he believes lacks a solid legal foundation. “To say it rests on shaky legal ground,” Riggleman remarked, “would be generous.”

Reflecting on the government’s role in regulating technology, Riggleman expressed concern over the current state of oversight, stating, “These days, the government is no longer creating those safeguards—it’s destroying them.” He emphasized that society may not yet fully grasp what it means to have private firms controlling vast amounts of information about citizens.

The DOD has defended its contract with Anthropic, claiming it included adequate safeguards that limited AI usage to “all lawful purposes.” Anthropic, however, contends that this language could be reinterpreted through executive orders or changes in statute, raising the concern that the company would be made complicit in potential harm. Anthropic CEO Dario Amodei articulated this concern, stating, “We don’t want to sell something that could get our own people killed, or that could get innocent people killed.”

In contrast, OpenAI has asserted that its subsequent contract with the DOD is more secure than Anthropic’s. However, it too retains the “all lawful purposes” language, making its prohibitions reliant on the DOD’s adherence to legal norms. OpenAI CEO Sam Altman conceded that the deal was “definitely rushed” and that the optics surrounding it were not favorable. On Monday, OpenAI announced it had added restrictions regarding surveillance to its contract, although critics remain skeptical about their enforceability.

The developments of the past week evoke a broader reflection on the ethical responsibilities of technology companies. Anthropic, by standing firm against government pressures, may have secured something far more valuable than a contract: a foundational trust that could reshape its standing in an industry increasingly wary of compromising ethical principles.

Written By
AiPressa Staff

The AiPressa Staff team brings you comprehensive coverage of the artificial intelligence industry, including breaking news, research developments, business trends, and policy updates. Our mission is to keep you informed about the rapidly evolving world of AI technology.


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.