
Anthropic Gains Trust Amid $200M DOD Contract Loss, Surpassing 1M Daily App Downloads

Anthropic, after losing a $200M DOD contract, sees a surge in success with over 1M daily downloads of its Claude app, emphasizing ethical AI principles.

Last week marked a significant turning point for Anthropic, the artificial intelligence company that has drawn a line in the sand over its ethical principles. The firm refused to allow the U.S. government to use its products for surveillance of the American public or to direct autonomous weapons without human oversight. In response, the Department of Defense (DOD) terminated a $200 million contract with the company. The fallout included strong criticism from former President Donald Trump, who labeled Anthropic as “leftwing nut jobs” on Truth Social and urged federal agencies to cease using its products. Further escalating tensions, Defense Secretary Pete Hegseth designated Anthropic a “Supply-Chain Risk to National Security,” while its main competitor, OpenAI, quickly secured its own contract with the Pentagon.

Despite these challenges, early indications suggest that Anthropic’s principled stance could yield unexpected benefits. Following its confrontation with the DOD, the company’s AI model, Claude, gained significant traction, reportedly becoming one of the top 10 most-downloaded free apps in the United States according to Apple’s charts. The day after Hegseth’s announcement, Claude reached the number one spot, maintaining that position as of this writing, with downloads exceeding 1 million daily. Anthropic’s chief product officer confirmed that the company has surpassed its own sign-up records every day since the previous week across all available countries.

Downloads of Claude are not the only metric showing success; users are actively abandoning OpenAI’s flagship app, ChatGPT. On February 28, uninstalls of ChatGPT surged by 295 percent as details of OpenAI’s contract with the Pentagon emerged. Additionally, one-star reviews for ChatGPT spiked nearly 800 percent, while five-star reviews halved. This shift illustrates growing dissatisfaction with OpenAI amid concerns over its partnership with the government.

Anthropic’s principled stance has also garnered respect within the tech community. Letters of support for the firm are circulating among employees at competing companies, with one letter reportedly gathering 850 signatures by Monday. Many of these workers are demanding their employers align with Anthropic’s ethical boundaries, with some even threatening to resign if their companies do not comply.

The company has found allies beyond Silicon Valley as well. Before the dispute with the DOD, former Republican Representative Denver Riggleman, now leading a cybersecurity firm, was examining various AI partners. Following Anthropic’s ethical stand, Riggleman decided to collaborate exclusively with the company for future projects, stating that Anthropic “had its nonnegotiables,” and “we have ours.”

Riggleman believes that Hegseth’s designation of Anthropic as a supply-chain risk is likely to be overturned in court. Historically, the U.S. government has applied this label to foreign companies, such as Huawei, and never to an American firm. He noted that the label appears to be a retaliatory move against a company that declined contract terms, which he believes lacks a solid legal foundation. “To say it rests on shaky legal ground,” Riggleman remarked, “would be generous.”

Reflecting on the government’s role in regulating technology, Riggleman expressed concern over the current state of oversight, stating, “These days, the government is no longer creating those safeguards—it’s destroying them.” He emphasized that society may not yet fully grasp what it means to have private firms controlling vast amounts of information about citizens.

The DOD has defended its contract with Anthropic, claiming it included adequate safeguards that limited AI usage to “all lawful purposes.” However, Anthropic contends that this language could be reinterpreted through executive orders or changes in statute, raising concerns that the company would be complicit in actions leading to potential harm. Anthropic CEO Dario Amodei articulated this concern, stating, “We don’t want to sell something that could get our own people killed, or that could get innocent people killed.”

In contrast, OpenAI has asserted that its subsequent contract with the DOD is more secure than Anthropic’s. However, it too retains the “all lawful purposes” language, making its prohibitions reliant on the DOD’s adherence to legal norms. OpenAI CEO Sam Altman conceded that the deal was “definitely rushed” and that the optics surrounding it were not favorable. On Monday, OpenAI announced it had added restrictions regarding surveillance to its contract, although critics remain skeptical about their enforceability.

The developments of the past week evoke a broader reflection on the ethical responsibilities of technology companies. Anthropic, by standing firm against government pressures, may have secured something far more valuable than a contract: a foundational trust that could reshape its standing in an industry increasingly wary of compromising ethical principles.

Written By AiPressa Staff

The AiPressa Staff team brings you comprehensive coverage of the artificial intelligence industry, including breaking news, research developments, business trends, and policy updates. Our mission is to keep you informed about the rapidly evolving world of AI technology.


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.