
Trump Orders U.S. Agencies to Halt Anthropic AI Use Amid National Security Clash

Trump orders U.S. agencies to cease using Anthropic’s AI, citing national security risks, after CEO Dario Amodei refuses to grant the military unrestricted access to the company’s technology.

WASHINGTON (AP) — The Trump administration has directed all U.S. agencies to cease using Anthropic’s artificial intelligence technology, marking a significant escalation in a public confrontation between the government and the company over AI safety protocols. This decision, announced on Friday, follows a series of social media criticisms from President Donald Trump, Defense Secretary Pete Hegseth, and other officials, who accused Anthropic of jeopardizing national security.

The conflict arose after CEO Dario Amodei declined to grant the military unrestricted access to the company’s AI tools by a specified deadline, citing concerns that such permissions could lead to violations of the safeguards built into its technology. Trump took to social media, declaring, “We don’t need it, we don’t want it, and will not do business with them again!”

Hegseth further labeled Anthropic a “supply chain risk,” a classification typically reserved for foreign adversaries, which could undermine the company’s relationships with other businesses. Anthropic had requested assurances from the Pentagon that its AI chatbot, Claude, would not be used for mass surveillance of American citizens or in fully autonomous weapons systems. While the Pentagon stated it would deploy the technology only in lawful ways, it insisted on full access without limitations.

This situation reflects broader concerns regarding the role of AI in national security, especially as the technology becomes increasingly sophisticated. The government’s push to assert control over Anthropic’s internal decision-making underscores the contentious atmosphere surrounding AI’s potential uses in lethal operations, sensitive information management, and governmental surveillance practices.

Trump criticized Anthropic’s attempts to negotiate with the military, asserting that most agencies must immediately stop using its AI, while allowing the Pentagon a six-month period to phase out the technology already integrated into military platforms. He admonished the company to “better get their act together, and be helpful,” warning of “major civil and criminal consequences” if it did not comply.

The standoff erupted into public view after months of private negotiations. In response to the government’s new contract language, which Anthropic argued would permit the disregard of critical safeguards, Amodei stated that his company “cannot in good conscience accede” to such demands. While Anthropic can absorb the loss of this contract, the implications of the government’s actions could reverberate more widely, especially given the company’s rapid ascent from a small research lab in San Francisco to a significant player in the AI sector.

The unfolding dispute has sent shockwaves through Silicon Valley, with many venture capitalists, prominent AI scientists, and employees from competing firms, such as OpenAI and Google, expressing support for Amodei’s stance. Such dynamics may favor Elon Musk’s rival chatbot, Grok, which the Pentagon has expressed interest in incorporating into classified military networks. Musk, aligning himself with the Trump administration, remarked on social media that “Anthropic hates Western Civilization.”

In contrast, Sam Altman, CEO of OpenAI, voiced support for Anthropic. In a CNBC interview, he described the Pentagon’s threats as “concerning” and said that OpenAI maintains similar ethical boundaries on how its AI may be applied. Amodei had previously worked at OpenAI before co-founding Anthropic in 2021.

Retired Air Force General Jack Shanahan, who previously led the Pentagon’s AI initiatives, argued that targeting Anthropic distracts from the broader issues at stake. He noted that Claude is already in widespread use across various government sectors and emphasized that the red lines drawn by Anthropic were “reasonable.” Shanahan concluded that the AI models, including those like Claude and Grok, are still not fully equipped for high-stakes national security applications, particularly concerning autonomous weaponry.

As the situation develops, the implications of the clash between the Trump administration and Anthropic may set a precedent for future interactions between technology companies and government entities, particularly in the sensitive realm of artificial intelligence.

Written by the AIPressa Staff.


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.