
Trump Orders U.S. Agencies to Halt Anthropic AI Use Amid National Security Clash

Trump orders U.S. agencies to cease using Anthropic’s AI, citing national security risks, after CEO Dario Amodei refuses military access to tech safeguards.

WASHINGTON (AP) — The Trump administration has directed all U.S. agencies to cease using Anthropic’s artificial intelligence technology, marking a significant escalation in a public confrontation between the government and the company over AI safety protocols. This decision, announced on Friday, follows a series of social media criticisms from President Donald Trump, Defense Secretary Pete Hegseth, and other officials, who accused Anthropic of jeopardizing national security.

The conflict arose after CEO Dario Amodei declined to grant the military unrestricted access to the company’s AI tools by a specified deadline, citing concerns that such permissions could lead to violations of the safeguards built into its technology. Trump took to social media, declaring, “We don’t need it, we don’t want it, and will not do business with them again!”

Hegseth further labeled Anthropic a “supply chain risk,” a classification typically reserved for foreign adversaries, which could undermine the company’s relationships with other businesses. Anthropic had requested assurances from the Pentagon that its AI chatbot, Claude, would not be used for mass surveillance of American citizens or in fully autonomous weapons systems. While the Pentagon said it would deploy the technology only in lawful ways, it insisted on full access without limitations.

This situation reflects broader concerns regarding the role of AI in national security, especially as the technology becomes increasingly sophisticated. The government’s push to assert control over Anthropic’s internal decision-making underscores the contentious atmosphere surrounding AI’s potential uses in lethal operations, sensitive information management, and governmental surveillance practices.

Trump criticized Anthropic’s attempts to negotiate with the military, asserting that most agencies must immediately stop using its AI, while allowing the Pentagon a six-month period to phase out the technology already integrated into military platforms. He admonished the company to “better get their act together, and be helpful,” warning of “major civil and criminal consequences” if it did not comply.

The standoff intensified after months of private negotiations spilled into public view. In response to the government’s new contract language, which Anthropic argued would permit critical safeguards to be disregarded, Amodei stated that his company “cannot in good conscience accede” to such demands. While Anthropic can absorb the loss of this contract, the implications of the government’s actions could reverberate more widely, especially given the company’s rapid ascent from a small research lab in San Francisco to a significant player in the AI sector.

The unfolding dispute has sent shockwaves through Silicon Valley, with many venture capitalists, prominent AI scientists, and employees from competing firms, such as OpenAI and Google, expressing support for Amodei’s stance. Such dynamics may favor Elon Musk’s rival chatbot, Grok, which the Pentagon has expressed interest in incorporating into classified military networks. Musk, aligning himself with the Trump administration, remarked on social media that “Anthropic hates Western Civilization.”

By contrast, Sam Altman, CEO of OpenAI, voiced support for Anthropic, highlighting the need for ethical considerations in such operations. In a CNBC interview, he described the Pentagon’s threats as “concerning” and affirmed that OpenAI maintains similar ethical boundaries around AI applications. Amodei previously worked at OpenAI before co-founding Anthropic in 2021.

Retired Air Force General Jack Shanahan, who previously led the Pentagon’s AI initiatives, argued that targeting Anthropic distracts from the broader issues at stake. He noted that Claude is already in widespread use across various government sectors and emphasized that the red lines drawn by Anthropic were “reasonable.” Shanahan concluded that the AI models, including those like Claude and Grok, are still not fully equipped for high-stakes national security applications, particularly concerning autonomous weaponry.

As the situation develops, the implications of the clash between the Trump administration and Anthropic may set a precedent for future interactions between technology companies and government entities, particularly in the sensitive realm of artificial intelligence.

Written By

The AiPressa Staff team brings you comprehensive coverage of the artificial intelligence industry, including breaking news, research developments, business trends, and policy updates. Our mission is to keep you informed about the rapidly evolving world of AI technology.


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.