A federal judge has issued a temporary order in favor of artificial intelligence firm Anthropic, blocking the Pentagon from designating the company a supply chain risk. The order, issued by U.S. District Judge Rita Lin on Thursday, also blocks enforcement of a directive from former President Donald Trump requiring all federal agencies to stop using Anthropic’s chatbot, Claude.
Judge Lin characterized the punitive measures against Anthropic as appearing “arbitrary and capricious,” emphasizing that such actions could significantly impede the company’s operations. She criticized Defense Secretary Pete Hegseth for employing a rare military authority typically reserved for foreign adversaries, stating, “Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government.”
The ruling followed a 90-minute hearing in San Francisco federal court on Tuesday, where Lin expressed concerns about the extraordinary actions taken by the Trump administration against Anthropic after contract negotiations soured. The company had pushed back against the military’s use of its AI technology for fully autonomous weapons or for surveillance of American citizens.
In her ruling, Judge Lin noted that Anthropic had sought an emergency order to lift what it described as an unjust stigma stemming from an “unlawful campaign of retaliation” — the situation that prompted the company to sue the Trump administration earlier this month. The Pentagon, for its part, argued that it should retain the flexibility to deploy Claude however it deemed lawful.
Lin clarified that her ruling was not aimed at the broader public policy debate regarding AI technology in military applications but focused on the government’s punitive actions toward Anthropic. She suggested that if the Pentagon genuinely believed in the integrity of its operational chain, it could simply stop using Claude rather than imposing measures that appear to be punitive.
Anthropic is also pursuing a separate, narrower case currently pending before a federal appeals court in Washington, D.C. Lin stayed her order for one week to allow time for further proceedings; the order does not require the Pentagon to use Anthropic’s products or prevent a transition to other AI providers.
This ruling comes at a crucial time for the AI industry, where companies are increasingly navigating complex relationships with government entities while addressing ethical concerns surrounding the deployment of advanced technologies. As the debate over AI in military settings continues, this case may set significant precedents regarding how the government interacts with domestic tech firms and the implications of punitive measures against those that dissent.


















































