The Foundation for Individual Rights and Expression (FIRE) has filed a friend-of-the-court brief with the U.S. District Court for the Northern District of California, challenging the Pentagon’s designation of AI company Anthropic as a supply chain risk. The filing, submitted alongside the Electronic Frontier Foundation, the Cato Institute, Chamber of Progress, and the First Amendment Lawyers Association, argues that the Department of Defense’s actions infringe Anthropic’s First Amendment rights. The brief was prepared with legal counsel from Perkins Coie LLP, including Sopen B. Shah, Addison W. Bennett, and Sarah Grant.
The Pentagon’s designation arose from its belief that Anthropic is insufficiently “patriotic” and “fundamentally incompatible with American principles.” This characterization stems from Anthropic’s refusal to remove ethical guardrails from its artificial intelligence tools, which the Pentagon sought to utilize for developing fully autonomous weapons and conducting mass domestic surveillance. While Anthropic asserts its commitment to using AI to defend democratic values, it emphasizes the need for safeguards, stating that some applications of technology are “simply outside the bounds of what today’s technology can safely and reliably do.” Consequently, the company has established a Usage Policy, which includes provisions that prevent its AI model, Claude, from supporting autonomous weapons or mass surveillance.
The Pentagon’s stance then shifted: it demanded that Anthropic alter its technology to permit any supposedly “lawful purpose,” a category that includes the very uses the company had refused. Anthropic’s refusal to abandon its ethical guidelines led to the Pentagon’s retaliatory designation, which carries significant implications not only for Anthropic but also for its partners and customers. The designation threatens to impose a culture of coercion, in which dissent is punished and compliance with government ideology is enforced, stifling public discourse on AI technologies.
FIRE’s brief points to the Pentagon’s designation as a clear violation of Anthropic’s First Amendment rights, arguing that Claude is not merely a military tool but a dynamic AI system capable of engaging in complex dialogue. The brief highlights that Claude’s design reflects human choices and expressive capabilities, rather than static functionalities akin to traditional military hardware. The Pentagon’s requirement for Anthropic to remove safeguards from its AI system, thereby altering its communication and analysis outputs, is seen as an infringement on expressive freedoms. The brief asserts that for Anthropic to continue its contracts with the government without facing the supply chain risk label, it would have to compromise its fundamental principles, leading to compelled speech and a loss of autonomy.
The Pentagon’s actions, characterized by FIRE as a retaliatory measure against Anthropic’s stance, raise critical questions about government authority over private companies in the tech sector. The Secretary of Defense has openly acknowledged that the sanction is intended to coerce Anthropic into compliance, raising alarms about the implications for free expression in the tech industry. Officials have suggested that the intention behind sanctioning Anthropic is to create space for “more patriotic” businesses, further underscoring potential ideological discrimination.
The FIRE brief calls for judicial intervention, arguing that allowing the government to dictate the terms of Anthropic’s expressive outputs poses a significant threat not only to the company itself but also to broader innovation and expression in the technology sector. The repercussions of such government overreach could extend beyond Anthropic, chilling the rights of business leaders and innovators nationwide. A ruling in favor of Anthropic could reaffirm the importance of protecting free speech in the context of technological advancements, ensuring that debates surrounding AI governance encompass a diverse range of viewpoints, particularly concerning its ethical implications and potential societal impacts.
See also
DeepMind’s Demis Hassabis Reflects on AlphaGo’s ‘Move 37’ and Its Path to AGI
Germany’s National Team Prepares for World Cup Qualifiers with Disco Atmosphere
95% of AI Projects Fail in Companies According to MIT
AI in Food & Beverages Market to Surge from $11.08B to $263.80B by 2032
Satya Nadella Supports OpenAI’s $100B Revenue Goal, Highlights AI Funding Needs