
OpenAI and Google Employees Back Anthropic in DOD Lawsuit Over Supply-Chain Risk Designation

More than 30 OpenAI and Google DeepMind employees have backed Anthropic’s lawsuit against the DOD, warning that the supply-chain-risk designation threatens the U.S. AI industry and open debate over AI safety.

More than 30 employees from OpenAI and Google DeepMind filed a statement on Monday in support of Anthropic’s lawsuit against the U.S. Defense Department, which recently designated the AI firm as a supply-chain risk. This label, typically applied to foreign adversaries, was issued after Anthropic declined to permit the Department of Defense (DOD) to utilize its technology for mass surveillance of Americans or for the autonomous operation of weapons systems.

“The government’s designation of Anthropic as a supply chain risk was an improper and arbitrary use of power that has serious ramifications for our industry,” reads the brief, whose signatories include Google DeepMind chief scientist Jeff Dean. The amicus brief was filed shortly after Anthropic brought two lawsuits against the DOD and other federal entities, with Wired first reporting the development.

The DOD’s assertion that it should be free to use artificial intelligence for any “lawful” purpose highlights a contentious issue in the sector. The agency’s rapid move to sign a contract with OpenAI immediately following the designation of Anthropic as a supply-chain risk raised alarms among some employees at ChatGPT’s parent company, who protested against their employer’s involvement in the matter.

In their legal filing, the Google and OpenAI employees argued that if the Pentagon was dissatisfied with its contractual agreement with Anthropic, it could have opted to terminate the contract and seek services from another leading AI provider. The brief also emphasized the potential repercussions of the DOD’s actions, warning that punishing a prominent U.S. AI company could undermine the country’s competitiveness in the artificial intelligence sector and chill open discussions regarding the risks and benefits associated with current AI technologies.

The employees expressed their belief that Anthropic’s caution regarding the use of its technology is not only justified but critical for ensuring safety. They contend that, in the absence of public laws governing the deployment of AI, the contractual and technical limitations that developers impose on their systems serve as vital safeguards against potential catastrophic misuse.

Many signatories of the statement have previously participated in open letters advocating for the DOD to revoke its designation of Anthropic and calling on their respective companies to refuse any unilateral use of their AI systems in contexts like mass surveillance or lethal force.

As the legal battle unfolds, the implications of the DOD’s designation for the broader AI landscape remain significant. The controversy underscores the delicate balance between national security and ethical considerations in the deployment of advanced technologies. Stakeholders across the industry are now closely monitoring the situation, as it could set a precedent affecting future interactions between defense agencies and technology firms.

Written by AiPressa Staff



© 2025 AIPressa · Part of Buzzora Media · All rights reserved.