
OpenAI and Google Employees Back Anthropic in DOD Lawsuit Over Supply-Chain Risk Designation

More than 30 OpenAI and Google DeepMind employees have backed Anthropic's lawsuit against the DOD, citing concerns over national security, AI ethics, and the potential misuse of the technology.

More than 30 employees from OpenAI and Google DeepMind filed a statement on Monday in support of Anthropic’s lawsuit against the U.S. Defense Department, which recently designated the AI firm as a supply-chain risk. This label, typically applied to foreign adversaries, was issued after Anthropic declined to permit the Department of Defense (DOD) to utilize its technology for mass surveillance of Americans or for the autonomous operation of weapons systems.

“The government’s designation of Anthropic as a supply chain risk was an improper and arbitrary use of power that has serious ramifications for our industry,” reads the brief, which includes signatures from notable figures such as Google DeepMind chief scientist Jeff Dean. The amicus brief appeared in court shortly after Anthropic filed two lawsuits against the DOD and other federal entities, with Wired first reporting the development.

The DOD’s assertion that it should be free to use artificial intelligence for any “lawful” purpose highlights a contentious issue in the sector. The agency’s rapid move to sign a contract with OpenAI immediately after designating Anthropic a supply-chain risk raised alarms among some OpenAI employees, who protested their employer’s involvement in the matter.

In their legal filing, the Google DeepMind and OpenAI employees argued that if the Pentagon were dissatisfied with its contract with Anthropic, it could simply have terminated the agreement and sought services from another leading AI provider. The brief also warned of the broader repercussions of the DOD’s actions: punishing a prominent U.S. AI company could undermine the country’s competitiveness in artificial intelligence and chill open discussion of the risks and benefits of current AI technologies.

The employees expressed their belief that Anthropic’s caution regarding the use of its technology is not only justified but critical for ensuring safety. They contend that, in the absence of public laws governing the deployment of AI, the contractual and technical limitations that developers impose on their systems serve as vital safeguards against potential catastrophic misuse.

Many signatories of the statement have previously participated in open letters advocating for the DOD to revoke its designation of Anthropic and calling on their respective companies to refuse any unilateral use of their AI systems in contexts like mass surveillance or lethal force.

As the legal battle unfolds, the implications of the DOD’s designation for the broader AI landscape remain significant. The controversy underscores the delicate balance between national security and ethical considerations in the deployment of advanced technologies. Stakeholders across the industry are now closely monitoring the situation, as it could set a precedent affecting future interactions between defense agencies and technology firms.

Written By AIPressa Staff


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.