More than 30 employees from OpenAI and Google DeepMind filed a statement on Monday in support of Anthropic’s lawsuit against the U.S. Defense Department, which recently designated the AI firm as a supply-chain risk. This label, typically applied to foreign adversaries, was issued after Anthropic declined to permit the Department of Defense (DOD) to utilize its technology for mass surveillance of Americans or for the autonomous operation of weapons systems.
“The government’s designation of Anthropic as a supply chain risk was an improper and arbitrary use of power that has serious ramifications for our industry,” reads the brief, which includes signatures from notable figures such as Google DeepMind chief scientist Jeff Dean. The amicus brief appeared in court shortly after Anthropic filed two lawsuits against the DOD and other federal entities, with Wired first reporting the development.
The DOD’s assertion that it should be free to use artificial intelligence for any “lawful” purpose highlights a contentious issue in the sector. The agency’s rapid move to sign a contract with OpenAI immediately after designating Anthropic a supply-chain risk alarmed some OpenAI employees, who protested their employer’s involvement in the matter.
In their legal filing, the Google and OpenAI employees argued that if the Pentagon was dissatisfied with its contractual agreement with Anthropic, it could have opted to terminate the contract and seek services from another leading AI provider. The brief also emphasized the potential repercussions of the DOD’s actions, warning that punishing a prominent U.S. AI company could undermine the country’s competitiveness in the artificial intelligence sector and chill open discussions regarding the risks and benefits associated with current AI technologies.
The employees expressed their belief that Anthropic’s caution regarding the use of its technology is not only justified but critical for ensuring safety. They contend that, in the absence of public laws governing the deployment of AI, the contractual and technical limitations that developers impose on their systems serve as vital safeguards against potential catastrophic misuse.
Many signatories of the statement have previously participated in open letters advocating for the DOD to revoke its designation of Anthropic and calling on their respective companies to refuse any unilateral use of their AI systems in contexts like mass surveillance or lethal force.
As the legal battle unfolds, the implications of the DOD’s designation for the broader AI landscape remain significant. The controversy underscores the delicate balance between national security and ethical considerations in the deployment of advanced technologies. Stakeholders across the industry are now closely monitoring the situation, as it could set a precedent affecting future interactions between defense agencies and technology firms.