
xAI’s Grok Approved for Use in Classified US Military Systems Amid Anthropic Dispute

xAI’s Grok gains Pentagon approval for classified military use, potentially replacing Anthropic’s Claude amid ongoing ethical tensions over AI deployment.

Elon Musk’s artificial intelligence company xAI has secured an agreement that permits its Grok model to be used in classified systems within the U.S. military, Axios reported on Monday, citing a defense official. The contract allows Grok to operate in systems that handle the military’s most sensitive intelligence analysis, weapons development, and battlefield operations—a realm previously dominated by Anthropic’s Claude model.

The Pentagon is currently embroiled in a dispute with Anthropic regarding embedded safeguards in its Claude model. Anthropic has declined a request from the Defense Department to make Claude accessible for “all lawful purposes,” explicitly resisting its application for mass surveillance of Americans and the creation of fully autonomous weapons. In contrast, xAI has accepted the “all lawful use” standard favored by the Defense Department. Sources indicate that Defense Secretary Pete Hegseth is set to meet with Anthropic CEO Dario Amodei at the Pentagon on Tuesday in what could be a tense discussion, with the department contemplating labeling Anthropic as a “supply chain risk” if it continues to resist the removal of these safeguards.

The transition from Claude to Grok in classified systems raises questions about the latter’s capability to fully replace its predecessor and the timeline for such a shift. Claude has been integrated into military operations through partnerships, including work with Palantir, whereas Grok, along with Google’s Gemini and OpenAI’s ChatGPT, is already deployed in unclassified military systems. Negotiations are ongoing with both Google and OpenAI regarding their potential expansion into classified environments, with reports suggesting that Google is nearing an agreement. A defense official noted that discussions are expected to continue, with future agreements likely if both companies comply with the “all lawful purposes” stipulation.

This evolving landscape reflects broader tensions in the AI space, particularly regarding the balance between technological advancement and ethical considerations. As the Pentagon seeks to leverage AI for national defense, the implications of these decisions resonate beyond military applications, inviting scrutiny over privacy and safety standards in AI deployment. The ongoing dialogue with tech companies like Anthropic, xAI, Google, and OpenAI underscores the critical nature of aligning AI capabilities with regulatory frameworks designed to protect citizen rights. With defense officials keen to integrate advanced AI models into sensitive operations, the stakes are high as companies navigate the complexities of compliance while pushing for innovation.

Written by: AiPressa Staff

The AiPressa Staff team brings you comprehensive coverage of the artificial intelligence industry, including breaking news, research developments, business trends, and policy updates. Our mission is to keep you informed about the rapidly evolving world of AI technology.


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.