
OpenAI Hardware Chief Resigns Over Pentagon AI Deployment Concerns

OpenAI hardware chief Caitlin Kalinowski resigns over ethical concerns regarding the company’s swift deployment of its AI models on the Pentagon’s classified networks.

Caitlin Kalinowski, the hardware lead at OpenAI, has resigned from her position after voicing concerns regarding the company’s recent partnership with the U.S. Department of War. In a post shared on X, Kalinowski indicated that her departure followed OpenAI’s decision to deploy its AI models on the Pentagon’s classified cloud networks, a move she believes was made too hastily without adequate internal or public discourse on its broader implications.

Kalinowski emphasized that while artificial intelligence can be pivotal in enhancing national security, certain ethical and governance boundaries require more extensive consideration. She pointed to critical issues such as the surveillance of Americans without judicial oversight and the potential development of lethal autonomous systems lacking clear human authorization. “These are too important for deals or announcements to be rushed,” she stressed in a follow-up post.

In the wake of Kalinowski’s resignation, OpenAI defended its approach to the partnership, asserting that it includes additional safeguards designed to limit the application of its technology. The company reiterated its commitment to prohibiting domestic surveillance and the deployment of autonomous weapons. In a statement to Reuters, OpenAI acknowledged that its involvement in this sector could provoke strong opinions and indicated its intention to continue engaging with various stakeholders, including employees, government representatives, and civil society groups, as the conversation develops.

OpenAI had revealed its partnership with the Pentagon just over a week ago, following unsuccessful negotiations between the Department of War and Anthropic, another AI firm, which had sought assurances against the use of its technology for mass surveillance or fully autonomous weaponry. OpenAI CEO Sam Altman emphasized in a post that the contract includes protections mirroring those that were contentious in Anthropic’s discussions.

Altman underscored that the agreement reflects two of OpenAI’s core safety principles: a ban on domestic mass surveillance and the necessity for human accountability in the use of force, particularly in autonomous systems. He noted that the Department of War has codified these principles within its laws and policies, affirming their incorporation into the contract.

“We also will build technical safeguards to ensure our models behave as they should, which the Department of War also wanted,” Altman wrote on X. He said OpenAI intends to deploy fail-safe mechanisms to enhance model safety and will operate exclusively on the Pentagon’s classified cloud networks. Altman also urged the Department of War to extend similar terms to all AI companies, arguing that such conditions should become industry standards, and expressed a preference for resolving conflicts through pragmatic agreements rather than through legal or governmental avenues.

During an all-hands meeting, Altman reportedly informed employees that the government would allow OpenAI to develop its own “safety stack” to prevent misuse of its technology. He assured that if an AI model declines to perform a specific task, the government would not compel the company to override that decision.

The evolving dialogue around AI and national security reflects broader societal concerns about the ethical implications of deploying advanced technologies in military applications. As companies like OpenAI navigate these complex waters, the balance between innovation and responsibility will be a focal point of scrutiny from stakeholders across the board.

Written by the AiPressa Staff

The AiPressa Staff team brings you comprehensive coverage of the artificial intelligence industry, including breaking news, research developments, business trends, and policy updates. Our mission is to keep you informed about the rapidly evolving world of AI technology.


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.