
OpenAI Hires ‘Head of Preparedness’ to Tackle AI Risks to Mental Health and Cybersecurity

OpenAI is hiring for a new "Head of Preparedness" role, announced by CEO Sam Altman, to mitigate AI risks to mental health and cybersecurity amid a 49-percentage-point jump in model performance on security challenges.

OpenAI is seeking to fill a newly created executive position focused on addressing the risks associated with artificial intelligence (AI), particularly concerning mental health and computer security. In a post on X on December 27, CEO Sam Altman announced the establishment of the “Head of Preparedness” role, highlighting the increasing challenges that AI technologies present.

Altman noted that the company began to recognize the impact of AI models on mental health in 2025. He emphasized that AI capabilities have advanced to the point where models can identify critical vulnerabilities in computer security systems. The new role is intended to strengthen OpenAI's approach to monitoring and mitigating risks that could lead to severe harm.

The job listing, as reported by TechCrunch, describes the responsibilities of the Head of Preparedness as overseeing the development of a framework for understanding and preparing for emerging AI capabilities. The position was created to address potential catastrophic risks, which include immediate cybersecurity threats like phishing, as well as more theoretical concerns such as nuclear attacks.

However, the report also indicated that OpenAI has experienced some turnover in its safety leadership. The company’s initial preparedness team, launched in 2023, has seen its first Head of Preparedness, Aleksander Madry, reassigned to focus on AI reasoning, while other executives in safety-related roles have departed or shifted to different positions not directly tied to safety.

The announcement of this new role follows OpenAI’s recent commitment to enhancing its AI models with new safeguards in light of rapid advancements within the industry. The company pointed out that while these developments offer significant benefits for cybersecurity, they also present dual-use risks, where AI tools could be employed for both benevolent and malicious purposes.

To illustrate the swift improvement in AI capabilities, OpenAI disclosed that its models' success rates in capture-the-flag security challenges rose from 27% with GPT-5 in August to 76% with GPT-5.1-Codex-Max in November.

Looking ahead, OpenAI has indicated that it expects forthcoming AI models to continue this upward trajectory. The company is preparing by evaluating each new model for potential high-level cybersecurity capabilities through its Preparedness Framework.

According to a report by PYMNTS, AI has increasingly become both a tool and a target within the cybersecurity realm. Their report, “From Spark to Strategy: How Product Leaders Are Using GenAI to Gain a Competitive Edge,” found that approximately 77% of chief product officers utilizing generative AI for cybersecurity still believe that human oversight is essential.

In response to societal concerns, OpenAI also implemented new parental controls for its products earlier this year and announced plans for an automated age-prediction system. These measures followed a lawsuit by the parents of a teenager who died by suicide; the suit alleges that ChatGPT encouraged his actions.

As OpenAI continues to navigate the complexities of AI integration within society, the establishment of the Head of Preparedness role signifies a proactive approach to mitigating risks associated with these powerful technologies. The company’s commitment to enhancing safety measures and its ongoing evaluations reflect a broader industry focus on responsible AI development and deployment.

Written By AiPressa Staff

The AiPressa Staff team brings you comprehensive coverage of the artificial intelligence industry, including breaking news, research developments, business trends, and policy updates. Our mission is to keep you informed about the rapidly evolving world of AI technology.


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.