
OpenAI Hires ‘Head of Preparedness’ to Tackle AI Risks to Mental Health and Cybersecurity

OpenAI is creating a "Head of Preparedness" role, announced by CEO Sam Altman, to mitigate AI risks to mental health and cybersecurity as its models' success rates on cybersecurity challenges jumped 49 percentage points.

OpenAI is seeking to fill a newly created executive position focused on addressing the risks associated with artificial intelligence (AI), particularly concerning mental health and computer security. In a post on X on December 27, CEO Sam Altman announced the establishment of the “Head of Preparedness” role, highlighting the increasing challenges that AI technologies present.

Altman noted that the company first began to see the impact of AI models on mental health in 2025, and emphasized that AI capabilities have advanced to the point where models can effectively identify critical vulnerabilities in computer systems. The new role is meant to strengthen OpenAI's approach to monitoring and mitigating risks that could lead to severe harm.

The job listing, as reported by TechCrunch, describes the Head of Preparedness as responsible for developing a framework for understanding and preparing for emerging AI capabilities. The position was created to address potential catastrophic risks, ranging from immediate cybersecurity threats such as phishing to more speculative scenarios such as nuclear attacks.

However, the report also noted turnover in OpenAI's safety leadership. The company's original preparedness team, launched in 2023, saw its first Head of Preparedness, Aleksander Madry, reassigned to focus on AI reasoning, while other executives in safety-related roles have departed or moved into positions not directly tied to safety.

The announcement of the new role follows OpenAI's recent commitment to adding new safeguards to its AI models in light of rapid advancements across the industry. The company noted that while these advances offer significant benefits for cybersecurity, they also carry dual-use risks: the same AI tools can be put to both beneficial and malicious ends.

To illustrate the pace of improvement, OpenAI disclosed that its models' success rates on capture-the-flag cybersecurity challenges rose from 27% with GPT-5 in August to 76% with GPT-5.1-Codex-Max in November, a 49-percentage-point jump.

Looking ahead, OpenAI has indicated that it expects forthcoming AI models to continue this upward trajectory. The company is preparing by evaluating each new model for potential high-level cybersecurity capabilities through its Preparedness Framework.

According to a report by PYMNTS, AI has increasingly become both a tool and a target within the cybersecurity realm. Their report, “From Spark to Strategy: How Product Leaders Are Using GenAI to Gain a Competitive Edge,” found that approximately 77% of chief product officers utilizing generative AI for cybersecurity still believe that human oversight is essential.

In response to societal concerns, OpenAI also rolled out new parental controls for its products earlier this year and announced plans for an automated age-prediction system. These measures followed a lawsuit from the parents of a teenager who died by suicide, alleging that ChatGPT had encouraged his actions.

As OpenAI continues to navigate the complexities of AI integration within society, the establishment of the Head of Preparedness role signifies a proactive approach to mitigating risks associated with these powerful technologies. The company’s commitment to enhancing safety measures and its ongoing evaluations reflect a broader industry focus on responsible AI development and deployment.
