
OpenAI Hires Anthropic’s Dylan Scand as Head of Preparedness with $555K Salary

OpenAI appoints Anthropic’s Dylan Scand as head of preparedness with a $555K salary to enhance AI safety amid rising industry concerns.

OpenAI has strengthened its safety initiatives by appointing Dylan Scand as its new head of preparedness, a position he previously held at rival Anthropic. Announced on X by CEO Sam Altman on Wednesday, Scand's hire is noteworthy not only for its strategic importance amid rising AI safety concerns but also for its substantial compensation package, which could reach $555,000 plus equity.

Altman expressed his enthusiasm for Scand’s appointment, stating, “Things are about to move quite fast and we will be working with extremely powerful models soon.” He emphasized Scand’s qualifications, declaring him “by far the best candidate I have met, anywhere, for this role.” In the competitive landscape of AI research, Scand’s transition highlights OpenAI’s commitment to addressing the complex safety challenges associated with advanced AI technologies.

In a post on X, Scand reflected on his tenure at Anthropic, noting his gratitude for the “extraordinary people” he worked alongside. He underscored the need for vigilance, stating, “AI is advancing rapidly. The potential benefits are great — and so are the risks of extreme and even irrecoverable harm.” This acknowledgment of the duality of AI’s impact aligns with the growing conversation surrounding ethical considerations in technology development.

Altman described the head of preparedness role as “stressful,” suggesting that immediate and high-stakes decisions will be part of the job. The position requires an individual capable of leading technical teams, making critical decisions amidst uncertainty, and aligning diverse stakeholders around safety measures. OpenAI’s job description highlighted the necessity for deep expertise in machine learning, AI safety, and related risk areas, emphasizing the complexity of the challenges ahead.

OpenAI’s safety approach has been under scrutiny, particularly as tensions have surfaced within the organization. Several former employees, including a past head of its safety team, have departed in recent years, raising questions about the company’s internal dynamics. This scrutiny has been compounded by legal challenges, with lawsuits alleging that OpenAI’s tools contributed to harmful behaviors.

In a recent report, OpenAI disclosed that some users of its ChatGPT platform exhibit signs of mental health distress, estimating that approximately 560,000 users each week show "possible signs of mental health emergencies." In response, OpenAI is engaging with mental health specialists to improve how its chatbot interacts with users displaying psychological distress or unhealthy dependence on the technology.

The appointment of Scand signals an aggressive push by OpenAI to fortify its safety protocols and respond to the evolving landscape of AI technologies. As the company prepares to work with increasingly powerful models, Scand’s leadership will be crucial in navigating the complexities associated with AI deployment and ensuring responsible practices. The implications of these developments are far-reaching, potentially influencing industry standards and informing regulatory discussions around AI safety.

As OpenAI continues to innovate, the focus on safety measures and ethical considerations will likely play a pivotal role in shaping both public perception and regulatory frameworks in the AI space. The future of AI hinges not only on its technological advancements but also on how effectively organizations like OpenAI address the inherent risks and responsibilities linked to their creations.

Written By

The AiPressa Staff team brings you comprehensive coverage of the artificial intelligence industry, including breaking news, research developments, business trends, and policy updates. Our mission is to keep you informed about the rapidly evolving world of AI technology.


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.