AI Regulation

OpenAI Updates AI Guidelines for Users Under 18, Enhancing Safety and Transparency

OpenAI revises AI guidelines for users under 18, banning harmful content and enhancing safety measures ahead of potential legislation like California’s SB 243.

OpenAI has announced a revision of its guidelines regarding artificial intelligence (AI) interactions with users under the age of 18, a decision driven by increasing concerns over the well-being of young individuals engaging with AI chatbots. The updated guidelines aim to improve safety and transparency in the company’s offerings, particularly in light of tragic incidents involving teenagers and prolonged interactions with AI systems.

The revised Model Spec for OpenAI’s large language models (LLMs) introduces stricter rules for teen users, going beyond the protections that apply to adults. The new rules prohibit the generation of any sexual content involving minors, as well as the encouragement of self-harm, delusions, or mania. Immersive romantic roleplay and violent roleplay, even when non-graphic, are also off-limits for younger users.

The guidelines give particular attention to body image and disordered eating, instructing the models to prioritize protective communication over user autonomy wherever there is potential for harm. For example, the chatbot is expected to explain that it cannot take part in certain roleplays or help with extreme changes to appearance or other risky behaviors.

The safety practices for teen users are built on four core principles: placing teen safety above other user considerations; promoting real-world support by directing teens to family, friends, and local professionals; treating adolescents with warmth and respect; and maintaining transparency about the chatbot’s capabilities.

In addition to the revised guidelines, OpenAI has upgraded its parental controls. The company now employs automated classifiers to evaluate text, image, and audio content in real time, aiming to detect and block child sexual abuse material, filter sensitive subjects, and identify signs of self-harm. Should a prompt indicate serious safety concerns, a trained team reviews the flagged content for indications of “acute distress” and may notify a parent if necessary.
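
For readers who want a concrete picture of how such a pipeline can be wired together, here is a minimal Python sketch of one possible escalation flow. The category names, thresholds, and route_prompt helper are hypothetical assumptions chosen for illustration; this shows the general pattern described above, not OpenAI’s actual system.

# Illustrative sketch only: a simplified escalation flow for a teen-safety
# moderation pipeline. Categories, thresholds, and routing outcomes are
# hypothetical and do not reflect OpenAI's real implementation.
from dataclasses import dataclass

@dataclass
class SafetyScores:
    csam: float        # child sexual abuse material
    self_harm: float   # indications of self-harm or acute distress
    sensitive: float   # other age-sensitive subject matter

BLOCK_THRESHOLD = 0.9   # assumed cutoff for outright blocking
REVIEW_THRESHOLD = 0.6  # assumed cutoff for escalating to human review

def route_prompt(scores: SafetyScores, is_minor: bool) -> str:
    """Decide what happens to a prompt after automated classification."""
    if scores.csam >= BLOCK_THRESHOLD:
        return "block"                  # never served, regardless of age
    if is_minor and scores.self_harm >= REVIEW_THRESHOLD:
        return "human_review"           # trained team checks for acute distress
    if is_minor and scores.sensitive >= REVIEW_THRESHOLD:
        return "filtered_response"      # respond, but steer away from the topic
    return "allow"

if __name__ == "__main__":
    example = SafetyScores(csam=0.0, self_harm=0.72, sensitive=0.1)
    print(route_prompt(example, is_minor=True))  # -> "human_review"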

Experts suggest that OpenAI’s updated guidelines position the company ahead of forthcoming legislation such as California’s SB 243, which outlines similar prohibitions on chatbot communications about suicidal ideation, self-harm, or sexually explicit content. The bill also requires platforms to remind minors every three hours that they are talking to a chatbot and should consider taking a break.

The implementation of these guidelines reflects a broader initiative within the tech industry to prioritize user safety, particularly among vulnerable populations. As AI systems become increasingly integrated into everyday life, the ongoing evolution of regulatory frameworks will likely continue to shape the development and deployment of these technologies.

Written By: AiPressa Staff

The AiPressa Staff team brings you comprehensive coverage of the artificial intelligence industry, including breaking news, research developments, business trends, and policy updates. Our mission is to keep you informed about the rapidly evolving world of AI technology.
