
AI Regulation

OpenAI Updates AI Guidelines for Users Under 18, Enhancing Safety and Transparency

OpenAI revises AI guidelines for users under 18, banning harmful content and enhancing safety measures ahead of potential legislation like California’s SB 243.

OpenAI has announced a revision of its guidelines for artificial intelligence (AI) interactions with users under the age of 18, a decision driven by growing concern over the well-being of young people engaging with AI chatbots. The updated guidelines aim to improve safety and transparency in the company’s offerings, particularly following tragic incidents involving teenagers who had prolonged interactions with AI systems.

The revised Model Spec for OpenAI’s large language models (LLMs) introduces stricter rules for teen users than those that apply to adults. The new rules prohibit generating any sexual content involving minors, as well as encouraging self-harm, delusions, or mania. Immersive romantic roleplay and violent roleplay, even when non-graphic, are also banned for younger users.

The guidelines give particular attention to body image and disordered eating. They instruct the models to prioritize protective communication over user autonomy wherever there is potential for harm: for example, the chatbot is expected to explain why it cannot take part in certain roleplays or assist with extreme changes to appearance or risky behaviors.

The safety practices for teen users are built on four core principles: placing teen safety above other user considerations; promoting real-world support by directing teens to family, friends, and local professionals; treating adolescents with warmth and respect; and maintaining transparency about the chatbot’s capabilities.

In addition to the revised guidelines, OpenAI has upgraded its parental controls. The company now employs automated classifiers to evaluate text, image, and audio content in real time, aiming to detect and block material related to child sexual abuse, filter sensitive subjects, and identify signs of self-harm. Should a prompt indicate serious safety concerns, a trained team will review the flagged content for indications of “acute distress,” potentially notifying a parent if necessary.

Experts suggest that the updated guidelines position OpenAI ahead of forthcoming legislation such as California’s SB 243, which outlines similar prohibitions on chatbot communications about suicidal ideation, self-harm, and sexually explicit content. The bill would also require platforms to remind minors every three hours that they are talking to a chatbot and to encourage them to take a break.

The implementation of these guidelines reflects a broader initiative within the tech industry to prioritize user safety, particularly among vulnerable populations. As AI systems become increasingly integrated into everyday life, the ongoing evolution of regulatory frameworks will likely continue to shape the development and deployment of these technologies.

Written by AiPressa Staff

The AiPressa Staff team brings you comprehensive coverage of the artificial intelligence industry, including breaking news, research developments, business trends, and policy updates. Our mission is to keep you informed about the rapidly evolving world of AI technology.


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.