AI Regulation

OpenAI Updates AI Guidelines for Users Under 18, Enhancing Safety and Transparency

OpenAI revises AI guidelines for users under 18, banning harmful content and enhancing safety measures ahead of potential legislation like California’s SB 243.

OpenAI has announced a revision of its guidelines regarding artificial intelligence (AI) interactions with users under the age of 18, a decision driven by increasing concerns over the well-being of young individuals engaging with AI chatbots. The updated guidelines aim to improve safety and transparency in the company’s offerings, particularly in light of tragic incidents involving teenagers and prolonged interactions with AI systems.

The revised Model Spec for OpenAI’s large language models (LLMs) introduces stricter rules for teen users, offering stronger protections than those that apply to adults. The new rules prohibit generating any sexual content involving minors, as well as encouraging self-harm, delusions, or mania. Immersive romantic roleplay and violent roleplay, even when non-graphic, are also banned for younger users.

With a focus on safety, the guidelines highlight important issues such as body image and disordered eating behaviors. They instruct the models to prioritize protective communication over user autonomy in instances where there is potential for harm. For example, the chatbot is expected to explain its inability to participate in certain roleplays or assist with extreme changes to appearance or risky behaviors.

The safety practices for teen users are built on four core principles: placing teen safety above other user considerations; promoting real-world support by directing teens to family, friends, and local professionals; treating adolescents with warmth and respect; and maintaining transparency about the chatbot’s capabilities.

In addition to the revised guidelines, OpenAI has upgraded its parental controls. The company now employs automated classifiers to evaluate text, image, and audio content in real time, aiming to detect and block material related to child sexual abuse, filter sensitive subjects, and identify signs of self-harm. Should a prompt indicate serious safety concerns, a trained team will review the flagged content for indications of “acute distress” and may notify a parent if necessary.

Experts suggest that OpenAI’s updated guidelines position it ahead of forthcoming legislation, such as California’s SB 243, which outlines similar prohibitions on chatbot communications about suicidal ideation, self-harm, or sexually explicit content. The bill also requires platforms to remind minors every three hours that they are interacting with a chatbot and to encourage them to take a break.

The implementation of these guidelines reflects a broader initiative within the tech industry to prioritize user safety, particularly among vulnerable populations. As AI systems become increasingly integrated into everyday life, the ongoing evolution of regulatory frameworks will likely continue to shape the development and deployment of these technologies.

Written By

The AiPressa Staff team brings you comprehensive coverage of the artificial intelligence industry, including breaking news, research developments, business trends, and policy updates. Our mission is to keep you informed about the rapidly evolving world of AI technology.


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.