California Proposes AI Safety Measure for Youth, Backed by OpenAI and Common Sense Media

California proposes a ballot measure to enhance AI protections for minors, backed by OpenAI and Common Sense Media, mandating age assurance and data safeguards.

A proposed statewide ballot measure in California aims to enhance protections for children and teenagers who use artificial intelligence (AI), with advocates hoping to improve upon last year’s unsuccessful legislation. The initiative was introduced on Friday by Common Sense Media and has garnered public support from OpenAI. While it has not yet been introduced as a bill in the state Legislature, it seeks to implement requirements for age assurance, data protections, parental controls, and independent safety audits for AI products aimed at minors.

Common Sense Media, a nonprofit organization that evaluates media and technology for families and educators, along with representatives from OpenAI, framed the measure as a response to rising parental concern and to lessons from the previous legislative effort, Assembly Bill 1064, which Governor Gavin Newsom vetoed last October.

“We truly believe in and support the best interests of kids and families,” said Jim Steyer, founder and CEO of Common Sense Media, during a press call announcing the initiative. “And we need to put these critical protections, these seat belts in place right now. Our kids deserve nothing less.”

Last year, Common Sense Media sponsored AB 1064, authored by Assemblymember Rebecca Bauer-Kahan, which passed the Legislature but was vetoed by Newsom. His veto letter raised concerns that the bill could effectively ban chatbot use among minors rather than enable safe interactions with AI tools. “The types of interactions that this bill seeks to address are abhorrent,” Newsom wrote, expressing a commitment to finding a balanced approach to child protection that does not prohibit technology use.

Leaders from Common Sense Media characterized the new proposal as a recalibrated version of AB 1064 that retains robust safety measures while avoiding restrictions that could limit youth access to AI. “This new measure really has the same intent as our original measure,” said Robbie Torney, the organization’s senior director of AI programs. “It articulates a really comprehensive set of safety standards that together we believe accomplish the same goal.”

Bruce Reed, head of AI at Common Sense Media, outlined critical requirements aimed at imposing new standards on AI systems used by minors, particularly tools marketed to schools as educational aids. Central to the proposal is the need for age assurance, requiring AI companies to determine whether users are under 18 and to apply child protections in cases where age cannot be definitively confirmed.

This standard could significantly influence procurement decisions and acceptable-use policies for school districts, especially concerning platforms used both in and out of classroom settings. The proposal also seeks to prohibit child-targeted advertising and restrict the sale of minors’ data without parental consent, extending protections to all users under 18—currently, the California Consumer Privacy Act applies only to individuals under 16. Such provisions could affect educational technology vendors that rely on personalized engagement analytics, especially for middle and high school students.

Beyond privacy enhancements, Reed emphasized the necessity of safety requirements aimed at promoting student well-being. The proposal mandates safeguards to prevent AI systems from generating or promoting content related to self-harm, eating disorders, violence, or sexually explicit material. It also seeks to prevent emotional manipulation of minors by limiting the creation of emotional dependencies, such as simulating romantic relationships or misleading users into believing they are interacting with a human.

The ballot initiative would also require AI companies to offer robust parental controls, enabling parents to monitor and limit AI usage and to receive alerts if systems detect signs of self-harm. Reed noted features allowing parents to set time limits and disable memory, highlighting that “turning off memory makes every chatbot exchange a fresh start,” which could mitigate the risk of dependency.

Moreover, the proposal includes provisions for independent, third-party audits of child safety risks, with results to be reported to the California attorney general, along with annual risk assessments. According to Reed, continuous testing of chatbot safety is essential due to the evolving nature of AI systems. If enacted, this law could create ongoing compliance requirements specifically tied to child safety for educational technology vendors.

Chris Lehane, chief global affairs officer at OpenAI, indicated that the company’s support reflects a shared commitment to child safety and may serve as a model for other states and potentially for federal legislation. “AI knows a lot, parents know best,” he stated, articulating the principle behind OpenAI’s involvement. “Our aspiration is that this will not just be in California.”

Moving forward, speakers on the press call described a dual strategy: pursuing legislative action in Sacramento while keeping the option of a ballot initiative open if necessary. The proposed regulatory framework aims to shape how school districts evaluate AI tools and how vendors design products aimed at youth, balancing student safety with AI readiness across California classrooms.

“It is not a political partisan issue,” Steyer asserted. “All parents out there, all voters out there, pretty much everybody knows we need really serious protections for kids and teens and families as this goes forward.”

Written by David Park

At AIPressa, my work focuses on discovering how artificial intelligence is transforming the way we learn and teach. I've covered everything from adaptive learning platforms to the debate over ethical AI use in classrooms and universities. My approach: balancing enthusiasm for educational innovation with legitimate concerns about equity and access. When I'm not writing about EdTech, I'm probably exploring new AI tools for educators or reflecting on how technology can truly democratize knowledge without leaving anyone behind.

© 2025 AIPressa · Part of Buzzora Media · All rights reserved.