A proposed statewide ballot measure in California aims to strengthen protections for children and teenagers who use artificial intelligence (AI), with advocates hoping to improve on last year’s unsuccessful legislation. The initiative was unveiled on Friday by Common Sense Media and has drawn public support from OpenAI. Though it has not yet been introduced as a bill in the state Legislature, it would require age assurance, data protections, parental controls, and independent safety audits for AI products aimed at minors.
Common Sense Media, a nonprofit that evaluates media and technology for families and educators, joined representatives from OpenAI on a press call to frame the measure as a response to rising parental concern and to lessons learned from a previous legislative effort, Assembly Bill 1064, which Governor Gavin Newsom vetoed last October.
“We truly believe in and support the best interests of kids and families,” said Jim Steyer, founder and CEO of Common Sense Media, during the press call. “And we need to put these critical protections, these seat belts in place right now. Our kids deserve nothing less.”
Last year, Common Sense Media sponsored AB 1064, authored by Assemblymember Rebecca Bauer-Kahan, which passed the Legislature but was vetoed by Newsom. His veto letter warned that the bill could effectively ban chatbot use among minors rather than enable safe interactions with AI tools. “The types of interactions that this bill seeks to address are abhorrent,” Newsom wrote, pledging to find a balanced approach that protects children without prohibiting their use of the technology.
Leaders from Common Sense Media characterized the new proposal as a recalibrated version of AB 1064 that retains robust safety measures while avoiding restrictions that could limit youth access to AI. “This new measure really has the same intent as our original measure,” said Robbie Torney, the organization’s senior director of AI programs. “It articulates a really comprehensive set of safety standards that together we believe accomplish the same goal.”
Bruce Reed, head of AI at Common Sense Media, outlined the measure’s core requirements for AI systems used by minors, particularly tools marketed to schools as educational aids. Central to the proposal is age assurance: AI companies would have to determine whether users are under 18 and apply child protections whenever a user’s age cannot be definitively confirmed.
That standard could significantly influence procurement decisions and acceptable-use policies for school districts, especially for platforms used both in and out of the classroom. The proposal would also prohibit child-targeted advertising and restrict the sale of minors’ data without parental consent, extending protections to all users under 18; the California Consumer Privacy Act currently covers only individuals under 16. Such provisions could affect educational technology vendors that rely on personalized engagement analytics, especially for middle and high school students.
Beyond privacy, Reed emphasized safety requirements aimed at protecting student well-being. The proposal mandates safeguards to prevent AI systems from generating or promoting content related to self-harm, eating disorders, violence, or sexually explicit material. It also seeks to shield minors from emotional manipulation by limiting the creation of emotional dependencies, such as simulating romantic relationships or misleading users into believing they are talking to a human.
The ballot initiative would also require AI companies to offer robust parental controls, enabling parents to monitor and limit AI usage and to receive alerts if systems detect signs of self-harm. Reed pointed to controls that let parents set time limits and disable memory, noting that “turning off memory makes every chatbot exchange a fresh start,” which could reduce the risk of dependency.
Moreover, the proposal calls for independent, third-party audits of child safety risks, with results reported to the California attorney general, along with annual risk assessments. According to Reed, continuous safety testing is essential because AI systems keep evolving. If enacted, the law could create ongoing child-safety compliance obligations for educational technology vendors.
Chris Lehane, chief global affairs officer at OpenAI, indicated that the company’s support reflects a shared commitment to child safety and may serve as a model for other states and potentially for federal legislation. “AI knows a lot, parents know best,” he stated, articulating the principle behind OpenAI’s involvement. “Our aspiration is that this will not just be in California.”
Moving forward, speakers on the press call described a dual strategy: pursuing legislation in Sacramento while keeping the ballot initiative in reserve if necessary. The proposed framework aims to shape how school districts evaluate AI tools and how vendors design products for youth, balancing student safety with AI readiness across California classrooms.
“It is not a political partisan issue,” Steyer asserted. “All parents out there, all voters out there, pretty much everybody knows we need really serious protections for kids and teens and families as this goes forward.”