Jim Steyer, founder of Common Sense Media, warned that artificial intelligence (AI) could replicate and exacerbate the harms social media has inflicted on children. He emphasized that California’s new youth AI safety initiative could establish crucial standards that the UK and other regions overlook at their own risk. “AI, and the risks it poses to our kids, knows no borders,” he stated, noting that the same chatbots that have prompted dangerous behaviors in American children are also accessible to their British counterparts. Steyer stressed that children everywhere deserve a coordinated effort to ensure AI is safe and beneficial for kids and teens.
Reflecting on the social media era, Steyer warned against repeating its mistakes with AI. For over a decade, he noted, tech companies assured parents and educators that their platforms were safe, yet their unregulated growth has contributed to a mental health crisis among youth that remains unresolved. “If we choose to repeat the past with AI,” he cautioned, “we risk repeating that mistake on a larger scale.” Unlike social media, AI interacts with users in real time, offering advice and fostering emotional attachments that can blur young people’s understanding of reality and genuine companionship.
As these issues grow more pressing, California has emerged as a leader in youth AI safety legislation. Last year, the state enacted a groundbreaking age assurance law, and a ballot initiative dubbed the California Kids AI Safety Act is set for a 2026 vote. This initiative aims to build on existing protections and is backed by a significant majority of California voters, regardless of political affiliation. Steyer noted that when “80–90 percent of California voters demand stronger AI protections for our kids,” it is imperative to take action.
In response to this urgent call, Common Sense Media collaborated with OpenAI to support the Parents & Kids Safe AI Act, which aims to be the most comprehensive youth AI safety measure in U.S. history. “When California acts, the world will pay attention,” Steyer asserted, highlighting the potential for broader implications beyond the state.
The proposed initiative mandates that AI companies implement privacy-preserving age assurance technology and default to safe settings for users under 18. It would prohibit targeting children with advertising or monetizing their private data, and bar AI systems from generating harmful content related to self-harm or suicide. It also seeks to curb manipulative AI design that fosters emotional dependency.
Parental agency is a key feature of the proposed legislation. Under the new act, AI companies would be required to provide user-friendly parental controls and alerts for any signs of self-harm in children. This represents a crucial shift, as parents have often felt sidelined in the digital landscape. The initiative also introduces accountability measures, necessitating independent safety audits and annual risk assessments for AI companies, with the state attorney general empowered to impose financial penalties for noncompliance.
The significance of the Parents & Kids Safe AI Act extends beyond California itself. Youth AI safety is a pressing issue that resonates with voters across the political spectrum nationwide. By establishing strict regulations in California, the initiative aims to inspire similar legislative efforts globally. As tech companies generally avoid creating distinct systems for different markets, successful child protection measures in one area could influence more extensive safeguards elsewhere. This interconnectedness underscores the need for a united global stance on AI safety, particularly as young users are among its most enthusiastic adopters.
Steyer concluded that there exists a narrow window to implement necessary safeguards before AI technology advances too far. He urged leaders across the Atlantic to take shared responsibility in ensuring that technology empowers rather than endangers future generations. “It is on all of us to forge the digital future our children deserve,” he stated, emphasizing the urgency of establishing protective measures in the evolving landscape of AI.