Britain may implement an Australian-style ban on social media use for children under 16 as soon as this year, coupled with plans to enhance oversight of artificial intelligence chatbots that currently evade existing safety regulations. These proposed measures are part of a broader initiative by Prime Minister Keir Starmer’s government to address the digital threats facing young people more effectively.
Last month, the government opened a consultation on a potential social media ban for children under 16, with officials examining how to amend current legislation so that changes could take effect shortly after the consultation concludes. Australia pioneered the blocking of social media access for minors under 16, and Spain, Greece, and Slovenia have since moved toward similar restrictions.
Pressure on technology firms has escalated recently, particularly following reports that Elon Musk’s AI chatbot Grok produced non-consensual sexualized images. Such incidents have intensified calls for stronger regulation of emerging technologies. Britain’s 2023 Online Safety Act is considered one of the most stringent digital safety frameworks globally; however, it does not currently cover private, one-on-one interactions with AI chatbots unless the content is shared with other users, a regulatory gap that Technology Minister Liz Kendall intends to close.
Kendall expressed concerns over the potential risks posed by AI chatbots, particularly their impact on children and young adults. “I am concerned about these AI chatbots… as is the prime minister, about the impact that’s having on children and young people,” she told Times Radio, highlighting the lack of safety protocols in systems that children engage with on a personal level.
The government is expected to present its proposals before June. In recent comments to British media, Kendall said tech companies would be held accountable for complying with UK law. Beyond the social media restrictions and AI reforms, ministers plan to consult on automatically preserving a child’s online data when they die, making it easier to secure digital evidence, a policy long championed by bereaved families. Other measures under consideration include limiting “stranger pairing” on gaming consoles and restricting the exchange of nude images.
These proposed measures are anticipated to be introduced as amendments to existing crime and child-protection legislation currently under parliamentary review. While aimed at bolstering child safety, these initiatives could also raise broader concerns, particularly regarding adult privacy rights and online service access. Similar regulations have previously incited debates over free speech and regulatory jurisdiction, especially in relation to the United States.
In response to the challenges of implementing mandatory age verification systems, some major pornography websites have already chosen to block access for UK users. Nevertheless, such restrictions are often circumvented using readily available virtual private networks (VPNs). The government is now contemplating measures to restrict VPN access for minors, according to reports.
While many parents and online safety advocates back a social media ban for children, some child-protection groups voice apprehensions. Kendall acknowledged critics’ concerns that a strict ban might push harmful activities into less regulated areas of the internet or create a stark “cliff edge” effect when teenagers reach 16. She noted that a precise legal definition of what constitutes social media is necessary before any enforcement of a ban can occur.
The ramifications of these proposed changes could extend beyond child safety, potentially shaping the future of online engagement for all users. The government’s push reflects a growing recognition that robust frameworks are needed to protect younger generations in an increasingly complex digital world.