The UK government has confirmed plans to amend the Crime and Policing Bill to bring AI chatbots, which currently fall outside the Online Safety Act, under duties to protect users from illegal content. Lauro Fava, a legal expert at Pinsent Masons, emphasized the significance of this development amid ongoing discussions about online safety regulation.
The Online Safety Act is a comprehensive piece of legislation that requires online service providers to remove illegal content from their platforms. The heaviest responsibilities fall on services classified as high-risk or high-reach, which must proactively monitor for and eliminate harmful content, particularly content that endangers children. The government estimates that around 100,000 online services, both domestic and international, fall within the Act's scope.
Currently, the Crime and Policing Bill is progressing through parliament, having completed its passage through the House of Commons last summer and now reaching the report stage in the House of Lords. The government’s intention to regulate AI chatbots reflects a broader strategy to address online safety concerns incrementally. Officials have stated they aim to leverage new legal powers to act swiftly, potentially within months, when evidence suggests a need for intervention.
In conjunction with the chatbot regulations, the government plans to initiate a consultation next month focused on children’s online wellbeing. This consultation will explore various risks that children face in digital spaces and may lead to additional legislative measures. Among the proposed interventions are restrictions on features like “infinite scrolling,” the introduction of minimum age limits for social media, and strategies to prevent the sharing of nude images involving minors. Furthermore, the government is considering imposing regulations on children’s access to AI chatbots and assessing the use of virtual private networks (VPNs) for age verification.
Fava noted the challenges in simply adding AI chatbots to the existing framework of the Online Safety Act. He explained that the Act’s provisions primarily target social media and search services, suggesting that a tailored regulatory approach might be necessary for AI technologies. “The complexity of the Online Safety Act means that it may not be straightforward to simply add AI chatbots to its scope,” he said, stressing the need for careful crafting of new rules to avoid unintended consequences.
He further remarked on the urgency of legislative action, citing the lengthy process the Online Safety Act underwent before reaching enactment. “There is undoubtedly a need for the legislative process to move faster,” Fava stated, but he cautioned that new laws must still be grounded in thorough research and consultation to achieve their intended outcomes effectively.
Fava suggested that the government should streamline requirements to enhance the effectiveness of online safety measures. He argued that legislation focusing on the objectives that platforms are expected to achieve, rather than detailing complex rules, would be less likely to fall behind technological advancements. This approach would allow for quicker enactment and grant platforms the flexibility to develop solutions tailored to their specific needs.
However, the proposals to regulate VPNs could spark significant debate. Fava warned that imposing age verification requirements on VPN services, which are fundamentally designed to safeguard user privacy, might compromise their core function. "Requiring them to verify the age of their users could undermine the purpose of the service," he cautioned.
As the UK government moves forward with these initiatives, the implications for both users and technology companies are profound. The evolving landscape of online safety regulation is poised to reshape how digital platforms operate and interact with their users, particularly children, in an increasingly complex online environment.



















































