UK Government Announces AI Chatbot Rules Amid Online Safety Act Complexity

UK government plans to amend the Crime and Policing Bill to regulate AI chatbots, aiming for swift user protection against illegal content within months.

The UK government has confirmed plans to amend the Crime and Policing Bill, aiming to require AI chatbots, which are currently not covered by the Online Safety Act, to protect users from illegal content. Lauro Fava, a legal expert from Pinsent Masons, emphasized the significance of this development amid ongoing discussions about online safety regulations.

The Online Safety Act is a comprehensive piece of legislation that requires online service providers to remove illegal content from their platforms. The heaviest responsibilities fall on services classified as high-risk or high-reach, which must proactively monitor for and eliminate harmful content, particularly content that endangers children. The government estimates that around 100,000 online services, both domestic and international, fall under the Act’s purview.

The Crime and Policing Bill is currently progressing through parliament: it completed its passage through the House of Commons last summer and has now reached the report stage in the House of Lords. The government’s intention to regulate AI chatbots reflects a broader strategy of addressing online safety concerns incrementally. Officials have said they aim to use new legal powers to act swiftly, potentially within months, where evidence suggests intervention is needed.

In conjunction with the chatbot regulations, the government plans to initiate a consultation next month focused on children’s online wellbeing. This consultation will explore various risks that children face in digital spaces and may lead to additional legislative measures. Among the proposed interventions are restrictions on features like “infinite scrolling,” the introduction of minimum age limits for social media, and strategies to prevent the sharing of nude images involving minors. Furthermore, the government is considering imposing regulations on children’s access to AI chatbots and assessing the use of virtual private networks (VPNs) for age verification.

Fava noted the challenges in simply adding AI chatbots to the existing framework of the Online Safety Act. He explained that the Act’s provisions primarily target social media and search services, suggesting that a tailored regulatory approach might be necessary for AI technologies. “The complexity of the Online Safety Act means that it may not be straightforward to simply add AI chatbots to its scope,” he said, stressing the need for careful crafting of new rules to avoid unintended consequences.

He further remarked on the urgency of legislative action, citing the lengthy process the Online Safety Act underwent before reaching enactment. “There is undoubtedly a need for the legislative process to move faster,” Fava stated, but he cautioned that new laws must still be grounded in thorough research and consultation to achieve their intended outcomes effectively.

Fava suggested that the government should streamline requirements to enhance the effectiveness of online safety measures. He argued that legislation focusing on the objectives that platforms are expected to achieve, rather than detailing complex rules, would be less likely to fall behind technological advancements. This approach would allow for quicker enactment and grant platforms the flexibility to develop solutions tailored to their specific needs.

However, the proposals to regulate VPNs could spark significant debate. Fava warned that imposing age verification requirements on VPN services—which are fundamentally designed to safeguard user privacy—might compromise their essential function. “Requiring them to verify the age of their users could undermine the purpose of the service,” he cautioned.

As the UK government moves forward with these initiatives, the implications for both users and technology companies are profound. The evolving landscape of online safety regulation is poised to reshape how digital platforms operate and interact with their users, particularly children, in an increasingly complex online environment.

Written by the AiPressa Staff

The AiPressa Staff team brings you comprehensive coverage of the artificial intelligence industry, including breaking news, research developments, business trends, and policy updates. Our mission is to keep you informed about the rapidly evolving world of AI technology.


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.