AI Regulation

UK Government Announces AI Chatbot Rules Amid Online Safety Act Complexity

The UK government plans to amend the Crime and Policing Bill to regulate AI chatbots, aiming to protect users from illegal content within months.

The UK government has confirmed plans to amend the Crime and Policing Bill so that AI chatbots, which currently fall outside the scope of the Online Safety Act, will be required to protect users from illegal content. Lauro Fava, a legal expert at Pinsent Masons, emphasized the significance of the development amid ongoing discussions about online safety regulation.

The Online Safety Act is a comprehensive piece of legislation that requires online service providers to remove illegal content from their platforms. The heaviest responsibilities fall on services classified as high-risk or high-reach, which must proactively monitor for and remove harmful content, particularly content that endangers children. The government estimates that around 100,000 online services, both domestic and international, fall within the Act’s scope.

The Crime and Policing Bill is currently progressing through parliament: it completed its passage through the House of Commons last summer and has now reached the report stage in the House of Lords. The government’s intention to regulate AI chatbots reflects a broader strategy of addressing online safety concerns incrementally. Officials have said they aim to use new legal powers to act swiftly, potentially within months, when evidence suggests intervention is needed.

In conjunction with the chatbot regulations, the government plans to launch a consultation next month on children’s online wellbeing. The consultation will explore the risks children face in digital spaces and may lead to further legislative measures. Proposed interventions include restrictions on features such as “infinite scrolling,” minimum age limits for social media, and measures to prevent the sharing of nude images involving minors. The government is also considering rules on children’s access to AI chatbots and whether virtual private network (VPN) services should face age verification requirements.

Fava noted the challenges in simply adding AI chatbots to the existing framework of the Online Safety Act. He explained that the Act’s provisions primarily target social media and search services, suggesting that a tailored regulatory approach might be necessary for AI technologies. “The complexity of the Online Safety Act means that it may not be straightforward to simply add AI chatbots to its scope,” he said, stressing the need for careful crafting of new rules to avoid unintended consequences.

He further remarked on the urgency of legislative action, citing the lengthy process the Online Safety Act underwent before reaching enactment. “There is undoubtedly a need for the legislative process to move faster,” Fava stated, but he cautioned that new laws must still be grounded in thorough research and consultation to achieve their intended outcomes effectively.

Fava suggested that the government should streamline requirements to enhance the effectiveness of online safety measures. He argued that legislation focusing on the objectives that platforms are expected to achieve, rather than detailing complex rules, would be less likely to fall behind technological advancements. This approach would allow for quicker enactment and grant platforms the flexibility to develop solutions tailored to their specific needs.

However, the proposals to regulate VPNs could spark significant debate. Fava warned that imposing age verification requirements on VPN services—which are fundamentally designed to safeguard user privacy—might compromise their essential function. “Requiring them to verify the age of their users could undermine the purpose of the service,” he cautioned.

As the UK government moves forward with these initiatives, the implications for both users and technology companies are profound. The evolving landscape of online safety regulation is poised to reshape how digital platforms operate and interact with their users, particularly children, in an increasingly complex online environment.
