
AI Government

UK Government Uses AI Tool Consult to Analyze 50,000 Submissions in 2 Hours

UK government employs AI tool Consult to analyze 50,000 public submissions in just 2 hours for £240, aiming to save 75,000 days of manual analysis annually

British officials have begun leveraging artificial intelligence to expedite the evaluation of public consultation responses as part of a major overhaul of the country’s water sector. Faced with more than 50,000 submissions, the UK government employed an in-house AI tool named Consult, part of the “Humphrey” suite. The tool reportedly sorted the responses into thematic categories in approximately two hours for just £240, followed by 22 hours of expert checks. Scaling this method across government departments could save up to 75,000 days of manual analysis annually.

A government spokesperson emphasized that “AI has the potential to transform how government works — saving time on routine administrative tasks and freeing up civil servants to focus on what matters most: delivering better public services for the British people.” The spokesperson also noted that guidelines and audits are in place to ensure the responsible use of AI across governmental functions.

Experts, however, caution that while AI can streamline administrative processes, there are risks involved. Chris Schmitz, a researcher at the Hertie School in Berlin, highlights the potential for public participation to be undermined if not properly managed. “Government services are easily flooded with requests,” he explains, emphasizing the importance of safeguards to ensure genuine public engagement is not diluted by overwhelming amounts of input that may not represent true citizen sentiment.

Globally, the trend of using AI to facilitate legislative processes is gaining traction. The Italian Senate has already implemented AI to manage the complexities of numerous amendments, identifying overlaps and potential attempts to stall discussions. Recently, the European Commission also released a tender for multilingual chatbots aimed at helping users navigate legal obligations under the EU AI and Digital Services Acts.

In Brazil, the Chamber of Deputies is enhancing its Ulysses program to analyze legislative materials and is integrating external AI platforms like Claude, Gemini, and GPT, while ensuring strong security and transparency. Likewise, New Zealand’s Parliamentary Counsel Office is exploring AI for drafting initial explanatory notes on proposed legislation, with particular attention to data sovereignty concerns. Meanwhile, Estonia’s Prime Minister Kristen Michal has advocated for AI’s usage in legislative reviews to prevent errors, noting past mistakes that significantly impacted tax revenues.

“This could be a smart and practical solution,” Michal stated, underscoring the need for critical thinking alongside AI use. He views AI as a powerful tool that can empower lawmakers, rather than replace them, provided that it is implemented with caution.

Despite the promise of AI, the risk of undermining legislative legitimacy looms large. Schmitz warns that foreign entities could exploit public consultation processes by inundating government offices with fabricated or insincere submissions, effectively executing a legislative denial-of-service attack. This could further erode trust in governmental processes; in 11 of the 28 countries assessed in Edelman’s annual trust barometer, governments are more distrusted than trusted. In the UK, a recent survey showed that only 29% of citizens trust their government to use AI accurately and fairly.

While tools like Consult can assist in sifting through public input, their integration into decision-making processes raises transparency concerns. Ruth Fox, director of the Hansard Society, points out that human oversight remains essential to validate AI outputs, warning that reliance on automated tools could lead to critical inaccuracies going unchecked. “You still need human eyes and a human brain to check that the themes and sentiments it produces are actually accurate,” Fox asserts.

Furthermore, Joanna Bryson, an AI ethicist at the Hertie School, warns of the fragility of AI models. Changes in model performance, outages, and vendor leverage can disrupt decision-making processes if AI is embedded too deeply in the legislative framework. “The goal should be systems that can be audited and owned — someone that can be held accountable who did what, when,” she suggests.

In the United States, the federal government is also stepping into the AI arena, planning to utilize Google’s Gemini to facilitate the deregulation process. However, this approach has sparked concerns about the potential legal ramifications of AI-driven regulation. Philip Wallach, a senior fellow at the American Enterprise Institute, warns that relying heavily on AI could lead to significant procedural oversights, exposing the government to legal challenges.

As countries navigate the complexities of integrating AI into their legislative frameworks, the challenge lies in balancing efficiency with accountability. Experts caution against viewing AI as merely a technological fix, suggesting that a systemic approach is essential to maintain public trust. Schmitz concludes, “There’s a huge opportunity to win back a lot of trust if designed for. But it could also be really, really bad for trust if it’s not.”

Written By AiPressa Staff

The AiPressa Staff team brings you comprehensive coverage of the artificial intelligence industry, including breaking news, research developments, business trends, and policy updates. Our mission is to keep you informed about the rapidly evolving world of AI technology.


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.