
UK Government Uses AI Tool Consult to Analyze 50,000 Submissions in 2 Hours

UK government employs AI tool Consult to analyze 50,000 public submissions in just 2 hours for £240, aiming to save 75,000 days of manual analysis annually

British officials have begun leveraging artificial intelligence to expedite the evaluation of public consultation responses as part of a major overhaul of the country’s water sector. Faced with more than 50,000 submissions, the UK government employed an in-house AI tool named Consult, part of its “Humphrey” suite. The tool reportedly sorted the responses into thematic categories in roughly two hours for just £240, followed by 22 hours of expert checks. Scaling this method across government departments could save up to 75,000 days of manual analysis annually.

A government spokesperson emphasized that “AI has the potential to transform how government works — saving time on routine administrative tasks and freeing up civil servants to focus on what matters most: delivering better public services for the British people.” The spokesperson also noted that guidelines and audits are in place to ensure the responsible use of AI across governmental functions.

Experts, however, caution that while AI can streamline administrative processes, there are risks. Chris Schmitz, a researcher at the Hertie School in Berlin, warns that public participation could be undermined if such tools are not properly managed. “Government services are easily flooded with requests,” he explains, stressing the need for safeguards so that genuine public engagement is not diluted by large volumes of input that may not reflect true citizen sentiment.

Globally, the trend of using AI to facilitate legislative processes is gaining traction. The Italian Senate has already implemented AI to manage the complexities of numerous amendments, identifying overlaps and potential attempts to stall discussions. Recently, the European Commission also released a tender for multilingual chatbots aimed at helping users navigate legal obligations under the EU AI and Digital Services Acts.

In Brazil, the Chamber of Deputies is enhancing its Ulysses program to analyze legislative materials and is integrating external AI platforms like Claude, Gemini, and GPT, while ensuring strong security and transparency. Likewise, New Zealand’s Parliamentary Counsel Office is exploring AI for drafting initial explanatory notes on proposed legislation, with particular attention to data sovereignty concerns. Meanwhile, Estonia’s Prime Minister Kristen Michal has advocated using AI in legislative reviews to prevent errors, noting past mistakes that significantly affected tax revenues.

“This could be a smart and practical solution,” Michal stated, underscoring the need for critical thinking alongside AI use. He views AI as a powerful tool that can empower lawmakers, rather than replace them, provided that it is implemented with caution.

Despite the promise of AI, the risk of undermining legislative legitimacy looms large. Schmitz warns that foreign entities could exploit public consultation processes by inundating government offices with fabricated or insincere submissions, effectively mounting a legislative denial of service. This could further erode trust in governmental processes: in 11 of the 28 countries assessed in Edelman’s annual trust barometer, government is more distrusted than trusted. In the UK, a recent survey showed that only 29% of citizens trust their government to use AI accurately and fairly.

While tools like Consult can assist in sifting through public input, their integration into decision-making processes raises transparency concerns. Ruth Fox, director of the Hansard Society, points out that human oversight remains essential to validate AI outputs, warning that reliance on automated tools could lead to critical inaccuracies going unchecked. “You still need human eyes and a human brain to check that the themes and sentiments it produces are actually accurate,” Fox asserts.

Furthermore, Joanna Bryson, an AI ethicist at the Hertie School, warns of the fragility of AI models: changes in model performance, outages, and vendor leverage can disrupt decision-making if AI is embedded too deeply in the legislative framework. “The goal should be systems that can be audited and owned — someone that can be held accountable who did what, when,” she suggests.

In the United States, the federal government is also stepping into the AI arena, planning to utilize Google’s Gemini to facilitate the deregulation process. However, this approach has sparked concerns about the potential legal ramifications of AI-driven regulation. Philip Wallach, a senior fellow at the American Enterprise Institute, warns that relying heavily on AI could lead to significant procedural oversights, exposing the government to legal challenges.

As countries navigate the complexities of integrating AI into their legislative frameworks, the challenge lies in balancing efficiency with accountability. Experts caution against viewing AI as merely a technological fix, suggesting that a systemic approach is essential to maintain public trust. Schmitz concludes, “There’s a huge opportunity to win back a lot of trust if designed for. But it could also be really, really bad for trust if it’s not.”


