AI Government

UK Government Uses AI Tool Consult to Analyze 50,000 Submissions in 2 Hours

UK government employs AI tool Consult to analyze 50,000 public submissions in just 2 hours for £240, aiming to save 75,000 days of manual analysis annually

British officials have begun leveraging artificial intelligence to expedite the evaluation of public consultation responses as part of a major overhaul of the country’s water sector. Faced with more than 50,000 submissions, the UK government employed an in-house AI tool named Consult, part of its “Humphrey” suite. The tool reportedly sorted the responses into thematic categories in roughly two hours at a cost of £240, followed by 22 hours of expert checks. Scaling the method across government departments could save up to 75,000 days of manual analysis annually.
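The reported figures invite a rough sanity check. The sketch below is an illustrative back-of-envelope estimate only: the per-response manual reading time, annual response volume, number of consultations, and workday length are assumptions, not figures reported by the government.

```python
# Back-of-envelope estimate of manual-analysis time saved by AI triage.
# Reported: 50,000 responses triaged in ~2 hours of compute for £240,
# plus 22 hours of expert checks. All parameters passed in below are
# illustrative assumptions, not official figures.

def staff_days_saved(responses_per_year, minutes_per_response_manual,
                     expert_check_hours_per_consultation,
                     consultations_per_year, workday_hours=7.4):
    """Estimate annual staff-days saved by replacing manual thematic
    sorting with AI triage plus human spot checks."""
    manual_hours = responses_per_year * minutes_per_response_manual / 60
    oversight_hours = expert_check_hours_per_consultation * consultations_per_year
    return (manual_hours - oversight_hours) / workday_hours

# Hypothetical example: ~3 million responses a year at ~11 minutes each,
# with 22 hours of expert checks per consultation across 500 consultations,
# lands in the low-70,000s of staff-days — the same order of magnitude as
# the government's 75,000-day claim.
estimate = staff_days_saved(3_000_000, 11, 22, 500)
print(f"Estimated staff-days saved: {estimate:,.0f}")
```

The point is not the exact inputs, which are unknown, but that the headline figure is plausible only if the savings are aggregated across a very large volume of consultations government-wide.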

A government spokesperson emphasized that “AI has the potential to transform how government works — saving time on routine administrative tasks and freeing up civil servants to focus on what matters most: delivering better public services for the British people.” The spokesperson also noted that guidelines and audits are in place to ensure the responsible use of AI across governmental functions.

Experts, however, caution that while AI can streamline administrative processes, there are risks involved. Chris Schmitz, a researcher at the Hertie School in Berlin, highlights the potential for public participation to be undermined if not properly managed. “Government services are easily flooded with requests,” he explains, emphasizing the importance of safeguards to ensure genuine public engagement is not diluted by overwhelming amounts of input that may not represent true citizen sentiment.

Globally, the trend of using AI to facilitate legislative processes is gaining traction. The Italian Senate has already implemented AI to manage the complexities of numerous amendments, identifying overlaps and potential attempts to stall discussions. Recently, the European Commission also released a tender for multilingual chatbots aimed at helping users navigate legal obligations under the EU AI and Digital Services Acts.

In Brazil, the Chamber of Deputies is enhancing its Ulysses program to analyze legislative materials and is integrating external AI platforms such as Claude, Gemini, and GPT, while maintaining strong security and transparency requirements. Likewise, New Zealand’s Parliamentary Counsel Office is exploring AI for drafting initial explanatory notes on proposed legislation, with particular attention to data sovereignty concerns. Meanwhile, Estonia’s Prime Minister Kristen Michal has advocated using AI in legislative reviews to catch errors, citing past drafting mistakes that significantly affected tax revenues.

“This could be a smart and practical solution,” Michal stated, underscoring the need for critical thinking alongside AI use. He views AI as a powerful tool that can empower lawmakers, rather than replace them, provided that it is implemented with caution.

Despite the promise of AI, the risk of undermining legislative legitimacy looms large. Schmitz warns that foreign entities could exploit public consultation processes by inundating government offices with fabricated or insincere submissions, effectively mounting a legislative denial-of-service attack. This could further erode trust in governmental processes: Edelman’s annual trust barometer finds that in 11 of the 28 countries assessed, government is more distrusted than trusted. In the UK, a recent survey showed that only 29% of citizens trust their government to use AI accurately and fairly.

While tools like Consult can assist in sifting through public input, their integration into decision-making processes raises transparency concerns. Ruth Fox, director of the Hansard Society, points out that human oversight remains essential to validate AI outputs, warning that reliance on automated tools could lead to critical inaccuracies going unchecked. “You still need human eyes and a human brain to check that the themes and sentiments it produces are actually accurate,” Fox asserts.

Furthermore, Joanna Bryson, an AI ethicist at the Hertie School, cautions against the fragility of AI models. Changes in model performance, outages, and vendor leverage can disrupt decision-making processes if AI is embedded too deeply in the legislative framework. “The goal should be systems that can be audited and owned — someone that can be held accountable who did what, when,” she suggests.

In the United States, the federal government is also stepping into the AI arena, planning to utilize Google’s Gemini to facilitate the deregulation process. However, this approach has sparked concerns about the potential legal ramifications of AI-driven regulation. Philip Wallach, a senior fellow at the American Enterprise Institute, warns that relying heavily on AI could lead to significant procedural oversights, exposing the government to legal challenges.

As countries navigate the complexities of integrating AI into their legislative frameworks, the challenge lies in balancing efficiency with accountability. Experts caution against viewing AI as merely a technological fix, suggesting that a systemic approach is essential to maintain public trust. Schmitz concludes, “There’s a huge opportunity to win back a lot of trust if designed for. But it could also be really, really bad for trust if it’s not.”

Written by the AiPressa Staff

The AiPressa Staff team brings you comprehensive coverage of the artificial intelligence industry, including breaking news, research developments, business trends, and policy updates. Our mission is to keep you informed about the rapidly evolving world of AI technology.


© 2025 AIPressa · Part of Buzzora Media · All rights reserved. This website provides general news and educational content for informational purposes only. While we strive for accuracy, we do not guarantee the completeness or reliability of the information presented. The content should not be considered professional advice of any kind. Readers are encouraged to verify facts and consult appropriate experts when needed. We are not responsible for any loss or inconvenience resulting from the use of information on this site. Some images used on this website are generated with artificial intelligence and are illustrative in nature. They may not accurately represent the products, people, or events described in the articles.