British officials have begun using artificial intelligence to speed the evaluation of public consultation responses as part of a major overhaul of the country’s water sector. Faced with more than 50,000 submissions, the UK government employed an in-house AI tool named Consult, part of its “Humphrey” suite. The tool reportedly sorted the responses into thematic categories in roughly two hours at a cost of £240, followed by 22 hours of expert checks. Scaled across government departments, the method could save up to 75,000 days of manual analysis annually.
A government spokesperson emphasized that “AI has the potential to transform how government works — saving time on routine administrative tasks and freeing up civil servants to focus on what matters most: delivering better public services for the British people.” The spokesperson also noted that guidelines and audits are in place to ensure the responsible use of AI across governmental functions.
Experts, however, caution that while AI can streamline administrative processes, there are risks. Chris Schmitz, a researcher at the Hertie School in Berlin, warns that poorly managed automation could undermine public participation. “Government services are easily flooded with requests,” he explains, stressing the need for safeguards so that genuine public engagement is not diluted by large volumes of input that may not reflect true citizen sentiment.
Globally, the use of AI to facilitate legislative processes is gaining traction. The Italian Senate has already deployed AI to manage the complexities of numerous amendments, identifying overlaps and potential attempts to stall discussions. The European Commission recently released a tender for multilingual chatbots to help users navigate their legal obligations under the EU’s AI Act and Digital Services Act.
In Brazil, the Chamber of Deputies is enhancing its Ulysses program to analyze legislative materials and is integrating external AI platforms such as Claude, Gemini, and GPT, while maintaining strong security and transparency requirements. Likewise, New Zealand’s Parliamentary Counsel Office is exploring AI for drafting initial explanatory notes on proposed legislation, with particular attention to data sovereignty concerns. Meanwhile, Estonia’s Prime Minister Kristen Michal has advocated the use of AI in legislative reviews to catch errors, citing past mistakes that significantly affected tax revenues.
“This could be a smart and practical solution,” Michal stated, underscoring the need for critical thinking alongside AI use. He views AI as a powerful tool that can empower lawmakers, rather than replace them, provided that it is implemented with caution.
Despite the promise of AI, the risk of undermining legislative legitimacy looms large. Schmitz warns that foreign actors could exploit public consultation processes by inundating government offices with fabricated or insincere submissions, effectively mounting a legislative denial-of-service attack. This could further erode trust in government: in 11 of the 28 countries assessed in Edelman’s annual trust barometer, government is more distrusted than trusted, and a recent UK survey found that only 29% of citizens trust their government to use AI accurately and fairly.
While tools like Consult can assist in sifting through public input, their integration into decision-making processes raises transparency concerns. Ruth Fox, director of the Hansard Society, points out that human oversight remains essential to validate AI outputs, warning that reliance on automated tools could lead to critical inaccuracies going unchecked. “You still need human eyes and a human brain to check that the themes and sentiments it produces are actually accurate,” Fox asserts.
Furthermore, Joanna Bryson, an AI ethicist at the Hertie School, cautions that AI models are fragile: changes in model performance, outages, and vendor leverage can disrupt decision-making if AI is embedded too deeply in the legislative framework. “The goal should be systems that can be audited and owned — someone that can be held accountable for who did what, when,” she suggests.
In the United States, the federal government is also stepping into the AI arena, with plans to use Google’s Gemini to facilitate the deregulation process. The approach has sparked concerns about the potential legal ramifications of AI-driven regulation: Philip Wallach, a senior fellow at the American Enterprise Institute, warns that relying heavily on AI could lead to significant procedural oversights, exposing the government to legal challenges.
As countries navigate the complexities of integrating AI into their legislative frameworks, the challenge lies in balancing efficiency with accountability. Experts caution against viewing AI as merely a technological fix, suggesting that a systemic approach is essential to maintain public trust. Schmitz concludes, “There’s a huge opportunity to win back a lot of trust if designed for. But it could also be really, really bad for trust if it’s not.”