Liquid partner Sam Daley joined Australian Taxation Office (ATO) second commissioner Jeremy Hirschhorn in a recent webinar to discuss the significant opportunities and risks associated with the adoption of artificial intelligence (AI) within regulatory frameworks. As Australia navigates the evolving AI landscape, both Daley and Hirschhorn emphasized the importance of responsible AI implementation that prioritizes citizen dignity and enhances human decision-making.
Hirschhorn highlighted the ATO’s position as an early AI adopter, citing improvements in operational efficiency and transformational changes in how the agency interacts with the public. By flagging real-time compliance concerns, the ATO aims to foster a more engaged relationship with taxpayers, allowing them to self-audit before any human decisions are made. This approach underscores the agency’s commitment to maintaining citizen dignity and upholding human rights in the regulatory process.
During the discussion, Hirschhorn illustrated the ATO’s AI capabilities with a hypothetical scenario involving a taxpayer whose work-related expense claims raised a real-time alert. He stated, “What AI enables us to do is to pick you up as an exception in real time. Of course, we don’t know you’re wrong. You might be legitimately different from the average. But what it allows us to do, in real time, is to send something to you that says, ‘You came to our attention. You look a bit different from people who we think are like you. Why don’t you have a second look, maybe get some advice.’” This proactive approach aims to increase effectiveness while preserving the dignity of taxpayers.
AI systems that place humans at the center can significantly enhance decision-making processes that affect people’s lives. Daley noted that responsible AI adoption must extend beyond generic applications to include tailored solutions that meet legislative and community expectations regarding safety, transparency, and reliability. “At Liquid, we’re at the frontline of co-designing and developing these exact solutions,” he stated, emphasizing the importance of embedding safety at the architectural level of AI systems.
Generative AI solutions, such as large language models, hold considerable potential for transforming interactions with digital services. However, this capability also introduces risks, especially for government agencies that rely heavily on accurate information. The tendency of such models to generate misleading or overconfident responses—often referred to as “hallucination”—poses unacceptable risks in regulatory environments. Adapting AI solutions for these contexts requires careful consideration and co-design with regulatory staff to ensure the tools are effective and reliable.
Effective data governance remains a crucial aspect of AI safety, as it ensures that organizations responsibly manage customer data. Hirschhorn referred to the ATO’s data as “data entrusted to us by citizens,” emphasizing the need for robust governance policies that strip out sensitive information and adhere to “need to know” principles. He cautioned against “data hubris,” warning that overconfidence in AI could lead to misinterpretations of citizen behavior. “If one of your models is showing that a lot of people are suddenly dishonest, it may well be the problem is with the computer, not with the Australian people,” he noted.
Responsible AI adoption begins with asking the right questions about the specific work that needs to be accomplished and whether the correct AI tools are applied effectively. As organizations consider implementing AI systems, a focus on defining the purpose and scope of technology is essential. This disciplined approach can reveal opportunities for AI to improve regulatory processes while ensuring that robust guardrails are in place to protect citizen data and privacy.
Hirschhorn highlighted the ATO’s virtual assistant, Alex, as an example of the challenges involved in moving from a basic model to a more sophisticated AI that provides reliable advice. He noted, “The challenge for us is how can you make it reliable enough? If we sent it out as an ATO-branded administrative assistant, how are we confident enough that it will give good advice to people?” This calls for a cautious approach to AI implementation in regulatory contexts, ensuring that tools like Alex have clearly defined roles and limitations.
As regulatory bodies prepare for the future of AI, the lessons learned from high-stakes environments will be critical for the Australian Government’s APS AI Plan, which is set for release in November 2025. The goal is to create AI systems that empower regulated communities and the public, streamlining bureaucratic processes and ultimately enhancing human experiences. Done responsibly, AI has the potential to transform how citizens engage with regulatory systems, making compliance easier and more intuitive.