You don’t need people to trust AI; you need them to trust its implementation, an expert told a technology conference.
At the Govt Cybersecurity & AI 2026 conference in Canberra, Luke Halliday, a former chief technology officer for the Victorian Government, argued that public trust hinges on how artificial intelligence (AI) is integrated into an environment. “It’s how AI is implemented into the environment that matters most,” Halliday said. A lack of understanding about where AI is used and how it affects decision-making, he noted, can undermine public trust in the technology.
During his address, Halliday cited statistics indicating that 71% of Australian employees use generative AI in their daily work, yet only 36% are willing to trust the tool. He added that 78% of employees are concerned about potential negative outcomes from AI use, while 83% would be more inclined to trust the technology if clear safeguards were in place. “So have policies in place, make sure there are assurances in place,” Halliday urged.
Halliday further elaborated on the implications of introducing new technologies into existing systems. He posed critical questions regarding the impact on different teams and systems when AI is integrated. “What are the boundaries? Who is it going to serve – and when we put this thing in the ecosystem, what are the possible consequences? And control – who is in control of this? Let’s think about that before we jump,” he urged delegates. According to Halliday, understanding the context and potential ramifications of AI deployment is crucial for fostering trust and ensuring effective service delivery.
Reiterating his main point, Halliday emphasized, “Implementation matters most of all,” noting that the ultimate goal is efficient service delivery. His remarks align with a growing consensus among experts that transparency, accountability, and well-defined frameworks are essential for the responsible development and deployment of AI.
As organizations navigate the complexities of integrating AI into their workflows, robust policies and clear communication of AI’s role become paramount. Halliday’s address serves as a timely reminder that building public confidence in AI depends less on the technology itself than on how it is managed and governed.