As artificial intelligence (AI) increasingly permeates the financial advice sector, industry experts are urging Australian Financial Services Licensees (AFSLs) to proactively regulate their AI usage, anticipating an “AI governance reckoning” from regulators in the near future. Research from Finura Group reveals that 86 per cent of advice businesses have adopted AI technologies, primarily driven by the pressing need to reduce operating costs amid rising regulatory expenses.
During an annual review webinar, Finura Group’s joint managing director, Peter Worn, emphasized the necessity for AI users to establish robust compliance and governance frameworks. “Like all technologies, it moves faster than what our regulations and professional standards can keep up with. Technology is just more nimble,” he said, noting the government’s ongoing struggle to address issues surrounding cryptocurrencies as a parallel to the challenges posed by AI.
Worn cautioned that the industry cannot afford to wait for governmental regulation on AI usage, stating, “The tech just moves too fast.” He advocates for self-regulation among businesses, urging each AFSL to make informed decisions about how they use AI based on legal and ethical considerations.
This call for self-regulation includes the need for businesses to determine the parameters within which they will employ AI while considering client sentiments regarding its use. Worn posed two critical questions: “What am I comfortable with as a business and how I use AI, and what risk am I happy to take on behalf of my clients?”
Worn also warned that AFSLs should prepare for impending scrutiny from the Australian Securities and Investments Commission (ASIC), suggesting the regulator is poised to act swiftly against misuse of AI in the advice sector. “Our prediction this year is that there will be some AI enforcement activity. We believe there will be some AI missteps that will be public for our advice industry this year, and the regulator is going to come down really hard,” he said.
Lawyer Tali Borowick of Holley Nethercote echoed Worn’s concerns, asserting that it is “only a matter of time” before ASIC takes action against licensees for data breaches or governance failures tied to AI usage. “Regulators will demand greater transparency, fairness, and risk management,” she stated, emphasizing the growing accountability for algorithmic decision-making.
As such, businesses that act promptly to align with these evolving regulatory requirements may not only mitigate risks but also secure a competitive advantage in an increasingly regulated landscape. “The message is clear: 2026 will not just be another year of innovation; it will be the year of accountability,” Borowick added.
Worn underscored the need for AI governance by pointing to the risks of large language models, which can mislead users by generating false information, a phenomenon known as “hallucination.” The problem is compounded when users overestimate their own expertise, producing what Worn calls “peak of Mount Stupid” behavior: senior figures in organizations making unwarranted assumptions about AI’s capabilities while disregarding its limitations. “We saw a lot of peak of Mount Stupid behavior in the last 12 months,” he remarked.
The belief that AI can significantly reduce staffing costs may also be misleading, especially in industries where compliance is paramount. Worn noted that businesses will require substantial human resources to validate AI outputs, countering the assumption that AI could streamline operations without adequate oversight. “We’re probably just going to have to use those people to validate a lot of AI outputs in the future with humans in the loop,” he advised.
In addition to compliance challenges, businesses must consider cybersecurity risks and the credibility of the data sources their AI systems draw on. Worn pointed out that many AI systems rely on data derived from client interactions or documents that may not originate from trustworthy systems. “A lot of the data inputs are not necessarily taken from what we would call trusted or reliable systems of record,” he explained, underscoring potential integration issues.
The ease of establishing software businesses and developing simple AI applications is likely to drive further innovation in the sector. Worn cautioned, however, that while many new companies may emerge, not all are built for long-term success. “Make no mistake, a lot of these businesses… are here for a good time, not a long time,” he said, suggesting a trend where firms seek to capture market share and quickly exit.