Finance leaders across Australia and New Zealand are increasingly grappling with the implications of artificial intelligence (AI) tools in their workflows, often without formal policies or oversight. A recent Annexa webinar revealed that 70% of finance teams are exporting data from their systems to use AI tools, while only 8% have these technologies integrated directly into their existing workflows. This practice, termed “shadow AI,” poses significant risks, as finance professionals lean on consumer AI tools like ChatGPT and Claude without understanding the consequences for data governance.
The core issue lies in the gap between consumer AI accounts and commercial offerings. In August 2025, for instance, Anthropic updated its consumer terms of service so that conversations from users on free and basic plans are used for model training by default, with data retained for up to five years unless those users actively opt out. ChatGPT similarly retains conversation history to improve its models unless users disable the setting. For finance teams, this raises critical concerns, particularly as most lack clarity on how their data is handled once it passes through these unmanaged channels.
When finance teams utilize tools like Claude or ChatGPT with personal accounts, they risk unknowingly contributing sensitive business data to model training. This lack of awareness underscores the need for a robust governance framework that explicitly distinguishes between consumer and commercial AI products. As finance professionals seek to enhance productivity through AI, they often overlook the foundational principles of data security and compliance that their organizations have meticulously built over the years.
The risks of shadow AI are best understood in contrast with governed alternatives. Connecting AI tools directly to an enterprise resource planning (ERP) system can significantly mitigate them: cloud ERP platforms such as NetSuite have begun allowing secure connections to live data that maintain the same permission controls governing every other user. If a finance analyst cannot access specific records, the AI tool cannot access them either, and all interactions are logged, providing the accountability that many organizations currently lack in their approach to AI.
In contrast, analysts exporting data to personal AI accounts operate outside established governance frameworks, sharing sensitive information without any record of such interactions. This raises critical policy questions that many finance teams in ANZ have yet to address. The shift from a governed AI setup to one lacking oversight is not merely technical; it reflects deeper organizational choices regarding the use of AI in business processes.
To successfully integrate AI into finance workflows, leaders must focus on establishing a clear policy that outlines which tools are approved for use with business data and under what conditions. This includes differentiating between commercial API accounts and consumer subscriptions, as well as defining the types of data that can be processed through these tools. Regular reviews of AI platform terms are essential, particularly given the rapid changes seen in the past year.
NetSuite customers, for instance, can connect AI platforms to live ERP data without incurring additional licensing costs. The setup is largely configuration-based and aligns with existing permission controls to ensure compliance. Granting AI access still requires care: the feature is off by default and cannot be assigned to administrative roles, and a dedicated role with limited permissions is recommended to contain risk while still enabling the desired workflows.
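The permission model described above can be sketched in a few lines. This is an illustrative assumption of how role-scoped, logged AI access might work in principle; the names, records, and functions here are hypothetical and are not NetSuite's actual API.

```python
# Hypothetical sketch: an AI integration queries ERP data through the
# same role-based checks as a human user, and every request is logged.
# All identifiers and data below are illustrative, not a real API.

RECORDS = {
    "invoice-001": {"data": "AP invoice", "required_permission": "view_payables"},
    "payroll-2025": {"data": "Payroll run", "required_permission": "view_payroll"},
}

class Role:
    def __init__(self, name, permissions):
        self.name = name
        self.permissions = set(permissions)

audit_log = []

def ai_fetch(record_id, role):
    """Return a record only if the role grants access; log every attempt."""
    record = RECORDS.get(record_id)
    allowed = record is not None and record["required_permission"] in role.permissions
    audit_log.append({"role": role.name, "record": record_id, "allowed": allowed})
    return record["data"] if allowed else None

# A dedicated, limited AI role: payables only, no payroll, no admin rights.
ai_role = Role("ai_readonly", {"view_payables"})

print(ai_fetch("invoice-001", ai_role))   # AP invoice
print(ai_fetch("payroll-2025", ai_role))  # None (blocked, but still logged)
print(len(audit_log))                     # 2
```

The design point is that the denial itself is recorded: analysts exporting data to a personal chatbot leave no such trail, which is precisely the accountability gap the article describes.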
The pressing question for finance leaders is not whether their teams are using AI—evidence suggests they are—but whether such use occurs within a governed framework. Only 16% of finance teams are confident their data is ready for AI tools, yet 35% are comfortable with AI taking autonomous actions. As finance teams navigate this evolving landscape, it is vital that the frameworks established for data governance remain intact and robust.
As organizations prepare for the AI capabilities anticipated in 2026, including embedded features within systems like NetSuite and the emergence of agentic AI tools, the importance of a well-defined governance structure cannot be overstated. The value derived from these technologies is substantial, yet it hinges on the access models that finance teams employ to harness them. Without a clear strategy, the governance frameworks that have supported finance functions for years may be inadvertently bypassed.