Informatica has released a global study revealing a significant gap between the rapid adoption of generative AI among Australian organizations and the essential data skills, governance, and infrastructure needed for responsible deployment at scale. The research surveyed 600 data leaders across the US, UK, Europe, and Asia-Pacific, including Australia, finding that 62% of Australian organizations have already integrated generative AI into their business practices.
The study highlighted what Informatica calls foundational weaknesses: respondents reported widespread issues with data reliability, workforce training, governance approaches, and the modernization of data security and infrastructure.
Data reliability remains a crucial obstacle for Australian organizations transitioning generative AI initiatives from pilot phases to production environments. An overwhelming 92% of respondents identified data reliability as a barrier to scaling generative AI initiatives. The concern extends to AI agents as well, where 47% cited data quality and reliability as a significant challenge in deploying these systems into production.
The findings also underscored a disconnect between perceived trust and actual practice. While 75% of Australian data leaders indicated that most employees trust the data used for AI, many acknowledged that reliability concerns continue to hinder production rollouts.
Skills gaps are another area of concern, with 77% of Australian respondents emphasizing the need for better data literacy training. AI literacy emerged as a critical priority as well, with 75% stating that employees require more training to use the technology responsibly in daily operations.
Governance approaches among Australian organizations vary significantly. The study found that 51% extend existing data-governance tools to include AI, while 30% invest in separate AI governance tools. Nineteen percent reported starting their governance efforts from scratch. This mix of strategies indicates uneven readiness as adoption accelerates, suggesting that organizations have differing assessments of risk, accountability, and oversight for AI systems.
Interestingly, infrastructure and security modernization ranked low among immediate priorities, with only 8% of respondents identifying it as a top concern. This finding is particularly striking given the rapid pace of AI adoption, implying that foundational systems may struggle to keep up with the increasing use of AI tools in production settings.
Despite these challenges, nearly all Australian data leaders expressed intentions to bolster spending on data management, with 98% planning to increase investment in 2026. Key drivers for this expenditure include enhancing data literacy and AI fluency, strengthening privacy and security measures, improving data and AI governance, and adapting to evolving regulatory requirements.
Amanda Fitzsimmons, Senior Director of Customer Data at RS Group, commented on the risks associated with accelerating AI adoption without robust data governance and literacy. “This report highlights the significant risks of accelerating AI adoption without strong data governance and literacy. At RS Group, we address this challenge by embedding governance and accountability into how we evaluate and scale AI initiatives,” she said.
Fitzsimmons elaborated on her organization’s internal assessment process, which evaluates technological, security, legal, and strategic implications to ensure responsible innovation. “This approach helps ensure innovation moves forward responsibly, with risks understood and value clearly defined from the outset,” she added. She also emphasized the importance of investment and external collaboration to foster trusted, responsible AI.
A trust gap is another theme emerging from the study, with Informatica’s local leadership pointing to a discrepancy between confidence in AI and the quality of the underlying data environment. Alex Newman, Country Manager for Australia and New Zealand at Informatica, noted, “Australia has set clear ambitions for how it wants to use AI to drive growth, productivity, and competitiveness, but our latest study points to a clear trust paradox.” He stressed the necessity of closing this trust gap, citing the government’s National AI Plan, which emphasizes capturing the benefits of AI while ensuring safety for individuals and organizations.
As organizations increasingly rely on AI, trust in its outputs is growing at a pace the underlying data foundations, governance, and skills may not match. Closing this gap will be crucial if AI is to deliver long-term value without introducing new risks.