Despite significant financial investments in artificial intelligence (AI), organizations are struggling to translate these expenditures into tangible profits due to inadequate governance and a lack of coordination among leadership. Recent findings from Grant Thornton reveal that governance or compliance barriers are the primary reason for underperforming AI projects, cited by 46% of business leaders. Insufficient training and data readiness follow at 31% and 23%, respectively. Alarmingly, 78% of leaders expressed doubt about their ability to pass an independent AI governance audit within 90 days.
Even with this awareness, 75% of organizations have approved major AI investments. However, nearly half—48%—have not established clear governance expectations for AI, and 46% have failed to integrate AI risk management into their ongoing oversight. This lack of preparedness extends to autonomous AI systems, where only 20% of organizations have tested a response plan for potential failures, despite almost three-quarters actively piloting or implementing these technologies.
A fragmented vision for AI governance appears to contribute to these challenges. The study indicates substantial disparities within the C-suite regarding readiness for AI adoption. While 39% of CIOs and CTOs believe their workforce is fully prepared, only 7% of COOs share this sentiment. Moreover, 54% of COOs expressed concerns over regulatory uncertainties associated with agentic AI, contrasting sharply with just 20% of CTOs. “When two leaders who share accountability for AI deployment disagree by this margin, the organization cannot produce a coherent account of its risk posture,” the report states.
Other research echoes these governance concerns. A recent report by Zuora found that 57% of finance and accounting decision-makers lack confidence in their current AI tools to operate effectively within existing controls. This uncertainty contributes to a wider disillusionment with AI technologies; 87% of surveyed respondents noted gaps between the promise of AI and its actual performance, with only 28% reporting measurable financial gains from their investments.
Furthermore, a study from MindBridge indicates that inadequate AI oversight may be financially detrimental. Across sectors such as retail, manufacturing, and energy, 90% of organizations reported suffering direct financial losses from undetected errors, with 62% categorizing the impact as moderate to severe. Consequently, 40% of businesses expressed serious concerns about potential risks associated with AI implementation.
Leadership may be largely unaware of the operational realities of AI within their organizations. A Grant Thornton poll identified significant perceptual divides between organizational layers. Frontline employees and middle managers, who are responsible for executing leadership's AI ambitions, identified themselves as needing the most support (67% combined), while the two most senior leadership tiers accounted for a mere 8% and 9% of stated support needs, respectively.
The report highlights that those closest to daily AI operations are receiving the least assistance. “Middle managers are diminishing in numbers while the workload for those remaining has accelerated rapidly,” it states. This mismatch hampers effective AI implementation. Concurrently, findings from Walkme’s State of Digital Adoption report indicate that even when AI tools are available, 28% of workers abandon them mid-task, while 37% cease using them altogether, often due to concerns about efficiency and workflow.
Moreover, the concept of “shadow AI” is prevalent, with 45% of workers utilizing unapproved AI tools, and 36% using these tools for sensitive company data. Their choices are informed by dissatisfaction with existing solutions, as 26% stated that improved guardrails could make approved tools more effective. “They’re not asking to go rogue. They’re asking for approved tools that actually work,” the report emphasizes.
Trust in AI tools is low, with only 12% of workers expressing full confidence in these systems to understand their specific contexts. Many cited reasons for abandoning AI use, such as the tools failing to meet their expectations or providing conflicting results. Consequently, 55% trust AI for only simple, non-critical tasks, while just 9% trust it for high-impact work. In stark contrast, 61% of executives believe AI is suitable for complex tasks, marking a 52-percentage-point gap in perception.
The discrepancies extend to overall satisfaction and perceived efficacy. Only 21% of frontline workers find AI tools adequate for their tasks, compared to 88% of executives. Furthermore, just 29% of frontline workers feel that AI enhances their productivity, while 88% of executives believe it does. The gap in confidence and satisfaction between these two groups illustrates a systemic disconnect within organizations.
However, there is a silver lining. Grant Thornton noted that 75% of organizations claiming to have fully implemented AI reported high confidence in their governance processes. These organizations are more likely to experience revenue growth, accelerated innovation, increased efficiency, and improved quality outputs, demonstrating that a cohesive strategy can yield positive outcomes. “Fully integrated organizations are outperforming across the board,” the report concludes, highlighting the importance of robust AI governance as a performance system.