A recent survey of 193 leaders in compliance, ethics, risk, and audit found that 90 percent of organizations use generative AI tools such as ChatGPT and Claude. Among respondents, 52 percent reported using agentic AI to perform tasks, 51 percent use large language models, and 42 percent leverage predictive analytics or machine learning tools. The findings point to a significant shift toward integrating AI into compliance operations.
Vincent Walden, CEO of konaAI, attributed the lack of robust governance in AI deployment to the relatively nascent nature of generative AI, which has only been mainstream for around two and a half years. “Compliance departments are drawn to AI’s capabilities due to its ability to automate many of the manual processes required in compliance, such as due diligence,” he noted.
Walden characterized the current landscape as an “exciting time to be disrupted with AI,” emphasizing that businesses are increasingly using AI to streamline labor-intensive compliance tasks. The survey respondents, approximately 30 percent of whom operate in financial services, represented various organization sizes and industries. Slightly over half indicated their companies had fewer than 5,000 employees, while 27 percent employed between 5,000 and 50,000, and 18 percent had over 50,000 employees.
Leaders expressed enthusiasm about the efficiencies AI tools can bring to key compliance areas. Nearly 40 percent reported that their companies deploy AI for risk assessment and monitoring, and about 61 percent plan to expand AI usage in this area in the near future. The benefits appear substantial: 84 percent said AI has improved the efficiency of their departments, 54 percent noted enhancements in analytics and monitoring, 49 percent reported improved decision-making, and 41 percent experienced cost savings.
Agentic AI, which not only automates tasks but also makes decisions, is seen as a potential game changer within compliance. Walden remarked that processes built on repetitive tasks involving multiple personnel are particularly suitable for agentic automation. His organization has had success personifying AI agents; he cited one named Eva, which specializes in Department of Justice investigations. If a whistleblower reports a bribery scheme involving certain employees, Eva can swiftly surface prior warnings about inappropriate behavior and draft a work plan based on those findings.
External pressures, including economic conditions and shifting political landscapes, are also influencing the adoption of AI, according to Walden. “As political agendas change, new priorities emerge, and compliance shifts accordingly,” he explained. That adaptability shows in the survey results: nearly 80 percent of organizations plan to use AI in tariff management, a notable increase from the roughly 20 percent doing so today. About one-third of companies already use AI for regulatory reporting, and nearly 67 percent anticipate future use in regulatory contexts.
However, the survey revealed that many compliance teams have yet to embrace AI fully. Approximately 30 percent do not currently use AI for ethics and compliance tasks at all. Of those that do, 27 percent have been using it for less than six months, while only 5 percent have used it for more than two years. Walden urged organizations to adopt AI, cautioning that it is only a matter of time before executives ask why their departments still rely on human labor for mundane tasks.
Despite the widespread interest in AI, survey respondents highlighted several challenges associated with its implementation. Nearly 66 percent reported issues with data quality or access, 47 percent faced training challenges, and 46 percent identified concerns regarding data privacy and security. A lack of AI expertise was cited as a significant problem by 54 percent of leaders, complicating efforts to address these risks.
Many compliance teams are allocating substantial portions of their technology budgets to AI-related projects: about 41 percent plan to spend up to 25 percent of their budgets on AI, while 19 percent expect to allocate between 25 and 50 percent. Some organizations are nonetheless reconsidering where AI fits. Training is a case in point: nearly 58 percent currently use AI for training, but only 42 percent intend to continue doing so.
Walden cautioned that AI may not be suitable for all tasks, especially those requiring significant levels of trust, such as training and communication. “Trust is born out of authenticity, inspiration, and being relatable,” he noted. He emphasized that a training session led by a CEO or a compliance professional is far more engaging than an AI-generated presentation.
As companies navigate the complexities of AI integration, about 63 percent currently use AI for communications, but only 37 percent plan to maintain this approach. Notably, one-third of companies have developed custom AI tools, while 43 percent use third-party compliance platforms equipped with AI features.