A partner at the consultancy KPMG has been fined A$10,000 (£5,200) for using artificial intelligence to cheat during an internal training course on AI. The incident is part of a troubling trend at the firm: more than two dozen KPMG Australia staff have reportedly been caught using AI tools to gain an unfair advantage on internal exams since July. The consultancy used its own AI-detection tools to uncover the cheating, according to a report in the Australian Financial Review.
The rise in AI-fuelled cheating has raised alarms within the accounting sector, especially among the big four accountancy firms. In 2021, KPMG Australia was fined A$615,000 over “widespread” misconduct in which more than 1,100 personnel engaged in “improper answer-sharing” on assessments designed to evaluate their skills and integrity. The current episode highlights how AI technologies are opening new avenues for unethical behavior in professional settings.
In December, the Association of Chartered Certified Accountants (ACCA), the UK’s largest accounting body, announced that it would require accounting students to take exams in person. The decision stemmed from increasing difficulties in preventing AI-assisted cheating in remote assessments. Helen Brand, the CEO of ACCA, described this moment as a “tipping point,” noting that the proliferation of AI tools was outpacing existing safeguards against malpractice.
As firms such as KPMG and PricewaterhouseCoopers push to incorporate AI into their operations to improve profitability and efficiency, they face a dual challenge: encouraging employees to adopt AI while combating its misuse. KPMG has announced plans to assess its partners on their proficiency with AI tools in 2026 performance reviews. Niale Cleobury, KPMG’s global AI workforce lead, emphasized the organization’s collective responsibility to integrate AI into its work processes.
The irony of using AI to cheat in an AI training course has not gone unnoticed, with some commentators on LinkedIn questioning KPMG’s approach. Iwo Szapar, the creator of a platform that ranks organizations’ “AI maturity”, argued that the issue lies more with the training methods than with outright dishonesty. “This is a training problem,” he remarked, urging the consultancy to reconsider how it educates employees in the context of rapidly evolving technology.
KPMG has pledged to introduce measures to identify AI misuse among its employees and to track instances of the behavior. Andrew Yates, the CEO of KPMG Australia, acknowledged the complexities that AI presents for internal training and assessment. He stated, “Like most organizations, we have been grappling with the role and use of AI as it relates to internal training and testing. It’s a very hard thing to get on top of given how quickly society has embraced it.”
Yates added that the everyday use of these tools has led some individuals to breach the company’s policies, and stressed the seriousness with which KPMG treats such violations. As the firm moves to strengthen its monitoring of AI usage, the episode underscores the broader challenge the accounting industry faces in integrating AI technologies while maintaining ethical standards.