A senior partner at KPMG Australia has been fined A$10,000 (approximately US$7,000) for using generative AI tools to cheat on an internal training assessment focused on the responsible and ethical application of technology. The incident occurred in July 2025 when the partner, a registered company auditor, uploaded a training manual to an external AI platform to generate answers for a mandatory assessment, as reported by the Financial Times.
This case is part of a broader pattern at KPMG Australia, where 28 instances of AI-related cheating have been identified since July, according to The Aussie Corporate. While most incidents involved staff at managerial level or below, the involvement of a partner has raised significant concern within the firm.
As a registered company auditor, the partner was held to a higher standard, given the role auditors play in safeguarding clients’ financial data. According to the Australian Financial Review, partners are required to download a reference manual as part of their training on the ethical use of AI. The partner violated company policy by submitting this reference material to an external AI tool to obtain answers.
The breach was detected in August 2025 through KPMG’s internal AI monitoring systems. In response, KPMG has strengthened its processes and oversight for identifying AI cheating, following an earlier period of widespread cheating on internal tests between 2016 and 2020. After an internal investigation, the firm imposed a penalty of more than A$10,000, deducted from the partner’s future income, and required the individual to retake the assessment. The partner has since self-reported the incident to Chartered Accountants Australia and New Zealand, which has opened its own investigation.
KPMG Australia’s chief executive, Andrew Yates, acknowledged the difficulties the firm faces due to the rapid adoption of AI, especially in internal training and testing. “It’s a very hard thing to get on top of, given how quickly society has embraced it,” Yates told the Australian Financial Review. He noted that as soon as KPMG implemented monitoring of AI use in internal tests in 2024, it began to uncover policy violations. The firm has since launched a comprehensive educational campaign and continues to deploy new technologies to restrict AI access during assessments.
KPMG is taking steps to establish a new standard of transparency by pledging to report AI-related cheating in its annual results. The firm aims to ensure that staff self-report any misconduct to relevant professional bodies, indicating a growing recognition of the challenges posed by AI in the accounting sector.
The implications of this case extend beyond KPMG, shedding light on the broader issues of ethical AI usage and accountability in professional environments. As firms increasingly integrate AI tools into their operations, the need for robust training, policy enforcement, and ethical standards becomes ever more critical.