KPMG Australia Partner Fined A$10,000 for Cheating on AI Ethics Test Using Generative AI

KPMG Australia fined a partner A$10,000 for cheating on an AI ethics test by using generative AI, amid 28 reported AI-related cheating incidents since July.

A senior partner at KPMG Australia has been fined A$10,000 (approximately US$7,000) for using generative AI tools to cheat on an internal training assessment focused on the responsible and ethical application of technology. The incident occurred in July 2025 when the partner, a registered company auditor, uploaded a training manual to an external AI platform to generate answers for a mandatory assessment, as reported by the Financial Times.

This case is part of a broader issue at KPMG Australia, where 28 instances of AI-related cheating have been identified since July, according to the Aussie Corporate. While most incidents involved staff at the managerial level or lower, the involvement of a partner has raised significant concerns within the firm.

As registered company auditors, partners at KPMG are held to higher standards because of their crucial role in safeguarding clients’ financial data. According to the Australian Financial Review, partners are required to download a reference manual as part of their training on the ethical use of AI. The partner breached company policy by submitting this reference material to an AI tool to obtain answers.

The breach was detected in August 2025 through KPMG’s internal AI monitoring systems. The firm has strengthened its processes and oversight to detect AI-assisted cheating, following widespread problems with internal tests between 2016 and 2020. After an internal investigation, KPMG docked the partner more than A$10,000 in future income and required the individual to retake the assessment. The partner has since self-reported the incident to Chartered Accountants Australia and New Zealand, which has opened its own investigation.

KPMG Australia’s chief executive, Andrew Yates, acknowledged the difficulties the firm faces from the rapid adoption of AI, especially in internal training and testing environments. “It’s a very hard thing to get on top of, given how quickly society has embraced it,” Yates told the Australian Financial Review. He noted that as soon as KPMG began monitoring AI use in internal tests in 2024, it started uncovering policy violations. The firm has since launched a comprehensive educational campaign and continues to deploy new technologies to restrict AI access during assessments.

KPMG is taking steps to establish a new standard of transparency by pledging to report AI-related cheating in its annual results. The firm aims to ensure that staff self-report any misconduct to relevant professional bodies, indicating a growing recognition of the challenges posed by AI in the accounting sector.

The implications of this case extend beyond KPMG, shedding light on the broader issues of ethical AI usage and accountability in professional environments. As firms increasingly integrate AI tools into their operations, the need for robust training, policy enforcement, and ethical standards becomes ever more critical.

Written By: Staff

The AIPressa Staff team brings you comprehensive coverage of the artificial intelligence industry, including breaking news, research developments, business trends, and policy updates.

© 2025 AIPressa · Part of Buzzora Media · All rights reserved.