AI Education

Contract Cheating Declines 43% as AI Misuse Among Students Surges 219% in 2024

Contract cheating among students has plummeted 43% as generative AI misuse surges 219%, prompting universities to rethink academic integrity policies.

Sneaking into lectures, infiltrating group chats, and impersonating professors are among the desperate tactics employed by contract cheating companies as they seek to regain lost market share amid the rise of generative AI. Universities across Australia report that traditional contract cheating—where students outsource assessments to a third party—has diminished as misuse of AI tools has surged, driving a notable rise in the number of students caught violating academic integrity policies.

When generative AI tools emerged in 2023, many universities initially imposed strict bans on their use. Those restrictions have gradually loosened, and most institutions now take a “two-lane” approach to AI use, permitting it in some contexts while prohibiting it in others. The model, pioneered at the University of Sydney, has since been adopted by several other universities.

Professor Phillip Dawson from Deakin University’s Centre for Research in Assessment and Digital Learning noted that there is still a niche market for bespoke contract cheating, which may involve a more personal relationship with the provider of academic materials. He stated, “There are still people who outsource the entirety of the degree. In courses with a face-to-face component, you need a warm body in the room.”

Kane Murdoch, head of complaints, appeals, and misconduct at Macquarie University, emphasized that institutions must adapt to the evolving academic landscape, warning that without significant changes to assessment methods, cheating will become increasingly prevalent, leading to a decline in genuine learning experiences.

A report from the University of New South Wales (UNSW) revealed a staggering 219 percent increase in “unauthorized use” of generative AI in 2024 compared to the previous year. In contrast, the university recorded no such reports in 2022. At UNSW, verified instances of contract cheating fell from 232 in 2023 to 132 in 2024, indicating a shift from traditional cheating methods to AI misuse.

The financial performance of **Chegg**, a study help platform, mirrors these trends. After reaching a pandemic peak of $113.51 per share, Chegg’s stock has plummeted to just 69 cents. The company laid off 45 percent of its workforce late last year and has initiated legal action against **Google**, alleging that AI-generated summaries in Google’s search results have cut traffic to its website. Professor Dawson remarked, “There’s a market signal in the share price,” noting that Chegg’s decline coincides with institutions shifting from online learning back to face-to-face teaching and with the rise of AI tools.

Murdoch bluntly declared that “Chegg is dead,” highlighting the company’s struggles in the current educational climate. Compounding its challenges, Chegg faces a lawsuit from the **Tertiary Education Quality and Standards Agency (TEQSA)** for allegedly breaching federal anti-cheating laws. Court documents filed in September detail TEQSA’s claims that Chegg and its subsidiary, Chegg India, violated laws against academic cheating services through its “Expert Q&A service.”

TEQSA asserts that it has identified five instances of Australian university assignments in various fields—including programming, water systems, and quantum mechanics—being submitted to Chegg’s platform, with responses from its experts appearing within days. The agency’s filings suggest that Chegg’s management either knew or should have known that these submissions were likely student assignments.

In response to the allegations, Chegg representatives have denied providing any form of academic cheating service. The company argues that TEQSA’s claims are based on a limited selection of examples that do not accurately represent its commitment to academic integrity. A spokesperson stated, “The lawsuit brought by TEQSA is based on an outdated academic integrity policy, which was formulated long before the rise of AI and its profound impact on education and technology today.”

As educational institutions continue to navigate the complexities introduced by AI technologies, the landscape of academic integrity is poised for further transformation. The ongoing discourse surrounding AI’s role in education will likely shape future policies and approaches to assessment, raising critical questions about the balance between innovation and integrity in higher learning.

Written By David Park

At AIPressa, my work focuses on discovering how artificial intelligence is transforming the way we learn and teach. I've covered everything from adaptive learning platforms to the debate over ethical AI use in classrooms and universities. My approach: balancing enthusiasm for educational innovation with legitimate concerns about equity and access. When I'm not writing about EdTech, I'm probably exploring new AI tools for educators or reflecting on how technology can truly democratize knowledge without leaving anyone behind.


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.