
Study Links Generative AI Use in Classrooms to Declines in Critical Thinking Skills

Study reveals that frequent use of generative AI tools like ChatGPT correlates with a 20% decline in critical thinking skills among younger students.

As universities across the globe integrate generative AI tools like ChatGPT into their curricula, a critical conversation is emerging regarding the impact of such technologies on student learning. While many instructors assume students are already using these tools, some have started mandating their use in coursework. However, this shift raises an important question: Are educators inadvertently hindering students’ critical thinking skills by requiring reliance on machines?

A growing body of research indicates potential harm. A 2025 study by Michael Gerlich established a clear negative correlation between frequent use of generative AI and critical thinking abilities, particularly among younger users who may lack foundational reasoning skills. The adverse effects of AI dependency were notably less pronounced among students with more advanced education, suggesting that those still developing their cognitive capabilities are particularly vulnerable.

Professional organizations are echoing these concerns. The IEEE Computer Society has highlighted the risks of AI-driven cognitive offloading, the outsourcing of mental tasks to machines. This trend poses significant challenges for educators tasked with developing students’ analytical capabilities, as reliance on AI tools may prevent students from engaging in the very reasoning processes they need to learn.

Critical thinking is an active, not passive, endeavor. When students turn to generative AI for tasks like dissecting arguments or evaluating evidence, they potentially forfeit the opportunity to engage in independent reasoning. The risk lies in their tendency to equate AI-generated outputs with genuine understanding, leading to a form of cognitive borrowing that may dilute their learning experience.

This situation presents an ethical dilemma for educators. Professors have a duty of care to their students and are responsible for the foreseeable outcomes of their teaching methods. If emerging evidence suggests that mandating the use of generative AI tools undermines critical thinking—particularly among students in need of these skills—the requirement could inadvertently cause more harm than good.

Consider the analogy of a foreign language class, where requiring students to use Google Translate for assignments would defeat the purpose of learning the language itself. Similarly, AI chatbots function as translation engines for reasoning, converting prompts into arguments without the cognitive work that fosters true comprehension. By prioritizing convenience over cognitive engagement, students may forgo the intellectual rigor necessary for developing logical reasoning.

Proponents of mandated AI use often argue it promotes equity, ensuring that all students become proficient with tools essential for the future workforce. However, this perspective overlooks a crucial reality: students who are still building their academic skills are at the greatest risk of becoming overly reliant on AI. Gerlich’s findings affirm this concern, suggesting that making generative AI compulsory could exacerbate existing disparities rather than equalize them. Students with weaker skills may be encouraged to delegate their thinking to chatbots instead of enhancing their own capabilities.

Beyond cognitive offloading, informed consent is another critical consideration. Students must understand that generative AI tools, designed to mimic human reasoning, can subtly alter their cognitive habits. If educators require the use of these systems, they owe it to their students to provide a comprehensive overview of associated risks.

Importantly, this is not a call for a blanket ban on generative AI. Students are likely to use these tools regardless of classroom policies. Instead, educators can create assignments that prioritize the process of reasoning, employing methods such as oral defenses, argument maps, and evidence-tracing tasks that make critical thinking visible and assessable.

Additionally, implementing “offloading audits” before assigning academic work could help identify potential pitfalls. Asking whether a task requires traceable reasoning steps, whether an AI-generated response could pass for genuine understanding, and whether alternative pathways exist for demonstrating competence can guide assignment design. If such criteria are not met, educators should consider redesigning the task.

Ultimately, professors must continually assess whether tasks necessitate independent student performance. In courses focused on critical thinking, the answer is often yes. Mandating AI use in these contexts may therefore be counterproductive. Just as individuals do not improve their physical strength by allowing machines to lift weights for them, students will not enhance their thinking skills by relying on chatbots for cognitive tasks.

The mission of higher education is not to chase after technological trends but to cultivate intellectual habits that endure beyond the lifespan of current tools. As evidence mounts suggesting that requiring generative AI use may do more harm than good, educators should embrace the guiding principle of responsible teaching: First, do no harm.

Moti Mizrahi, Ph.D. is a professor of Philosophy of Science and Technology at the Florida Institute of Technology in Melbourne, Florida.
