
Study Links Generative AI Use in Classrooms to Declines in Critical Thinking Skills

Study reveals that frequent use of generative AI tools like ChatGPT correlates with a 20% decline in critical thinking skills among younger students.

As universities across the globe integrate generative AI tools like ChatGPT into their curricula, a critical conversation is emerging regarding the impact of such technologies on student learning. While many instructors assume students are already using these tools, some have started mandating their use in coursework. However, this shift raises an important question: Are educators inadvertently hindering students’ critical thinking skills by requiring reliance on machines?

A growing body of research indicates potential harm. A 2025 study by Michael Gerlich found a significant negative correlation between frequent use of generative AI and critical thinking abilities, particularly among younger users who may lack foundational reasoning skills. The adverse effects of AI dependency were notably less pronounced among participants with more advanced education, suggesting that those still developing their cognitive capabilities are particularly vulnerable.

Professional organizations are echoing these concerns. The IEEE Computer Society has highlighted the risks associated with AI-driven cognitive offloading, the outsourcing of mental tasks to machines. This trend poses significant challenges for educators tasked with developing students’ analytical capabilities, as reliance on AI tools may prevent students from engaging in the very reasoning processes they need to learn.

Critical thinking is an active, not passive, endeavor. When students turn to generative AI for tasks like dissecting arguments or evaluating evidence, they potentially forfeit the opportunity to engage in independent reasoning. The risk lies in their tendency to equate AI-generated outputs with genuine understanding, leading to a form of cognitive borrowing that may dilute their learning experience.

This situation presents an ethical dilemma for educators. Professors have a duty of care to their students and are responsible for the foreseeable outcomes of their teaching methods. If emerging evidence suggests that mandating the use of generative AI tools undermines critical thinking—particularly among students in need of these skills—the requirement could inadvertently cause more harm than good.

Consider the analogy of a foreign language class, where requiring students to use Google Translate for assignments would defeat the purpose of learning the language itself. Similarly, AI chatbots function as translation engines for reasoning, converting prompts into arguments without the cognitive work that fosters true comprehension. By prioritizing convenience over cognitive engagement, students may lose out on the intellectual rigor necessary for developing logical reasoning.

Proponents of mandated AI use often argue it promotes equity, ensuring that all students become proficient with tools essential for the future workforce. However, this perspective overlooks a crucial reality: students who are still building their academic skills are at the greatest risk of becoming overly reliant on AI. Gerlich’s findings affirm this concern, suggesting that making generative AI compulsory could exacerbate existing disparities rather than equalize them. Students with weaker skills may be encouraged to delegate their thinking to chatbots instead of enhancing their own capabilities.

Beyond cognitive offloading, informed consent is another critical consideration. Students must understand that generative AI tools, designed to mimic human reasoning, can subtly alter their cognitive habits. If educators require the use of these systems, they owe it to their students to provide a comprehensive overview of associated risks.

Importantly, this is not a call for a blanket ban on generative AI. Students are likely to use these tools regardless of classroom policies. Instead, educators can create assignments that prioritize the process of reasoning, employing methods such as oral defenses, argument maps, and evidence-tracing tasks that make critical thinking visible and assessable.

Additionally, implementing “offloading audits” before assigning academic work could help identify potential pitfalls. Questions such as whether a task requires traceable reasoning steps, whether AI-generated responses could pass for genuine understanding, and whether there are alternative pathways to demonstrate competence can guide assignment design. If such criteria are not met, educators should consider redesigning the task.

Ultimately, professors must continually assess whether tasks necessitate independent student performance. In courses focused on critical thinking, the answer is often yes. Mandating AI use in these contexts may therefore be counterproductive. Just as individuals do not improve their physical strength by allowing machines to lift weights for them, students will not enhance their thinking skills by relying on chatbots for cognitive tasks.

The mission of higher education is not to chase after technological trends but to cultivate intellectual habits that endure beyond the lifespan of current tools. As evidence mounts suggesting that requiring generative AI use may do more harm than good, educators should embrace the guiding principle of responsible teaching: First, do no harm.

Moti Mizrahi, Ph.D. is a professor of Philosophy of Science and Technology at the Florida Institute of Technology in Melbourne, Florida.



© 2025 AIPressa · Part of Buzzora Media · All rights reserved.