
AI Technology

Brookings Report Reveals AI’s Cognitive Risks: Students Face ‘Great Unwiring’ of Skills

Brookings report warns that AI’s rise may lead to “cognitive atrophy,” risking critical thinking skills among students as reliance on tools like ChatGPT grows.

The rise of artificial intelligence (AI) has raised significant concerns among educators and experts regarding its impact on student learning and cognitive development. A report released by the Brookings Institution on Wednesday highlights that while AI offers substantial benefits, it also poses a risk of “cognitive atrophy” among students, as they increasingly rely on these tools for academic tasks. The findings emerge from a yearlong study examining the profound effects of generative AI on education, underscoring fears that reliance on these technologies could lead to a significant decline in critical thinking skills.

Historically, cheating in school required effort, whether that meant coaxing an older sibling into doing an assignment or hunting down an answer key. The internet lowered the bar with resources like CliffsNotes and Course Hero, but real barriers to academic dishonesty remained. Today, the process has been streamlined to a few clicks: students can simply log on to ChatGPT or a similar platform, paste their queries, and receive instant answers. This ease of access has alarmed educators, who worry that the convenience of AI could diminish students’ intellectual engagement and learning outcomes.

The Brookings report indicates that the frictionless nature of AI contributes to a phenomenon termed “cognitive debt,” where students offload challenging cognitive tasks to AI, effectively deferring their mental effort. One teacher noted, “Students can’t reason. They can’t think. They can’t solve problems.” The study, which compiled insights from hundreds of interviews and over 400 research studies, characterizes this trend as a “great unwiring” of students’ cognitive abilities.

Fast Food of Education

AI is described in the report as the “fast food of education,” providing quick answers but ultimately lacking depth. In traditional classrooms, the struggle to synthesize information is where genuine learning occurs. By bypassing this struggle, students are missing out on the critical thinking and problem-solving that are vital in education. The researchers argue that students are incentivized by an education system that rewards the easiest paths to high grades, leading even high-achieving individuals to depend on AI tools that can enhance their performance without fostering genuine understanding.

This dependency creates a feedback loop: students utilize AI, see improvements in their grades, and subsequently become more reliant on these tools. The report reveals that many students are operating in a state referred to as “passenger mode,” physically present in school but disengaged from learning. This detachment is further exacerbated by a phenomenon known as “digitally induced amnesia,” where students fail to retain information they have not actively processed.

Research indicates that reading skills are particularly at risk, as AI’s capability to summarize complex texts diminishes students’ attention span for lengthy material. One expert noted a shift in student attitudes, stating, “Teenagers used to say, ‘I don’t like to read.’ Now it’s ‘I can’t read, it’s too long.’” In writing, the homogeneity of ideas produced by AI-generated essays contrasts sharply with the richness of human-generated work, with research showing that human essays yield significantly more unique ideas than those crafted by AI.

Despite the concerns, not all students view reliance on AI as cheating. Roy Lee, the 22-year-old CEO of AI startup Cluely, faced suspension from Columbia University for creating a tool designed to assist software engineers during job interviews. He argues that utilizing AI is akin to using a calculator or spellcheck, asserting that every time technology enhances human capability, it incites panic.

However, researchers contend that while traditional tools facilitate cognitive offloading, AI amplifies it significantly. The report warns of the rise of “artificial intimacy,” as students spend considerable time interacting with AI-powered chatbots outside of academic settings. These bots, often designed to simulate empathy, risk eroding genuine relationships and emotional trust among peers.

The Brookings report presents a grim outlook on the potential pitfalls of AI in education but offers hope that the trajectory of its integration is not predetermined. To promote a more enriching educational experience, the authors propose a three-pillar framework: to transform classrooms to adapt to AI, to develop ethical integration frameworks for students and educators, and to establish safeguards for student privacy and emotional well-being. This proactive approach aims to mitigate the risks while harnessing the potential benefits of AI, ensuring that technology serves as a complement to human cognition rather than a replacement.

Written By: AiPressa Staff


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.