As artificial intelligence (AI) increasingly pervades classrooms, educators and researchers are grappling with its implications for teaching and learning. In a recent episode of the podcast “Your Undivided Attention,” host Daniel Barcay spoke with Rebecca Winthrop, who leads the Center for Universal Education at the Brookings Institution, about the center’s comprehensive report, “A New Direction for Students in an AI World.” The report examines how schools can better integrate AI into education while addressing potential risks to students’ cognitive and social development.
Winthrop highlighted that while there is a vision of AI enhancing education—enabling personalized learning experiences akin to having an “infinitely patient tutor”—the reality is far more complex. The report identifies a double-edged sword: AI tools can help teachers streamline lesson planning and grading, but they can also undermine the crucial trust between students and educators. Both teachers and students reported significant distrust, with 50% of students feeling their teachers were using AI in ways that could diminish the authenticity of their education.
“One of the most worrisome findings was the degradation of trust in the student-teacher relationship,” Winthrop noted, explaining that students often suspect their teachers of using AI for lesson creation and grading. This lack of transparency can exacerbate students’ feelings of being unfairly scrutinized and compromise their learning experience. The report also found that students are increasingly relying on AI for homework, with some even using AI-generated content to submit assignments without detection.
The report’s findings indicate that the current trajectory of AI in education is fraught with risks that overshadow its potential benefits. Winthrop pointed out that while narrow, strategic uses of AI—like tutoring support or administrative assistance—can be beneficial, the open-ended interaction with AI can lead to cognitive stunting. “Instead of developing critical thinking skills, students risk becoming overly reliant on AI, using it as a cognitive surrogate,” she said.
Winthrop discussed how students are not just passive recipients but active participants in this dilemma, many expressing concerns that AI could be making them “dumber.” A recent survey indicated that the primary worry among young adults regarding AI is not job displacement but the potential loss of their critical thinking abilities. The implications extend beyond cognitive skills; they touch on emotional and social behaviors as well. Winthrop warned that AI’s sycophantic nature could reduce students’ capacity to accept feedback, ultimately undermining their ability to learn and grow.
Against this backdrop, Winthrop emphasized the need for a balanced approach to AI in classrooms. “We have to safeguard the human-to-human interactions that are essential for learning,” she stated, advocating for a model where educational environments prioritize personal connections. This involves not just integrating technology for the sake of modernity but ensuring that the classroom remains a nurturing space for personal and intellectual growth.
Winthrop proposed three essential strategies for educators and policymakers: shift teaching methods to be less hackable by AI, foster holistic AI literacy among students and families, and implement regulatory safeguards against unsafe AI practices. “We need to ensure kids are not accessing frontier model chatbots that can be harmful,” she said, urging educators to build awareness and understanding of AI’s implications for their students.
As the discussion progressed, the importance of fostering a culture of curiosity and ethical reasoning in education became a focal point. Winthrop reiterated that the skills needed for an AI-driven future are deeply human—critical thinking, ethical orientation, and a love for lifelong learning. “Young people must feel empowered to take charge of the technology that shapes their lives,” she added. “Education should be about preparing them to navigate an uncertain future, not merely training them in technical skills.”
As schools face the challenge of integrating AI effectively, the conversation around it remains urgent. Striking a balance between harnessing AI’s benefits and safeguarding the educational experience hinges on reevaluating teaching practices and fostering an engaging, trusting environment for students. Winthrop’s insights serve as a reminder that the future of education in an AI world must prioritize the nurturing of human skills alongside technological advancement.
See also
Andrew Ng Advocates for Coding Skills Amid AI Evolution in Tech
AI’s Growing Influence in Higher Education: Balancing Innovation and Critical Thinking
AI in English Language Education: 6 Principles for Ethical Use and Human-Centered Solutions
Ghana’s Ministry of Education Launches AI Curriculum, Training 68,000 Teachers by 2025
57% of Special Educators Use AI for IEPs, Raising Legal and Ethical Concerns