Swarthmore Faculty Discuss Generative AI’s Impact on Higher Education and Policy Needs

Swarthmore faculty warn that over-reliance on generative AI may undermine critical thinking and creativity, urging policies to balance innovation with educational integrity.

As the landscape of higher education continues to evolve, the integration of generative artificial intelligence (AI) stands out as a compelling topic of discussion. Faculty members from Swarthmore College recently shared their varied perspectives on this issue, emphasizing both the potential benefits and challenges that generative AI poses for education, particularly in the liberal arts.

Syon Bhanot, Associate Professor of Economics, states, “Generative AI is here, and we cannot pretend it is not.” He encourages proactive adaptation to the technology rather than attempts to exclude it. Bhanot has adjusted his course assessments to limit AI-enabled shortcuts, while also recognizing situations where AI can enhance learning, such as assisting with coding tasks in tools like Stata or LaTeX. He advocates incorporating AI strategically to ease faculty workload and improve morale, seeing a role for the technology in making academic work more efficient.

Sibelan Forrester, Sarah W. Lippincott Professor of Modern and Classical Languages, offers a nuanced perspective on AI in translation and interpretation. While she acknowledges the utility of tools like Google Translate, she notes significant limitations: AI struggles with less commonly spoken languages and with figurative language, which often leads to poor translations. “The precise meaning of an original is less important than its sound and rhythm,” she explains, pointing to cultural nuances that AI fails to capture. This raises broader questions about whether AI can preserve the richness of human communication.

Emily Gasser ’07, Associate Professor of Linguistics, is candid in her criticism, labeling generative AI systems “synthetic text extruding machines.” She asserts that such systems cannot think or analyze deeply and produce outputs that require rigorous verification. Gasser warns against relying on AI for scholarly work, arguing that doing so undermines the fundamental purpose of education: fostering critical thinking and creativity. She also cites real-world consequences of AI misinformation, ranging from legal mishaps to health hazards.


Sam Handlin ’00, Associate Professor of Political Science, advocates a middle-ground approach to AI in academia. He acknowledges how pervasive AI has become in everyday tools but worries about its cognitive impact on students, pointing to recent studies suggesting that reliance on AI for writing and reasoning can erode cognitive skills. He argues for a college-wide policy to govern AI use, particularly in tasks that demand complex cognitive effort.

Emad Masroor, Visiting Assistant Professor of Engineering, echoes concerns about the detrimental effects of relying on generative AI. He argues that these tools often provide an “illusion of knowledge,” bypassing the intellectual processes essential to true learning, and warns that they may erode students’ abilities to write well, think critically, and engage deeply with content, compromising the very essence of education.

Warren Snead, Assistant Professor of Political Science, emphasizes the importance of the writing process itself. He argues that writing assignments promote critical thinking and the clear articulation of ideas, skills essential to a well-rounded education, and that AI tools threaten to short-circuit that process. “The value in writing assignments is that it pushes students to think,” he asserts, advocating department-level autonomy in setting AI policies.

Donna Jo Napoli, Professor of Linguistics and Social Justice, expresses a similar sentiment, indicating that while AI can be useful for mundane tasks, it should not replace the critical engagement that education fosters. She urges students to embrace the challenges of learning, as those struggles are integral to personal and intellectual growth.


Federica Zoe Ricci, Assistant Professor of Statistics, presents a balanced view, recognizing both the potential and the pitfalls of AI in academia. While she sees value in using AI for repetitive tasks, she warns that over-reliance can hinder skill development and the social interactions crucial to success. Students, she believes, must learn to engage with AI without compromising their educational experience.

The perspectives shared by these faculty members reflect a significant discourse on the implications of generative AI in higher education. As institutions navigate this complex landscape, the challenge remains in finding a balance between leveraging the benefits of AI and preserving the integrity of the educational process.

Written By David Park

At AIPressa, my work focuses on discovering how artificial intelligence is transforming the way we learn and teach. I've covered everything from adaptive learning platforms to the debate over ethical AI use in classrooms and universities. My approach: balancing enthusiasm for educational innovation with legitimate concerns about equity and access. When I'm not writing about EdTech, I'm probably exploring new AI tools for educators or reflecting on how technology can truly democratize knowledge without leaving anyone behind.
