As generative artificial intelligence (AI) becomes increasingly integrated into academic settings, institutions are grappling with its implications for pedagogy and student development. Dan Cryer, an English professor at Johnson County Community College in Kansas, likens using AI tools to write college essays to “bringing a forklift to the gym.” He emphasizes that while AI can move the workload, it does not aid in the development of the critical thinking and writing skills essential for student growth.
Cryer notes that the introduction of AI not only raises questions about academic integrity but also adds to the workload for educators, who now face challenges in evaluating the originality of student submissions. His institution, like many across the United States, offers students access to these AI tools, complicating the discussion around their responsible use. “It’s not fair to them,” Cryer says of students navigating the fine line between ethical and unethical uses of AI.
Since the launch of ChatGPT over three years ago, generative AI has permeated daily life, compelling both professors and students to reconsider its role, particularly in humanities courses. A recent survey by Inside Higher Ed and Generation Lab found that about 85% of undergraduates use AI for tasks such as brainstorming and studying, with approximately 19% admitting to using it to write entire essays. More than half of these students expressed ambivalence, acknowledging that while AI sometimes aids their studies, it may also hinder deeper engagement with the material.
Aysa Tarana, a recent graduate from the University of Minnesota Twin Cities, began using AI for minor tasks like generating topic ideas but eventually ceased its use, feeling she was “outsourcing my thinking.” This sentiment echoes Cryer’s concerns, as he strives to instill in his students that the educational journey is about developing thought processes rather than merely producing output.
After studying generative AI during a sabbatical, Cryer concluded that educators should minimize their reliance on these tools. He articulates that the objective of a college education is the process of learning, not simply the final product, stating, “What we need is students to go through the process of writing research papers so they can become better thinkers.” He warns that an overdependence on AI may deprive students of the education they seek.
A Divergent Perspective on AI
In contrast, Leslie Clement, an English professor at Johnson C. Smith University in Charlotte, North Carolina, views generative AI as a collaborative tool that can enhance educational experiences. Clement encourages responsible AI use among her students, allowing them to harness its capabilities for outlining papers and generating feedback. She has also co-developed a course titled “African Diaspora and AI,” which explores the global impacts of AI, including ethical considerations related to resource extraction in the Democratic Republic of Congo.
Clement emphasizes the importance of teaching students to interrogate AI tools critically, advocating for a balanced approach where technology serves as an aid rather than a crutch. “We’re looking at Afrofuturism, how students can use these tools to reimagine their futures,” she adds, reflecting her commitment to fostering inclusive and ethical discourse.
In Durham, North Carolina, pre-med student Anjali Tatini uses AI as an educational ally to navigate complex subjects in her dual major of global health and neuroscience. She recalls utilizing Google’s AI chatbot, Gemini, to clarify difficult biology concepts, appreciating its ability to adjust explanations to her level of understanding. Tatini also employs AI for generating practice problems and brainstorming in various classes, valuing the flexibility it offers amidst her busy schedule.
While she finds AI beneficial for organization and support, she remains steadfast in her decision not to let it write for her: “If I’m putting something out, I want it to be something that I’m proud to say this is mine.” Similarly, Hannah Elder, a junior and pre-law student at the University of North Carolina, cherishes her autonomy in writing. She uses AI for proofreading and checking adherence to rubrics but avoids relying on it for idea generation, believing that the act of articulating her thoughts is crucial to her academic development.
Elder asserts that while the integration of AI in education is inevitable, it should be approached thoughtfully. “If teachers incorporate it in a responsible way through academics, I think it’ll be seen less as a cheat code and more as a reality,” she concludes, illustrating a broader call for educational institutions to adapt to the evolving landscape of technology in learning.