90% of UK Students Use GenAI in Assessments: Emphasizing Human Learning Essentials

Ninety percent of UK undergraduates are using generative AI in assessments, prompting universities to reinforce essential human learning principles.

Nine in ten UK undergraduates surveyed by the Higher Education Policy Institute (Hepi) reported using generative AI (GenAI) in their assessments, highlighting a shift in academic practices. Students can now produce polished assignments with diminished engagement in the underlying concepts, raising concerns that automation may undermine the essence of learning itself. This trend poses a risk of transforming intellectual struggle into a mere performance of intelligence, prompting universities to reinforce the critical notion that learning cannot be automated or outsourced. Writing, researching, and reflecting remain vital components of true understanding.

To address the opportunities and challenges posed by GenAI, educational institutions must anchor their innovations in the fundamental principles of human learning. One way to achieve this is by utilizing AI to support, rather than supplant, cognitive development. Purposefully designed AI tools can alleviate cognitive overload by providing structured explanations, examples, and formative feedback, allowing students to focus on reasoning and interpretation. However, this potential is only realized when students are explicitly taught metacognition—the ability to think about their own thinking.

Metacognitive learners become more discerning users of technology because they learn to recognize how AI influences their thought processes. This skill can be fostered through straightforward classroom practices. For instance, students might keep brief AI reflection logs, documenting which AI tools they used, the outputs they received, and how they revised or challenged those outputs. Another effective exercise is “compare and critique,” in which students complete a task both with and without AI and then reflect on the differences in reasoning, depth, and confidence. “Think-aloud” activities can also model how to evaluate an AI response in real time by questioning its assumptions, evidence, and tone. Such routines transform AI from a mere shortcut into a supportive scaffold, helping students cultivate awareness of their own learning habits and protecting the reflective capacity that is central to intellectual growth.

Despite the digital evolution, the foundations of effective teaching remain steadfast. Clarity is essential; students must understand not only what they are learning but also why it matters. Transparency in learning goals and assessments can transform evaluations into tools for growth rather than final judgments. Feedback, whether generated by humans or AI, should facilitate improvement, turning assessment into a continued learning experience.

Moreover, learning continues to be a profoundly social endeavor. In a world dominated by personalized digital feeds and algorithmic recommendations, universities must maintain their role as venues for dialogue, disagreement, and collaborative discovery. Group discussions, collaborative projects, and peer reviews foster empathy, intellectual curiosity, and resilience—qualities that even the most advanced machine cannot replicate. These attributes remain the hallmarks of meaningful education.

AI as a cognitive co-pilot

The most effective application of AI in higher education may arise from what can be termed extended cognitive hygiene—a disciplined approach to selecting and utilizing digital tools. Each introduction of AI should have a clear pedagogical purpose, whether it is to prompt reflection, stimulate creativity, or alleviate cognitive load. Central to this process is AI literacy, which must be taught explicitly rather than assumed. Educators can incorporate “verification tasks” into lessons that require students to cross-check AI-generated facts against reliable sources and identify inaccuracies or gaps. Exploring bias through comparative analyses of responses across different AI tools can also enhance critical evaluation skills.

Engaging students in labeling claims as “evidence-based,” “speculative,” or “unsupported” cultivates habits of critical analysis. Over time, these structured exercises can transform skepticism into skill, empowering students to engage confidently with AI while maintaining academic integrity. Additionally, prompt design can serve as a new avenue for developing critical thinking. Activities such as prompt-redrafting, where students refine their questions based on feedback, and the “question ladder,” which encourages successive improvements to a vague prompt, can reinforce this skill set. Thinking aloud while crafting questions helps students grasp how intention influences outcomes, demonstrating that effective inquiries are products of disciplined thought. In this way, AI can function as a cognitive co-pilot, enhancing rather than replacing human capabilities and allowing universities to merge digital fluency with intellectual integrity.

GenAI does not herald the demise of learning; rather, it invites a re-examination of what it truly means to learn. The objective is to harmonize the precision of machines with the empathy, creativity, and ethical awareness that are uniquely human. As GenAI continues to evolve and improve, the future of higher education hinges on the cultivation of intelligent, reflective, and compassionate individuals. The challenge ahead lies not in outsmarting AI but in outgrowing it.

Patrice Seuwou is an associate professor of learning and teaching and director of the Centre for the Advancement of Racial Equality at the University of Northampton.
