As artificial intelligence (AI) continues to reshape education, Professor Rebecca Eynon of the Oxford Internet Institute and the University of Oxford’s Department of Education warns that schools may be teaching the wrong lessons about it. Eynon argues that education should not merely prepare students to navigate technology but should empower them to actively shape it. She emphasizes that integrating AI into educational frameworks calls for a proactive approach rather than a reactive one.
In her research with Oxford’s Towards Equity-Focused EdTech Project, Eynon found that many students lack the digital skills adults often assume they possess: many struggle with basic tasks such as managing files or sending emails, and teachers themselves are often unsure how to embed digital literacy into their curricula. This points to an urgent need for a comprehensive strategy that goes beyond technical skills to include critical thinking, inclusion, and ethical responsibility in the context of AI.
Critical Thinking Over Coding
Eynon stresses that digital literacy must go beyond practical competencies such as recognizing misinformation or using AI tools safely: students also need to understand the broader social, political, and economic systems that shape the technologies they engage with. She stated, “It is important that young people are not positioned as ‘end users’ of fixed AI technologies. Instead, they should be supported in becoming citizens who can use and engage with technology critically in the richest sense — including awareness of economic, political, and cultural issues.”
This means equipping students with the knowledge to recognize how bias infiltrates algorithms, how tech companies monetize data, and how misinformation propagates. By cultivating these critical faculties, students can learn to challenge and question AI systems instead of passively accepting them.
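To make the idea of bias infiltrating an algorithm concrete, consider a minimal sketch in Python of the kind of classroom exercise this might suggest. Everything in it is invented for illustration (the synthetic hiring dataset, the group labels, and the decision threshold are assumptions, not drawn from Eynon’s research): a trivial model “trained” on historically skewed records ends up rejecting equally qualified candidates from the disadvantaged group.

```python
# Hypothetical classroom sketch: how skew in training data becomes skew in
# a model's decisions. All data and names here are invented for illustration.
import random

random.seed(42)

def make_biased_history(n=1000):
    """Synthetic past hiring records. Both groups are equally qualified,
    but group B was historically approved far less often."""
    records = []
    for _ in range(n):
        group = random.choice(["A", "B"])
        qualified = random.random() < 0.5            # same rate for both groups
        if group == "A":
            hired = qualified and random.random() < 0.9
        else:
            hired = qualified and random.random() < 0.3  # historical bias
        records.append((group, qualified, hired))
    return records

def train_majority_rule(records):
    """'Learn' each group's historical approval rate -- the shortcut a naive
    model takes when group membership correlates with past outcomes."""
    rates = {}
    for group in ("A", "B"):
        outcomes = [hired for g, _, hired in records if g == group]
        rates[group] = sum(outcomes) / len(outcomes)
    # The decision ignores qualification entirely: group is used as a proxy.
    return lambda group: rates[group] >= 0.25

history = make_biased_history()
model = train_majority_rule(history)

for group in ("A", "B"):
    verdict = "approved" if model(group) else "rejected"
    print(f"Equally qualified candidate from group {group}: {verdict}")
```

Running the sketch prints an approval for group A and a rejection for group B, even though both groups were generated with identical qualification rates; the only thing the model has learned is the historical skew. Exercises of this shape let students see, rather than merely be told, how past inequities can be laundered through an ostensibly neutral system.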
Designing for Inclusion
Furthermore, Eynon advocates for an inclusive approach to AI education that incorporates hands-on design elements. “Design is a key aspect of digital literacy, offering students ways to reflect on and make visible social injustices while examining how technology’s affordances and values can support or hinder inclusion,” she explains. This could involve projects where students investigate bias in AI or develop digital tools tailored to their communities.
By embedding these principles across various subjects — not solely within computer science — educators can help a broader range of students identify their roles in shaping a more equitable digital future.
Collective Responsibility for AI Governance
While it’s crucial for students to learn how to critique generative AI, Eynon warns against placing the burden of fixing flawed systems solely on them: responsibility must be shared among governments, educators, and tech companies. “There is a societal responsibility that does not just fall on young people to find ways to better govern, regulate, and change AI,” Eynon points out. Individuals should not be expected to navigate the ethical, legal, and environmental challenges surrounding AI alone.
As AI technologies continue to evolve, the imperative is clear: educational institutions must rethink their strategies. By fostering critical engagement, designing for inclusion, and sharing responsibility, they can prepare students not just to use technology but to shape it in ways that reflect societal values and ethics. That shift is necessary for the future of education and society alike.