AI Education

Oxford Professor Reveals 3 Essential Ways Schools Can Shape AI Literacy for Students

Oxford Professor Rebecca Eynon urges schools to empower students to shape AI, advocating for critical thinking, digital inclusion, and shared societal responsibility in education.

As artificial intelligence (AI) continues to reshape educational landscapes, Professor Rebecca Eynon from the Oxford Internet Institute and the University of Oxford’s Department of Education warns that schools may be imparting misguided lessons. Eynon asserts that education should not merely prepare students to navigate technology but should empower them to actively shape it. She emphasizes that a proactive approach, rather than a reactive stance, is essential for integrating AI into educational frameworks.

In her research with Oxford’s Towards Equity-Focused EdTech Project, Eynon found that many students lack the digital skills adults often assume they possess. Many struggle with basic tasks such as managing files or sending emails, and teachers themselves are often uncertain about how to embed digital literacy into their curricula. This highlights an urgent need for a comprehensive strategy that goes beyond technical skills to include critical thinking, inclusion, and ethical responsibility in the context of AI.

Critical Thinking Over Coding

Eynon stresses that digital literacy must transcend mere technical skills, such as recognizing misinformation or using AI tools safely. Students need to understand the broader social, political, and economic systems that influence the technologies they engage with. She stated, “It is important that young people are not positioned as ‘end users’ of fixed AI technologies. Instead, they should be supported in becoming citizens who can use and engage with technology critically in the richest sense — including awareness of economic, political, and cultural issues.”

This means equipping students with the knowledge to recognize how bias infiltrates algorithms, how tech companies monetize data, and how misinformation propagates. By cultivating these critical faculties, students can learn to challenge and question AI systems instead of passively accepting them.
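The point about bias infiltrating algorithms can be made concrete with a small classroom-style sketch. The scenario, groups, and numbers below are invented for illustration and are not drawn from Eynon's research: a naive model that "learns" approval rates from biased historical decisions simply reproduces that bias in its future predictions.

```python
# Toy illustration of how bias in training data flows into an algorithm's
# decisions. The dataset and approval scenario are hypothetical.
from collections import Counter

# Invented historical decisions: group "A" was approved far more often than
# group "B" purely because of past human bias, not merit.
history = [("A", "approved")] * 90 + [("A", "denied")] * 10 \
        + [("B", "approved")] * 40 + [("B", "denied")] * 60

def learn_approval_rates(records):
    """'Learn' each group's approval rate from historical outcomes."""
    totals, approvals = Counter(), Counter()
    for group, outcome in records:
        totals[group] += 1
        if outcome == "approved":
            approvals[group] += 1
    return {g: approvals[g] / totals[g] for g in totals}

rates = learn_approval_rates(history)
print(rates)  # the "model" simply reproduces the skew in its training data
```

A demonstration like this lets students see that the disparity in the model's output comes entirely from the data it was given, which is exactly the kind of questioning of AI systems the article describes.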

See also: WSU Researchers Secure $82,500 Grant from Microsoft to Enhance AI in Rural Schools

Designing for Inclusion

Furthermore, Eynon advocates for an inclusive approach to AI education that incorporates hands-on design elements. “Design is a key aspect of digital literacy, offering students ways to reflect on and make visible social injustices while examining how technology’s affordances and values can support or hinder inclusion,” she explains. This could involve projects where students investigate bias in AI or develop digital tools tailored to their communities.

By embedding these principles across various subjects — not solely within computer science — educators can help a broader range of students identify their roles in shaping a more equitable digital future.

Collective Responsibility for AI Governance

While it’s crucial for students to learn how to critique generative AI, Eynon warns against placing the burden of rectifying flawed systems solely on them. She stresses that societal responsibility must be shared among governments, educators, and tech companies. “There is a societal responsibility that does not just fall on young people to find ways to better govern, regulate, and change AI,” Eynon points out. The expectation should not be that individuals can navigate ethical, legal, and environmental challenges surrounding AI alone.

In summary, as AI technologies continue to evolve, the imperative is clear: educational institutions must rethink their strategies. By fostering critical engagement, advocating for inclusion, and sharing responsibilities, they can prepare students not just to use technology but to actively shape it in ways that reflect societal values and ethics. This shift is not just beneficial; it’s necessary for the future of education and society alike.

Written By David Park

At AIPressa, my work focuses on discovering how artificial intelligence is transforming the way we learn and teach. I've covered everything from adaptive learning platforms to the debate over ethical AI use in classrooms and universities. My approach: balancing enthusiasm for educational innovation with legitimate concerns about equity and access. When I'm not writing about EdTech, I'm probably exploring new AI tools for educators or reflecting on how technology can truly democratize knowledge without leaving anyone behind.


© 2025 AIPressa · Part of Buzzora Media · All rights reserved. This website provides general news and educational content for informational purposes only. While we strive for accuracy, we do not guarantee the completeness or reliability of the information presented. The content should not be considered professional advice of any kind. Readers are encouraged to verify facts and consult appropriate experts when needed. We are not responsible for any loss or inconvenience resulting from the use of information on this site. Some images used on this website are generated with artificial intelligence and are illustrative in nature. They may not accurately represent the products, people, or events described in the articles.