As artificial intelligence (AI) becomes increasingly integrated into everyday life, educators and policymakers are advocating for AI ethics and misinformation literacy to be treated as essential skills for students, on par with numeracy and language abilities. The National Curriculum Framework for School Education (2023) has introduced AI learning from Class 3, signalling that the digital landscape is no longer a niche subject but a fundamental part of children’s education.
“Children are already immersed in digital systems governed by algorithms and synthetic content,” stated Dr. Ranjana Kumari, Director of the Centre for Social Research. She emphasized that ethical awareness and misinformation literacy are vital for young people to navigate digital spaces confidently and autonomously.
Experts caution that without foundational knowledge in AI ethics and critical thinking, students remain vulnerable to manipulation and harmful content. Dr. Kumari noted that early grounding in these areas equips young people to question what they see and to build their own agency. “With the rise of deepfakes, algorithmic amplification, and gendered misinformation, students need to understand not only how technology works but also how it can be misused,” she said.
As misinformation becomes more sophisticated and emotionally persuasive, the ability to verify content, recognize bias, and understand consent is crucial for digital well-being, especially for girls and marginalized groups who face disproportionate online harm.
Introducing children aged eight to ten to AI risks and digital verification is increasingly seen as both timely and necessary. Today’s children encounter screens, videos, and social platforms earlier than previous generations did, often without understanding how content is generated, edited, or manipulated. “Starting early helps children internalize safety norms much like they learn reading or numeracy,” Dr. Kumari explained, adding that this enables them to recognize emerging risks such as deepfakes and online deception before harm occurs.
Furthermore, early education can alleviate parental anxiety in an era where AI-driven misinformation and impersonation are becoming increasingly difficult to detect. The Ministry of Education and the Department of School Education and Literacy (DoSE&L) are at the forefront of anchoring AI literacy within national frameworks. However, experts insist that curriculum development should be multidisciplinary.
Dr. Kumari stressed, “A credible AI literacy curriculum cannot be built in silos,” highlighting the need for collaboration among educators, technologists, behavioral scientists, child-rights advocates, and civil society groups. To remain relevant, the curriculum must undergo periodic review, incorporate global learning, and ground itself in the everyday realities of Indian life, particularly for girls and marginalized groups. Embedding principles of dignity, ethics, and safety ensures that AI learning is not only technically sound but also socially just and future-ready.
Schools are encouraged to integrate digital verification and AI awareness into daily classroom learning instead of treating AI safety as an add-on. Using real-life examples—such as viral videos and common online scams—can help students grasp risks in relatable ways. Dr. Kumari also underscored the importance of strengthening school safety systems, stating that “students must feel safe reporting harm without fear of judgment.” Open discussions about deception, consent, and online abuse are essential for building trust within school ecosystems.
Teachers serve as the first line of trust for students; however, many remain inadequately prepared to address AI-driven harms, including impersonation and algorithmic bias. “Teachers cannot guide students through digital risks if they themselves are not equipped,” Dr. Kumari pointed out. Structured and continuous training aligned with national programs such as NISHTHA can help educators identify synthetic content and respond empathetically to early signs of distress, thereby reinforcing the entire digital safety chain within schools.
For AI ethics education to be effective, it must connect with students’ real digital experiences. Engaging discussions around viral challenges and misinformation episodes enable students to practice verification and reflect on the actual consequences of online harm. “TASI 2025 showed that technological governance cannot rely solely on algorithms; it must center on lived experiences and ethical design,” Dr. Kumari added. This approach can position India not only as a rapidly scaling digital economy but also as an emerging global voice in ethical technology governance, particularly within the Global South. The overarching message is clear: technology must serve people, especially those most vulnerable to harm.