As artificial intelligence (AI) technologies continue to evolve rapidly, educational institutions are grappling with how best to integrate these tools into their curricula while maintaining ethical standards. A recent survey from Copyleaks found that nearly 90% of university students globally are using AI to assist with their studies, with roughly a third using AI tools daily. The trend appears to be accelerating: nearly 75% of students reported an increase in their AI usage since 2024.
In response to these developments, professors, particularly in computer science, are reassessing their teaching strategies. Chris Gregg, a professor in the computer science department, said the department is actively modifying curricula to adapt to the changing landscape. Rather than focusing solely on AI detection technologies, educators are shifting toward hands-on learning experiences. The CS 106B course, for instance, has begun using in-person assessments that allow real-time comprehension checks between teaching assistants and students.
This approach has been well received, and plans are underway to extend it to other courses, including CS 106A. Gregg said the department now places greater weight on in-person midterms and finals, moving away from reliance on take-home assignments. Data from CS 106B indicates that students who used AI during assignments performed worse on exams than those who did not.
Gregg underscored the department’s responsibility to equip students with the fundamental skills needed for the workforce, especially in top tech companies like Google, Meta, and Apple. He argued that while understanding AI is crucial, it does not replace the necessity for core programming skills. Therefore, limiting AI usage in introductory courses like CS 106A and CS 106B is designed to help students confront challenges independently—a critical aspect of mastering coding fundamentals.
Despite these efforts, the rise of AI presents new hurdles. Notably, attendance at LaIR helper hours—dedicated office hours for CS 106A and CS 106B—has declined, a trend that may correlate with increased AI usage. “I hate to say this, but it’s actually true. I can’t trust anything that happens outside my eyeballs,” Gregg remarked, expressing concern about the potential for students to rely heavily on AI for assignments.
In parallel, humanities departments are implementing strict AI policies to emphasize human creativity and analysis. Marvin Diogenes, Associate Vice Provost for Undergraduate Education and Director of the Program in Writing and Rhetoric (PWR), highlighted the importance of fostering students’ unique abilities as language users. He cautioned against over-reliance on AI, asserting that it can hinder personal growth in writing and critical thinking.
PWR aims to guide students in drawing from their own experiences to strengthen their research and writing, rather than leaning on unchecked AI usage. In response to these concerns, Stanford has initiated AI Meets Education at Stanford (AIMES), an initiative designed to provide faculty and students with resources on responsible AI usage in educational settings.
The Office of Community Standards (OCS) is also taking steps to address AI-related academic integrity issues. Interim Director Lawrence Marshall noted that OCS is collaborating with the Academic Integrity Workshop to identify potential areas of dishonesty related to AI use. He advised caution, likening inappropriate AI usage to paying for a gym membership without actually working out. Marshall said disciplinary actions related to improper AI use are becoming increasingly common and that students often underestimate the repercussions of violating the Honor Code.
While stringent AI policies are prevalent in early undergraduate courses, advanced classes allow for greater flexibility. In capstone courses like CS 194 and CS 210, AI usage is explicitly encouraged. “It would almost be wrong to say you can’t use these [tools], because what’s the point?” Gregg noted, reflecting on how AI can enhance project outcomes while still allowing students to demonstrate their capabilities.
This leniency extends to graduate-level courses, where Kenneth Goodson, Vice Provost for Graduate Education, observed that students generally possess a more mature understanding of AI’s role in their academic work. He advocated for a tailored approach to AI policies, allowing faculty to adapt guidelines to their specific disciplines and expertise.
As AI continues to permeate educational landscapes, graduate students are increasingly incorporating AI into their research projects. Goodson noted a growing trend of AI appearing in thesis titles, suggesting that students are pushing the boundaries of knowledge in their fields. However, he cautioned that the advent of AI also raises new questions about its appropriate use in academia.
In this rapidly changing environment, educators are striving to strike a balance between harnessing the potential of AI and preserving the essential learning experiences that underpin education. Goodson emphasized the importance of students taking ownership of their learning journeys, as they navigate an ever-evolving landscape shaped by AI technologies.