Arizona State University faculty are advising against the use of hidden AI prompts in academic assignments, citing concerns over their reliability and potential accessibility risks for students. This guidance, developed within the College of Integrative Sciences and Arts (CISA), emerges as universities explore methods to detect AI-generated work while upholding academic integrity.
The recommended approach involves embedding invisible text or instructions within assignment materials, designed to influence outputs when students input this content into AI tools. Faculty members assert that this tactic is ineffective and could lead to unintended consequences for both students and instructors.
In a recent LinkedIn post, Adam Pacton, Dean’s Fellow for AI Literacy and Integration at ASU, emphasized that such hidden prompts do not meet the necessary standards for academic integrity investigations. He stated, “First, in our college ‘evidence’ of AI use generated through hidden prompts isn’t sufficient for a formal academic integrity inquiry. It’s a trap that doesn’t ‘catch’ anything.”
The guidance highlights that AI systems do not respond consistently to hidden instructions. Consequently, the same prompt may either be ignored, partially followed, or replicated in ways that are indistinguishable from standard student writing. Thus, specific phrases or outputs cannot be considered conclusive proof of misconduct.
Moreover, the efficacy of hidden prompts is limited, primarily affecting those students who directly copy and paste assignment text into AI tools, while others may navigate these checks undetected. The faculty’s critique also points to significant accessibility issues. Hidden text in digital materials could be identified by screen readers and other assistive technologies, potentially exposing instructions to certain students while leaving others in the dark.
Pacton further warned, “Second, invisible text in digital course materials may violate the 2024 federal ADA Title II rule. Screen readers can surface hidden instructions in confusing ways to students using assistive technology. That’s not a quirk of detection failure; that’s an accessibility failure.” Such discrepancies risk creating inconsistent student experiences within the same assignment, particularly for those relying on assistive tools, and could lead to compliance violations under federal accessibility standards.
The implications extend beyond technical issues, as the guidance underscores the broader impact on classroom dynamics. Faculty express concern that detection-based tactics may foster an “adversarial” atmosphere, where students feel instructors are more focused on catching them than on facilitating their learning. The guidance notes that some students are already familiar with these detection methods, diminishing their effectiveness while potentially increasing distrust in assessment practices.
According to the guidance, this approach offers limited benefits for detecting AI use, alongside potential reputational and pedagogical drawbacks. In lieu of using detection methods, faculty recommend redesigning assignments to make AI offloading more challenging or less relevant. Suggested strategies include requiring personal reflections, incorporating staged drafts with revision tracking, and embedding assignments within in-class discussions.
Instructors are also encouraged to establish verification points, allowing students to explain their processes in their own words, and to clearly delineate permissible uses of AI tools for each assignment. Pacton noted that this framework is not formal policy but part of an ongoing effort to develop more effective approaches within the college, characterizing it as “us learning out loud, working across roles, and trying to do right by students and colleagues alike within the college, the university, and the larger sector.”
As universities grapple with the challenges presented by AI technologies, ASU’s guidance reflects a critical shift toward fostering transparency and trust in academic environments, aiming to support both student learning and integrity. The dialogue around AI’s role in education is evolving, and the focus on holistic assignment design may pave the way for more equitable and effective teaching practices.
See also
Andrew Ng Advocates for Coding Skills Amid AI Evolution in Tech
AI’s Growing Influence in Higher Education: Balancing Innovation and Critical Thinking
AI in English Language Education: 6 Principles for Ethical Use and Human-Centered Solutions
Ghana’s Ministry of Education Launches AI Curriculum, Training 68,000 Teachers by 2025
57% of Special Educators Use AI for IEPs, Raising Legal and Ethical Concerns



















































