A recent legal filing by California State University (CSU) has raised eyebrows for its apparent reliance on artificial intelligence (AI) to generate content. Administrative Law Judge Bernhard Rohrbacher identified the document as being riddled with inaccuracies, including what he termed “phantom quotations,” leading to CSU’s admission that AI assistance had been used in its preparation.
The case in question is before the California Public Employment Relations Board and involves a dispute between CSU, the largest public four-year university system in the U.S., and the CSU Employees Union. The union is seeking to unionize 1,400 resident assistants (RAs), the student workers who help manage housing at campuses across the system.
Judge Rohrbacher’s order, issued on Monday, struck the CSU legal brief, which misquoted a 1981 court decision. He noted that while there was no definitive proof that AI authored the document, its characteristics were consistent with the “hallucinations” often associated with AI-generated text. Misquotes and a lack of proper citation further undermined the integrity of the argument CSU presented.
CSU’s explanation for these errors was twofold: a “failure to double-check correct page numbering” and the erroneous use of quotation marks around paraphrased statements. However, a spokesperson for the university, Jason Maymon, later confirmed that AI tools were indeed employed during the brief’s creation. “The CSU is aware that a staff member used artificial intelligence, without conducting due diligence, to assist with creating a brief that resulted in errors undermining the integrity of their work,” Maymon stated. “This action does not align with the CSU’s ethical and responsible use of AI, and we are taking appropriate steps to address this matter.”
Implications for Unionization Efforts
This incident holds larger implications for the ongoing unionization push among resident assistants. Typically, RAs provide essential services—organizing social events and responding to emergencies—in exchange for benefits such as free housing and meal plans, but they do not receive salaries. CSU has opposed the unionization effort, contending that RAs are not employees but “live-in student leaders.”
Catherine Hutchinson, president of the CSU Employees Union, criticized the university’s use of AI in this context. “If students submit assignments with AI-generated half-truths and fabrications, they face consequences. And yet the CSU is doing exactly what we tell students not to do,” she remarked. Hutchinson emphasized the importance of recognizing the valuable contributions of RAs in their respective campus communities.
CSU’s Ambitious AI Agenda
Despite the controversy, CSU is actively pursuing a strategy to become a leader in integrating generative artificial intelligence in higher education. The university recently entered a $16.9 million deal with OpenAI for enterprise access to ChatGPT, aiming to be the “first and largest AI-empowered university system.” With plans to add AI-focused degree programs, CSU is also establishing a board that includes state officials and industry representatives from companies like Anthropic and Nvidia.
However, these initiatives have sparked concerns regarding the educational implications of AI, particularly how it affects student learning and faculty teaching methods. Critics have also raised alarms about data privacy and the financial burden of CSU’s AI investments amid ongoing budget cuts and layoffs.
CSU has published guidelines on the ethical use of AI, including cautionary notes that AI-generated content can be inaccurate or misleading. As Judge Rohrbacher highlighted in his order, the episode is a reminder that responsibility for vetting AI-generated content rests with the people who use it.
As universities navigate the integration of AI technologies, CSU’s experience underscores the need for diligence and ethical considerations when leveraging these powerful tools in academic settings.