A federal judge has sanctioned two attorneys from New Orleans’ law department for using artificial intelligence to produce false case citations in a court filing. The attorneys, Assistant City Attorney Jalen Harris and Deputy City Attorney James Roquemore, resigned following the investigation’s findings.
According to local news outlet WDSU, the sanctions were imposed by U.S. District Judge Carl Barbier after he uncovered nine fictitious case citations in a motion the attorneys filed in January 2026. The motion was part of a lawsuit against the city of New Orleans, former Mayor LaToya Cantrell, the New Orleans Police Department, and several officers, in which the plaintiff alleges violations of his civil rights.
During a court hearing in March, Harris acknowledged that he had used ChatGPT for legal research and admitted that he had failed to verify the authenticity of the AI-generated citations. He stated that he initially consulted Westlaw, an established online legal research platform, but turned to AI to expedite the process. Harris expressed remorse for his actions, apologizing multiple times in court.
Roquemore, who was responsible for reviewing the motion, also offered an apology. He had failed to question the unusual formatting of the citations, which should have raised red flags about their validity. The court noted that Roquemore bore greater responsibility because of his supervisory role over Harris.
In addition to their resignations, Judge Barbier imposed fines of $250 on Harris and $1,000 on Roquemore for their misconduct. This incident underscores the ethical concerns surrounding the use of AI in legal processes, particularly the necessity for thorough verification of information, regardless of the source.
The repercussions of this case extend beyond the individual attorneys, highlighting the growing scrutiny of AI’s role in legal practices. As the technology continues to advance, legal professionals are urged to remain vigilant about the accuracy of information generated by AI tools. The need for guidelines and best practices for integrating AI into legal work is becoming increasingly urgent as incidents like this raise questions about accountability and the reliability of AI-generated content.
See also
AI Technology Enhances Road Safety in U.S. Cities
China Enforces New Rules Mandating Labeling of AI-Generated Content Starting Next Year
AI-Generated Video of Indian Army Official Criticizing Modi’s Policies Debunked as Fake
JobSphere Launches AI Career Assistant, Reducing Costs by 89% with Multilingual Support
Australia Mandates AI Training for 185,000 Public Servants to Enhance Service Delivery