
New Orleans Attorneys Resign After AI-Generated Fake Citations in Court Filing

New Orleans attorneys resign after a federal judge imposed fines totaling $1,250 for using AI to create nine fictitious legal citations in a civil rights lawsuit.

A federal judge has sanctioned two attorneys from New Orleans’ law department for using artificial intelligence to produce false case citations in a court filing. The attorneys, Assistant City Attorney Jalen Harris and Deputy City Attorney James Roquemore, tendered their resignations following the findings of the investigation.

According to local news outlet WDSU, the sanctions were imposed by U.S. District Judge Carl Barbier after he uncovered nine fictitious case citations in a motion filed by the attorneys in January 2026. The motion was part of a lawsuit against the city of New Orleans, former Mayor LaToya Cantrell, the New Orleans Police Department, and several officers, in which the plaintiff alleges violations of his civil rights.

During a court hearing in March, Harris acknowledged that he had used ChatGPT for legal research and admitted he had failed to verify the authenticity of the AI-generated citations. He stated that he initially consulted Westlaw, an established online legal research platform, but turned to AI to expedite the process. Harris expressed remorse for his actions, apologizing multiple times in court.

Roquemore, who was responsible for reviewing the motion, also offered an apology. He had not questioned the unusual formatting of the citations, which should have raised red flags about their validity. The court noted that Roquemore bore greater responsibility because of his supervisory role over Harris.

In addition to their resignations, Judge Barbier imposed fines of $250 on Harris and $1,000 on Roquemore for their misconduct. This incident underscores the ethical concerns surrounding the use of AI in legal processes, particularly the necessity for thorough verification of information, regardless of the source.

The repercussions of this case extend beyond the individual attorneys, highlighting the growing scrutiny of AI’s role in legal practices. As the technology continues to advance, legal professionals are urged to remain vigilant about the accuracy of information generated by AI tools. The need for guidelines and best practices for integrating AI into legal work is becoming increasingly urgent as incidents like this raise questions about accountability and the reliability of AI-generated content.

Written By: AiPressa Staff

The AiPressa Staff team brings you comprehensive coverage of the artificial intelligence industry, including breaking news, research developments, business trends, and policy updates. Our mission is to keep you informed about the rapidly evolving world of AI technology.


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.