As artificial intelligence increasingly permeates the legal profession, concerns about its consequences are rising. Last year brought a significant uptick in court sanctions against lawyers who filed briefs containing erroneous AI-generated content. Notably, attorneys representing MyPillow CEO Mike Lindell were fined $3,000 each for submitting briefs with fictitious, AI-generated citations, highlighting the potential pitfalls of overreliance on these technologies.
Damien Charlotin, a researcher at HEC Paris, tracks such incidents globally and reports more than 1,200 cases involving sanctions for AI errors, approximately 800 of them originating in U.S. courts. The penalties are escalating: a recent federal court ruling in Oregon imposed a staggering $109,700 in sanctions for an attorney's misuse of AI-generated information.
The issue isn’t confined to lower courts; state supreme courts are also grappling with these challenges. In February, Nebraska’s high court scrutinized attorney Greg Lake for submitting a brief with citations of non-existent cases. Lake attributed the errors to a malfunctioning computer that uploaded a working draft, denying any use of AI. The justices remained unconvinced and referred him for disciplinary action. A similar scenario unfolded in March within the Georgia Supreme Court.
Carla Wale, associate dean of information and technology and director of the Gallagher Law Library at the University of Washington School of Law, expressed astonishment that such mistakes continue to occur despite media coverage. She is developing AI ethics training for law students to address these challenges, noting that the ethical rules governing AI in law are not yet fully established.
"I don't think there is a consensus beyond, 'You have to make sure it's correct,'" Wale said, emphasizing that lawyers are ultimately responsible for the accuracy of their filings regardless of how the information is generated. In her view, attorneys must thoroughly verify any case an AI tool supplies before citing it.
Some jurisdictions have implemented comprehensive ethics rules that require lawyers to label AI-generated materials. This aims to facilitate the identification of documents that need closer scrutiny for inaccuracies, drawing a clear distinction between human-generated and AI-generated content. However, Joe Patrice, senior editor at Above the Law, is skeptical about the effectiveness of such labeling rules. He believes that as AI becomes deeply integrated into legal practice, maintaining compliance with these rules could become impractical.
Patrice highlighted that while AI tools can significantly aid in processing large volumes of evidence and case law, he has reservations about the emerging “agentic” systems that propose to execute entire legal tasks autonomously. “Once you obscure those middle steps, that’s where mistakes happen,” he cautioned, indicating that even diligent practitioners can overlook crucial details when they are not involved in every step of the process.
The rapid integration of AI into legal workflows raises questions about the traditional law firm business model, particularly regarding billable hours. Patrice suggested that lawyers may need to adapt their billing practices to accommodate AI’s efficiency, potentially increasing time pressure on attorneys and tempting them to accept initial AI outputs without thorough review.
“Do you slow yourself down to have that natural thinking time?” Patrice posed, reflecting on the challenges upcoming generations of lawyers may face in maintaining critical thinking skills. Wale echoed these concerns, asserting that future lawyers who effectively and ethically leverage generative AI will replace those who do not. “That’s what I think the future is,” she stated, highlighting the evolving landscape of legal practice.
In the midst of these developments, AI itself is facing legal scrutiny. In March, OpenAI, the creator of ChatGPT, was sued by Nippon Life Insurance Company of America in a federal court in Illinois. The lawsuit claims that the company provided negligent legal advice to a woman, leading to frivolous legal actions against the insurer. OpenAI has responded to the lawsuit by asserting that the complaint is without merit.