In a revealing incident last spring, Illinois county judge Jeffrey Goffinet discovered a legal brief in his courtroom that cited a case that did not exist. Goffinet, an associate judge in Williamson County, first searched two legal research systems and then visited the courthouse library—a place he hadn’t frequented in years—to consult the book that supposedly listed the nonexistent case. The fabricated case, generated by artificial intelligence, came to light just months after the Illinois Supreme Court implemented a policy governing AI use in the courts.
Goffinet, who co-chaired the task force responsible for the policy, emphasized the need for coexistence with AI technology. “People are going to use [AI], and the courts are not going to be able to be a dam across a river that’s already flowing at flood capacity,” he remarked. As the prevalence of false quotes, fake court cases, and erroneous information generated by AI increases, state bar associations and national law organizations are formulating guidelines to address these challenges in the legal sector.
AI-generated misinformation can affect a wide range of legal matters, from divorce cases to discrimination lawsuits, potentially leading to evidence being dismissed or motions being denied. While some jurisdictions encourage legal professionals to follow existing guidance on accuracy and transparency, recent policies specifically address AI-related concerns about confidentiality, competency, and costs. Notably, many guidelines urge attorneys to educate themselves about the AI tools they use and to prefer proprietary systems that secure sensitive data, given the risks associated with open platforms.
Ohio has taken a more stringent approach, prohibiting the use of AI for certain legal tasks, such as translating legal documents that could influence case outcomes. Other states have recommended adherence to the American Bar Association’s formal opinions on ethical AI use in law, underscoring the importance of accountability in legal practices.
While AI can assist attorneys by automating administrative tasks, analyzing contracts, and organizing documents, its misuse has led to fines and license suspensions for legal professionals who submitted documents containing fabricated information. Rabihah Butler, of the Thomson Reuters Institute, expressed concern that many legal professionals might overlook instances where AI systems produce false information, a phenomenon known as “hallucination.” “AI has such confidence, and it can appear so polished,” Butler noted, adding that due diligence is critical to avoid treating AI hallucinations as factual statements.
According to a database maintained by Damien Charlotin, a senior research fellow at HEC Paris, there have been 518 documented instances since the beginning of 2025 in which generative AI produced misleading content used in U.S. courts. Charlotin remarked that the institutional response to these issues remains limited, as many legal entities grapple with how to manage the implications of AI errors in the courtroom.
As of January 23, at least 10 states and the District of Columbia have issued formal guidance on AI usage by legal professionals, often in the form of ethics opinions. For instance, the Professional Ethics Committee for the State Bar of Texas advised that lawyers should possess a basic understanding of generative AI tools and verify any content produced by AI before use. Brad Johnson, executive director of the Texas Center for Legal Ethics, highlighted the necessity for attorneys to evaluate their competency with AI tools to mitigate associated risks effectively.
States including Arizona, California, and New York have developed policies to govern AI use among legal professionals. In Illinois, for example, lawyers are permitted to utilize AI without mandatory disclosure, although judges retain ultimate responsibility for their decisions. Goffinet emphasized the importance of human oversight, stating, “We cannot abdicate our humanity in favor of an AI-generated decision or opinion.”
Legislative efforts are also underway to ensure responsible AI use in legal contexts. In Louisiana, a law was enacted requiring attorneys to exercise “reasonable diligence” in verifying the authenticity of evidence, including AI-generated content. Similarly, California’s proposed legislation mandates that attorneys take precautions to safeguard confidential information and verify the accuracy of generative AI material.
Education on AI’s potential and risks is crucial, according to legal experts. Michael Hensley, a counsel at FBT Gibbons, advocated for training sessions on AI in law schools and state bar associations to ensure that upcoming attorneys are equipped to navigate the complexities of AI technology. A survey conducted by Bloomberg Law indicated that more than half of law firms had invested in generative AI tools, with many attorneys employing AI for tasks such as legal research and document drafting. Despite growing comfort with AI, concerns regarding reliability, ethical dilemmas, and data privacy continue to deter some legal professionals from embracing the technology fully.
As courts remain cautious about AI’s influence, particularly regarding evidence integrity, the importance of education and procedural safeguards becomes increasingly clear. Diane Robinson, a principal court research associate at the National Center for State Courts, noted that while AI offers the potential for improved case processing, the challenge of managing altered evidence and AI-generated inaccuracies persists. “Fake evidence is nothing new,” Robinson remarked, reflecting on the historical challenges courts have faced with evolving technology. Moving forward, establishing robust processes for awareness and training will be imperative to mitigate the risks associated with AI in the legal field.