
GenAI hallucinations are still pervasive in legal filings, but better lawyering is the cure

Zach Warren  Senior Manager / Legal Enterprise Content / Thomson Reuters Institute

· 8 minute read


Generative AI hallucinations continue to plague unwary attorneys and pro se litigants, recent research finds, emphasizing the ongoing need for careful verification of case citations

Key insights:

      • Hallucinatory case citations a concern — Hallucinations continue to be an issue for attorneys, as courts across the U.S. have sanctioned attorneys and pro se litigants for submitting filings with AI-generated, non-existent case citations.

      • Attorneys still responsible — While these hallucinations may occur as a result of improper AI usage, it is the duty of the attorney to check all facts and citations before submitting a document to court, just as it has always been.

      • Ethical guidelines and accuracy checks needed — As GenAI remains central to legal workflows, lawyers must integrate AI responsibly by following ethical guidelines and always checking the accuracy of AI-generated content before submission.


As the legal world marches toward three years with generative AI (GenAI) in the public sphere, one key risk has risen above all others: hallucinations. These hallucinations are false “facts” generated by GenAI systems and can occur due to a number of issues, including incomplete or inaccurate data sets, confusing or misworded prompts, or answers that are irrelevant to a given question.

It’s clear, however, that no matter a hallucination’s provenance, the possibility of a false fact has slowed GenAI’s growth among legal professionals. Among respondents to the 2025 Generative AI in Professional Services Report who said they felt GenAI should not be a part of their daily work, 40% cited accuracy and reliability as their primary concern — nearly double any other major concern, including a lack of human touch (22%), generality of outputs (19%), or biased data (12%).

Perhaps this should not be a surprise, given continued press coverage of hallucinations. Even with accuracy top of mind, AI misuse is continuing with regularity in courts across the United States. In fact, a recent study of cases nationwide for the month of July found numerous false case citations, leading in many instances to attorney sanctions or discipline.

These AI errors and related sanctions are easily avoidable, however. It just takes awareness of how GenAI tools operate, and — as lawyers have had to do for years with non-AI generated research and briefs — a commitment to verifying any material before it is submitted to the court.

Hallucinations abound

According to a study conducted through Thomson Reuters Westlaw of cases filed between June 30 and August 1, hallucinations and citations of non-existent legal cases continue to be pervasive across courts. The search identified 22 different cases in which courts or opposing parties flagged non-existent citations within filings, leading to discipline motions or sanctions in many instances.

Notably, although much of the discussion around AI in law has tended to be around large-scale litigation or complex corporate law, many of the AI errors came from local disputes. These include a fight between a family and a local school board (Powhatan County School Board v. Skinger, U.S. District Court for the Eastern District of Virginia), a divorce case (In re Marriage of Haibt, Colorado Court of Appeals), and a Chapter 13 bankruptcy case (In re Martin, U.S. Bankruptcy Court for the Northern District of Illinois). As the research makes clear, hallucinations are prevalent in all areas of law, which may not be a surprise given how pervasive ChatGPT and other public GenAI tools have become.

These cases also show that hallucinations could be a particular stumbling block for pro se litigants who may be looking to public GenAI tools as an easy lawyering fix. In Powhatan County School Board, a pro se defendant was found to have submitted pleadings “laden with more than three dozen (42 to be exact) of citations to nonexistent legal authorities that made it exceedingly difficult, and often impossible, to make sense of the contentions made therein, to assess the purported ‘support’ for them, and properly to address them.”

Given the defendant’s pro se status, the court originally offered the defendant the chance to fix the filing, but after the defendant doubled down and tried to claim the court “wrongly assumed” the citations were AI generated, the court dismissed the defendant’s motion to strike the original opinion from the record.


These AI errors… are easily avoidable, however. It just takes awareness of how GenAI tools operate, and — as lawyers have had to do for years with non-AI generated research and briefs — a commitment to verifying any material before it is submitted to the court.


“The fact that her citations to nonexistent legal authority are so pervasive, in volume and in location throughout her filings, can lead to only one plausible conclusion: that an AI program hallucinated them in an effort to meet whatever [the defendant’s] desired outcome was based on the prompt that she put into the AI program,” the court wrote in denying the defendant’s motion.

That does not mean that professional lawyers are using GenAI perfectly, however. One immigration case out of the U.S. District Court in New Mexico, Deghani v. Castro, illustrates what can happen when attorneys do not understand the technology and its potential for misuse. In this case, the plaintiff’s attorney, Felipe Millan, contracted with a freelance attorney to conduct research; according to Millan, the freelancer returned a brief containing several hallucinated cases, which Millan did not check before filing. The court referred Millan to the state bar for sanctions, but he filed a motion to stay, arguing that the punishment did not fit the crime.

The court did not buy that argument, however, upholding the sanctions ruling. “Mr. Millan’s primary grievance is that [the Judge] did not appropriately weigh his good intentions. He emphasizes that he himself did not invent the citations, did not expect the contracted attorney to do so, and has been candid and remorseful regarding the mistake,” the court wrote.

“But, as discussed above, the standard under Rule 11 is one of objective reasonableness — the imposition of sanctions does not require a finding of subjective bad faith by the offending attorney. An attorney who acts with ‘an empty head and a pure heart’ is nonetheless responsible for the consequences of his actions.”

Check and check again

Indeed, all of these issues have one key factor in common — the lawyers in question did not check their sources. Millan may have been remorseful and attempted to correct the mistake, but a mistake was made nonetheless. And this can even happen among knowledgeable attorneys when time and deadlines get in the way.

In another case, Kaur v. Desso from the U.S. District Court for the Northern District of New York, the court explicitly found that the plaintiff’s attorney “admits that he was aware at the time that AI tools are known to ‘hallucinate’ or fabricate legal citations and quotations,” but he felt pressured to rush the pleading due to an imminent deportation. Nevertheless, the court imposed a $1,000 fine and mandated CLE training on AI for the attorney, holding that the duty to check whether AI-generated assertions and quotations are accurate outweighs any deadline pressure.


Although much of the discussion around AI in law has tended to be around large-scale litigation or complex corporate law, many of the AI errors came from local disputes.


Some lawyers have expressed a reluctance to use GenAI tools unless they are 100% accurate. And to be sure, some tools are more accurate than others, especially those with access to more robust and trusted data sets. However, given that the technology works by predicting the next word in a sequence, including when it generates legal citations, by definition no GenAI tool will be accurate 100% of the time.
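To see why, consider a stripped-down sketch of next-word prediction (the probabilities and words below are hypothetical illustrations, not any vendor’s actual model): the system scores candidate continuations and samples one, and nothing in that step verifies whether the resulting text, such as a case citation, corresponds to anything real.

```python
import random

# Minimal sketch of next-word prediction (hypothetical probabilities, not a real model):
# the model assigns probabilities to candidate next words and samples one.
# Nothing in this step checks whether the generated text is true.
next_word_probabilities = {
    "v.": 0.55,      # plausible continuation of "Smith ..."
    "Jones,": 0.25,  # fluent, but may end up naming a case that never existed
    "held": 0.20,
}

def sample_next_word(probs: dict[str, float]) -> str:
    """Pick the next word in proportion to its probability."""
    words = list(probs)
    weights = [probs[w] for w in words]
    return random.choices(words, weights=weights, k=1)[0]

# Each run can produce a different, equally fluent continuation --
# which is why every AI-generated citation still has to be verified by a human.
print("Smith", sample_next_word(next_word_probabilities))
```

Run repeatedly, the snippet produces different but equally confident-looking continuations, which is exactly why human verification remains the backstop.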

At the same time, however, no associate or partner will be 100% accurate either. The key to preventing errors has always been human judgment and checking research results before any brief or document is submitted. This does not change whether research is done manually with books, electronically with legacy research systems, or with modern research systems that use AI.

This is true not only from an operational lens but from an ethical one as well. A number of state bars, as well as the American Bar Association (via Formal Opinion 512), have issued guidance around the proper use of GenAI by attorneys. However, much of this guidance simply reframes pre-existing ethical rules for an AI world. The need for competent representation (Model Rule 1.1), to communicate with clients (Rule 1.4), to keep information confidential (Rule 1.6), and more has not changed. Lawyers need to understand how AI fits into the framework that has always been a guiding light for proper lawyering.

More than 90% of legal professionals say they believe AI will be central to their workflow within the next five years, according to the GenAI Report, so the fear of hallucinations is unlikely to disappear any time soon. That means attorneys need to adapt to an AI future by figuring out how AI fits into their pre-existing research workflows and by preventing hallucinations from making their way into briefs or documents.


Register now for The Emerging Technology and Generative AI Forum, a cutting-edge conference that will explore the latest advancements in GenAI and their potential to revolutionize legal and tax practices.
