In Fletcher v. Experian Information Solutions, Inc., the U.S. Court of Appeals for the Fifth Circuit sanctioned a lawyer $2,500 for filing a reply brief with several “hallucinated” case citations and then providing evasive responses when the court asked her about those references. Chief Judge Jennifer Elrod wrote the opinion, published Feb. 18 and joined by Judges Jerry Smith and Cory Wilson.
The opinion provides a strong statement about the Fifth Circuit’s view of this highly relevant topic, as well as practical pointers to avoid this kind of problem going forward.
The underlying dispute was a claim for alleged violations of the Fair Credit Reporting Act against a lender and a credit reporting agency. The district court sanctioned plaintiff’s counsel for $33,000, concluding that he “had not done even a minimal investigation” before filing suit. Counsel appealed. The Fifth Circuit vacated the order and remanded for further proceedings in which counsel would have “a greater opportunity to defend his pre-suit investigation.”
So far, so good. But the reply brief filed on behalf of the attorney in the Fifth Circuit contained a number of misstatements, leading the court to suspect overreliance on generative AI. Accordingly, the court “issued a show-cause order, enumerating 16 issues of fabricated quotations and 5 additional serious misrepresentations of law or fact.”
The attorney who signed the reply brief, Hersh, responded that she “relied on publicly available versions of the cases, which [she] believed were accurate.” Unimpressed, the Fifth Circuit requested further information, which led to a grudging admission that she had used generative AI to “help organize and structure” her argument. Now even less impressed, the court sanctioned the attorney $2,500, citing the court’s inherent power and Fed. R. App. P. 46(c), which addresses “conduct unbecoming a member of the bar.”
Unfortunately, despite vast improvements in generative AI technology since the introduction of ChatGPT in late 2022, and despite widespread publicity of similar cases across the country, this sort of situation continues to arise. The opinion offers three practical suggestions going forward.
Don’t Eat Soup with a Fork.
The opinion acknowledges that the use of generative AI “can be helpful if done properly and carefully.” But using an off-the-shelf, general-purpose large language model such as ChatGPT for legal research is inviting trouble. Westlaw offers a high-quality research product that minimizes hallucinations by limiting the LLM’s focus to its database of case authorities, while providing hyperlinks to every case cited to make review easy. Other such products abound. A carpenter would not use a screwdriver instead of a hammer to drive a nail, and a conscientious lawyer likewise should not use the wrong tool for the task at hand.
Don’t Ignore Red Flags.
The Fifth Circuit’s show-cause order identified 21 material errors in the reply brief. Several of those errors involved repeated, erroneous citations to the same two cases. At a minimum, when you cite the same case multiple times in a filing, you should take particular care to review it. More broadly, when an LLM repeatedly cites the same case for similar-sounding propositions (here, about the procedural requirements for a sanctions award), that is a red flag for hallucination.
For whatever reason, the LLM has keyed on some combination of words in that opinion and is rearranging them in an attempt to provide a helpful response to the question it has been asked. If an LLM’s response to a query seems too good to be true (a case or two that are unusually helpful, say, or a “quote” that is amazingly on point), it probably is, in fact, too good to be true.
As the opinion reminds us, counsel should cite authority accurately at all times. But that does not mean saving all review for the end. If something produced by generative AI just looks wrong, immediate attention is required to keep a solvable problem from growing out of control.
Admit the Obvious.
The court said, “Had Hersh accepted responsibility and been more forthcoming, it is likely that the court would have imposed lesser sanctions. However, when confronted with a serious ethical misstep, Hersh misled, evaded, and violated her duties as an officer of the court.”
Mistakes happen. Generative AI is a constantly evolving technology, and lawyers operate under great time pressure. Unfortunately, this is not the only case in which a lawyer has prevaricated after an obvious technological mistake, and courts become very frustrated very quickly when they receive evasive answers to straightforward questions. By all means, try to avoid this kind of problem in the first instance. But if you learn that you have made a mistake, own up to it and do not add insult to injury.
Generative AI is an extraordinarily powerful tool that can produce massive client benefits. But like any other powerful tool, it can become destructive when misdirected. Fletcher is a strong reminder about the importance of using this technology correctly.
The case is Fletcher v. Experian Info. Solutions, No. 25-20086.
David Coale is an appellate partner at Lynn Pinker Hurst & Schwegmann.
