The U.S. Court of Appeals for the Fifth Circuit is considering a new certification requirement about the use of generative artificial intelligence. While the proposed rule is thoughtful and well-intentioned, it is largely duplicative of existing rules and may invite satellite litigation that interferes with the legitimate use of AI to provide quality client service.
The proposed rule, Rule 32.3, adds a sentence to the certificate of compliance required for any Fifth Circuit filing. Counsel would represent that “no generative artificial intelligence program was used in drafting the document presented for filing, or to the extent such a program was used, all generated text, including all citations and legal analysis, has been reviewed for accuracy and approved by a human.” The rule continues by warning: “A material misrepresentation in the certificate of compliance may result in striking the document and sanctions against the person signing the document.”
The rule is motivated by generative AI’s alarming tendency to “hallucinate”—or, in other words, make stuff up. The best-known example is Mata v. Avianca, Inc., in which a judge in the Southern District of New York recently sanctioned a lawyer for filing a brief “replete with citations to non-existent cases.” ChatGPT gave the lawyer purported quotes from several fictitious cases about federal preemption, and the lawyer did not double-check the quotes or cases.
But precisely because citation to “fake law” is such a serious matter, court rules and state ethical standards already prohibit it. Rules such as Fed. R. Civ. P. 11 require counsel to avoid frivolous arguments and make only good-faith arguments for the extension of existing law. And in Texas, professional-conduct rules 5.01 and 5.03 reinforce the duty of competence owed to every client by requiring lawyers to review work created by others.
Mata also shows that current technology supports applying those existing rules to generative AI. Opposing counsel and the court readily detected the fake citations by using conventional research databases.
What, then, does this new rule add? It applies to documents where “generative [AI] was used in drafting,” and requires a certification about the “accuracy” of any “generated text, including … legal analysis.” It thus appears to reach more broadly than false case citations and quotes, to include text that inaccurately analyzes citations that are otherwise accurate.
But here again, current practice addresses this topic. Every case has a winner and a loser. If a court disagrees with a party’s arguments about the merits, that party loses. That happens daily in every court in the country, without any certifications by counsel about any components of the parties’ submissions (other than the baseline rules such as Fed. R. Civ. P. 11 that require counsel to proceed in good faith).
It seems, then, that the rule addresses a concern that the general certifications made by counsel about a filing are inadequate whenever generative AI is involved — in other words, that generative AI, in and of itself, is uniquely prone to inaccuracy and thus requires a special certification.
But that raises two difficult practical questions. First, it’s not clear when the line is crossed between “regular” and “generative” AI. (Is it crossed when Westlaw “generates” additional search terms based on what counsel first identified? When Bing “generates” a summary of search results about a piece of legislation?)
Second, it’s not clear when a lawyer “uses” generative AI. (Is it when she considers a computer’s proposed language during a search and rejects it? Or incorporates it in a draft but then writes over it in later edits, retaining only a few words?) Compounding the difficulty in defining that line, recent experience has shown that software claiming to “detect” the use of generative AI is notoriously inaccurate.
This lack of clarity is important. Modern law practice requires the use of artificial intelligence. Westlaw, Lexis and Google Scholar all use artificial intelligence to help answer research queries — including queries to verify citations. (And that’s a good thing. Manual cite-checking is inaccurate and expensive — so much so that nobody, including courts, has seriously used it for years.)
And the functionality of widely used programs is constantly changing, including the addition of new “generative” features. In this environment, even the most conscientious attorney will have trouble knowing for sure what software may use a “generative” feature and how the software may do so if it does.
This lack of clarity is particularly problematic for a rule that allows a sanction for its violation. Just a few months ago, in the Tennessee case of Jones v. Bain Capital Private Equity, one side objected that the other had misused a word-processor setting about “double-spacing.” That picayune dispute was ultimately resolved by the judge telling the parties to find something else to do. But that case is a reminder that whenever a rule exists, and proving its violation offers a potential tactical advantage (here, getting the opponent’s brief stricken), zealous advocates will pursue that advantage.
Bain Capital involved a concept — “double-spacing” — that anyone with a ruler or standard word processor can measure. Rule 32.3 invites far more arcane disputes. What happens when a party moves for sanctions, alleging that a program that (arguably) has generative AI capability was (arguably) used in drafting a document, and citing a report from AI-detection software with a sketchy track record?
In that situation, if the brief at issue cited a nonexistent case or advanced a wholly untenable reading of a real case, the lawyer responsible for it would be in trouble under longstanding rules and practice norms. Those standards have served the courts and litigants well for many years. An additional level of inquiry, into whether “generative AI” was “used” to prepare that brief, risks adding complexity without a corresponding benefit.
David Coale leads the appellate practice at Lynn Pinker Hurst & Schwegmann LLP. Tvisha Jindal was his research assistant for this article.