Not long after a New York attorney was discovered to have filed ChatGPT-generated briefing in federal court that included fictitious citations and judicial decisions fabricated out of whole cloth, U.S. District Judge Brantley Starr this week issued a standing order that aims to prevent the same thing from happening in his courtroom.
The move has been praised by lawyers who spoke to The Lawbook as a necessary reminder of a lawyer’s professional obligations, particularly in the absence of any statewide or nationwide guidance on the use of artificial intelligence by legal professionals.
“AI is coming and we need to start having some guardrails and processes around it,” said Todd Mensing, partner at Ahmad Zavitsanos & Mensing in Houston. “I think this order does a good job of accounting for the fact that it’s a reality, while ensuring AI products have sufficient human oversight. AI doesn’t have ethics, people do. That’s why we need controls.”
It’s an issue that the Dallas Bar Association is closely watching, said President Cheryl Camin Murray.
“We don’t have a formal policy on AI or the use of AI, but we’re monitoring it and how it may impact our profession and members and are aware of the concerns with how it may have an impact on the professional licensure of attorneys and how they practice,” she said.
Murray, who is also a partner at Katten Muchin Rosenman and serves on the firm’s AI committee, said she’s also been monitoring the technology in her private practice.
“Law firms are putting together AI practice groups and committees to help clients in different industry and practice areas stay up to date on forthcoming statutory and regulatory requirements on a state and federal basis,” she said. “It’s a very hot topic, and I know firms across the board are focused on this and are trying to keep up with this.”
Judge Starr’s order requires every attorney appearing before him to file a document attesting either that no portion of any filing has been generated by artificial intelligence or that anything drafted by AI has been checked for accuracy “by a human being.”
“These platforms are incredibly powerful and have many uses in the law: form divorces, discovery requests, suggested errors in documents, anticipated questions at oral argument. But legal briefing is not one of them,” Judge Starr wrote. “Here’s why. These platforms in their current states are prone to hallucinations and bias. On hallucinations, they make stuff up — even quotes and citations. Another issue is reliability or bias. While attorneys swear an oath to set aside their personal prejudices, biases, and beliefs to faithfully uphold the law and represent their clients, generative artificial intelligence is the product of programming devised by humans who did not have to swear such an oath. As such, these systems hold no allegiance to any client, the rule of law, or the laws and Constitution of the United States (or, as addressed above, the truth). Unbound by any sense of duty, honor, or justice, such programs act according to computer code rather than conviction, based on programming rather than principle.”
Judge Starr also posted on his website a link to a template document that would satisfy the requirements of the standing order.
“Accordingly, the Court will strike any filing from an attorney who fails to file a certificate on the docket attesting that the attorney has read the Court’s judge-specific requirements and understands that he or she will be held responsible under Rule 11 for the contents of any filing that he or she signs and submits to the Court, regardless of whether generative artificial intelligence drafted any portion of that filing,” Judge Starr wrote.
Federal Rule of Civil Procedure 11 — which details requirements for making representations to the court and sanctions that could arise for violating the rule — already covers the kind of conduct that allegedly took place in the New York case, said David Coale, partner at Lynn Pinker Hurst & Schwegmann in Dallas.
“But this is a whole new world. This is very different from the type of things we typically worried about,” he said. “It would be a good idea for Rule 11 to expressly deal with the use of AI. I think Judge Starr’s order essentially does that.”
Coale said he hopes nationwide and statewide guidance will come soon.
“It’s not mission critical, but it’s pervasive. It’s a national issue. It needs a national standard and not just this judge here and that judge there,” he said.
Still, he has found some use for ChatGPT in his legal work, he said, recounting how a law partner asked the platform to come up with “folksy” examples that would help explain to a mock jury the concept of a put option, which gives someone the right to sell stock at a specific price.
“ChatGPT came up with four or five examples, analogies, that he would have spent an hour brainstorming, and it took seconds,” Coale said. “It’s helpful in brainstorming … but as soon as you get into comparing X case to Y case, it just can’t handle that level of detail yet.”
On the other end of that spectrum, Dallas appellate lawyer Chad Ruback hasn’t experimented with ChatGPT or other AI platforms. News of what happened in the New York lawsuit — a personal injury lawsuit brought by Roberto Mata against Avianca Airlines — was the first he had heard of ChatGPT’s propensity to “hallucinate,” he said.
“I hope other lawyers will learn from this lawyer’s mistake, and Judge Starr’s order is yet one more reminder of how important it is for a lawyer to check his or her own work before filing,” he said. “Judge Starr’s order confirming compliance might be enough to get a lawyer, who might otherwise be cutting some corners, to pause and think about whether that corner should be cut.”
The attorneys involved in the New York case — Steven Schwartz, Peter LoDuca and their law firm Levidow, Levidow & Oberman — are set to appear before U.S. District Judge P. Kevin Castel for a show cause hearing June 8.
The case number is 1:22-cv-01461.