Competent Testifying Expert Supervision Is Required in the Age of AI

November 10, 2025 George M. Padis

After the 2022 public release of ChatGPT dramatically and rapidly expanded attention on and use of artificial intelligence tools and large language models across nearly all industries, courts and legal observers have rightly zeroed in on lawyers improperly or carelessly using AI tools to draft briefs containing hallucinated case citations. For example, a recent amendment to the Local Civil Rules for the U.S. District Court for the Northern District of Texas, Local Civil Rule 7.2(f), requires that a legal “brief prepared using generative artificial intelligence must disclose this fact on the first page” of the brief.

But a perhaps even greater risk is that presented by retained testifying experts using AI or LLMs to draft expert reports, in that expert AI misuse may be harder for counsel to detect and police. And courts have (at least until recently) focused principally on generative AI used by attorneys for legal citations but not on expert reports. The Northern District of Texas’s local rule mentioned above, for example, mandates disclosure for legal briefs but is silent as to expert reports.

Of course, litigation counsel can and should monitor their own use of AI tools and readily recognize a case citation that’s unusual or “too good to be true” — as part of the attorney’s or paralegal’s review and verification of each legal citation. But it is much more difficult for a trained lawyer to recognize AI-generated hallucinations when AI tools are placed in the hands of outside experts testifying on specialized non-legal subjects. 

Consider Principia Structura: A Comprehensive Treatise on the Harmonization of Load-Bearing Systems in Civil Infrastructures, by Dr. Harold W. Brensworth, P.E., F.R.C.S.E., reissued 1984 by the Midwestern Institute of Structural Harmonization. I asked ChatGPT to hallucinate this treatise, which sure sounds authoritative and appears legitimate (at least to me). When buried in a footnote or endnote or appearing after a sentence in the middle of a dense paragraph, one could hardly blame an attorney — busily juggling various tasks with an expert-disclosure deadline fast approaching — for glancing over a citation referencing Dr. Brensworth’s (fake) tome without recognizing the source is entirely fabricated.

The fallout can be severe: disqualification of counsel, prejudice to the client’s case and ethical and monetary sanctions. In a recent False Claims Act matter, an expert used AI tools to fabricate (among other things) sworn testimony of a federal agency, prompting a motion that may lead to attorney disqualification and even dismissal of the entire case. The pending motion highlights an important issue for litigators to consider in engaging experts.

This article summarizes the pending motion and suggests potential solutions and best practices to keep this issue from biting counsel and clients and to protect them in the event an expert goes rogue with AI.

The case of the hallucinating expert: United States ex rel. Khoury v. Mountain West Anesthesia, LLC

In Khoury, the relator filed a qui tam FCA case alleging anesthesiologists improperly used personal electronic devices — i.e., being on their phones — during surgical procedures reimbursed by Medicare. The United States declined to intervene, and the qui tam civil action proceeded to discovery. In a deposition, a designated representative of the Centers for Medicare & Medicaid Services testified that an anesthesiologist’s PED use does not affect Medicare payment determinations — a key issue for an FCA claim premised on allegedly false claims submitted to CMS for reimbursement under Medicare. After fact discovery, in an apparent effort to salvage the relator’s claims, the relator designated “an expert on issues relating to pertinent governmental standards and practices.”

Unfortunately for the relator, the defendants uncovered that the expert had used ChatGPT to draft portions of the report and that, as a result, the report:

  • Fabricated CMS deposition testimony central to the report’s payment-policy analysis;
  • Invented quotations from Medicare and Medicaid program manuals, including a nonexistent “Nevada Medicare Provider Manual,” and miscited regulations;
  • Misquoted and mischaracterized industry publications, including retitling the American Society of Anesthesiologists Statement on Distractions as a joint ASA/Anesthesia Patient Safety Foundation statement and fabricating a quote; and
  • Included other superficial, repetitive content consistent with AI generation.

At his deposition, the expert initially denied but then admitted using AI. He further admitted he had not preserved the prompts or AI outputs.

Defendants move for sanctions, attaching a declaration from a Boston University law professor that helpfully discusses ethical and appropriate uses of AI

On Aug. 13, the defendants moved for sanctions including exclusion of the expert’s testimony, attorney’s fees and costs, disqualification of relator’s counsel and, critically, disqualification of the relator, which would effectively end the case. In their motion, the defendants summarized the ever-growing number of recent cases involving AI fabrications and hallucinations, many of which have held that monetary sanctions are insufficient to deter such conduct in legal proceedings. One such court observed, “if fines and public embarrassment were effective deterrents, there would not be so many cases to cite.” The defendants noted that the consequences had spilled over into another of the relator’s expert reports, which had in turn cited the problematic report. Defendants emphasized that the relator stands in the government’s shoes, such that the relator’s expert’s fabrication of CMS testimony in an FCA case undermined the United States’ interests. The defendants invoked Rules of Professional Conduct 3.3(a)(1) and (b) (duty of candor and prohibition on offering false testimony), 3.4(b) (assisting false testimony) and 8.4(c) and (d) (dishonesty and conduct prejudicial to the administration of justice), as well as the court’s inherent authority to sanction bad faith or reckless litigation conduct.

Professor Nancy J. Moore, an ethics scholar, analyzed the conduct of relator’s counsel under the rules of professional conduct and the court’s inherent authority. She concluded that relator’s counsel breached their duties of candor to the tribunal and fairness to opposing parties by submitting an expert report containing obvious fabrications, including invented CMS testimony and a fictitious ASA/APSF joint statement. Moore explained that, under the rules, knowledge can be inferred from circumstances and that willful blindness — where a lawyer is aware of a high probability of falsity but fails to investigate — can be treated as knowledge. She identified numerous red flags that should have alerted counsel to the fabrications, such as uncited or invented sources and formatting anomalies typical of AI-generated text. Moore also found that counsel’s subsequent misstatements to the court, efforts to minimize the misconduct and shifting of blame to others supported a finding of bad faith or at least reckless disregard. She opined that, in such cases, courts should focus on deterrence and the integrity of the judicial process and that disqualification is an appropriate sanction where lesser remedies have proven inadequate.

Notably, Moore does not suggest that AI use is per se improper. But her analysis implies that if AI is used, it must be subject to rigorous verification, robust human oversight, full transparency with the court and opposing parties, and supervision to ensure that no false or fabricated information is presented to the court. She emphasizes that attorneys are responsible for reviewing and verifying expert reports for accuracy and completeness under their ethical and procedural obligations, even if they were not directly informed that the expert had used AI tools in generating the report.

Key takeaways and practical problems illustrated by the Khoury case

AI misuse is not confined to lawyer-drafted filings. Unvetted expert use of AI is emerging and raises acute risks, because the expert-disclosure rules of civil procedure and expert-supervision duties rest on the party and counsel, not on the expert. Courts have expressed increasing willingness to impose serious remedies including disqualification of counsel, fee shifting, referrals to the applicable state’s bar and potential case-dispositive consequences — especially where counsel delays remediation, minimizes the misconduct or shades the record.

In addition to ethical considerations and procedural rules for the admissibility of expert testimony, there is a practical trial problem for experts who rely on AI: cross-examination. Imagine opposing counsel eliciting, either in deposition or — worse — at trial, that an expert relied on previously undisclosed AI tools for his or her opinions, including hallucinated facts or authorities. The results for your client’s case could be devastating. The expert would probably lose all credibility with the judge or jury on a key issue. That loss of credibility would inevitably rub off on the attorney and potentially also the client. In short, this error could sink an entire case.

What lawyers should do now: contractual controls, supervision and disclosure

In engaging testifying experts, litigation counsel should give serious consideration to controlling, limiting or perhaps prohibiting the use of generative AI or LLMs by experts in drafting, research, summarization or citation unless counsel grants prior written approval specifying the tool, scope and safeguards. For instance, an engagement letter with a testifying expert could prohibit all use of generative AI or LLMs or, by contrast, approve use only with:

  • Identified tools, version and vendor (preferring enterprise tools with contractual confidentiality);
  • Permissible tasks (e.g., brainstorming keyword searches as opposed to drafting text);
  • Prohibition on inputting any client confidential information, PHI, PII or non-public discovery into public tools; and
  • Mandatory human verification of all citations, quotations, transcripts, regulations and titles (a rough verification aid is sketched below).
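
To make that verification step concrete, here is a minimal Python sketch, offered purely as an illustration and not drawn from the Khoury filings or any particular vendor’s product. It checks whether quoted passages in a draft report appear verbatim in a folder of plain-text exports of the cited sources; the file and folder names are hypothetical assumptions.

    # Illustrative verification aid (hypothetical file names): flags quoted
    # passages in a draft report that cannot be located verbatim in a folder
    # of plain-text exports of the cited sources.
    import re
    from pathlib import Path

    REPORT = Path("expert_report_draft.txt")   # hypothetical plain-text export of the draft
    SOURCES_DIR = Path("cited_sources")        # hypothetical folder of .txt source exports

    def normalize(text: str) -> str:
        """Collapse whitespace and standardize quote marks so formatting differences don't hide matches."""
        text = text.replace("\u201c", '"').replace("\u201d", '"').replace("\u2019", "'")
        return re.sub(r"\s+", " ", text).strip().lower()

    def main() -> None:
        report_text = normalize(REPORT.read_text(encoding="utf-8", errors="ignore"))
        # Pull passages between double quotation marks long enough to be meaningful;
        # very short phrases produce too many false positives.
        quotes = re.findall(r'"([^"]{40,})"', report_text)
        corpus = [
            (path.name, normalize(path.read_text(encoding="utf-8", errors="ignore")))
            for path in SOURCES_DIR.glob("*.txt")
        ]
        for quote in quotes:
            hits = [name for name, text in corpus if quote in text]
            status = "found in " + ", ".join(hits) if hits else "NOT FOUND - verify manually"
            print(f'[{status}] "{quote[:60]}..."')

    if __name__ == "__main__":
        main()

A match only shows that the quoted words appear somewhere in the exported sources; it does not confirm the pinpoint citation, page or context, so the human verification the engagement letter requires remains the actual control.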

If AI is used, litigation counsel could further require the expert to preserve and produce upon request all AI “artifacts,” including prompts, settings, outputs, intermediate chats and files, as “facts or data considered” under Rule 26(a)(2). An engagement letter could require certifications that:

  • No AI or LLM was used unless expressly approved in writing;
  • For any approved use, the expert identifies each AI tool and the documents and data provided to, or generated by, the tool;
  • The expert or staff verified every quotation, citation and factual assertion to the original source and verified each cited authority; and
  • All AI artifacts, drafts and research materials have been preserved.

Use of public tools such as ChatGPT may implicate security and confidentiality concerns. Thus, an engagement agreement could prohibit uploading of protected materials to public tools, ban sharing of client data with model trainers or restrict AI-approved uses to enterprise-grade solutions with contractual confidentiality, access controls and logs.

In-house attorneys who supervise litigation may also wish to insist on such terms being made part of engagement agreements with testifying experts.

Even in the best-case scenario with an engagement letter specifying prohibited and appropriate uses for AI and LLMs, litigation counsel must remain vigilant in the AI age. For example, litigation counsel should review drafts for red flags typical of AI use (a rough screening sketch follows this list), such as:

  • Overconfidence without evidence, such as the use of the terms “clearly,” “fundamentally” or “it is well established that” when the attorney knows or suspects the claim is actually obscure or debatable;
  • A “too good to be true” theory or proposition that you’ve not previously encountered;
  • Mistitled documents;
  • Citation-free assertions and encyclopedic tone or style (i.e., reads like a Wikipedia article without the “citation needed” flags);
  • Repetitive text and sentence structures;
  • Unusual jargon;
  • Inconsistent nomenclature uses (e.g., switches from petitioner to plaintiff or respondent to defendant incorrectly or without explanation);
  • Overuse of em dashes and bullet points (something I’ve noticed, perhaps because I too like to use lots of em dashes and bullet points); and
  • Inaccurate characterization or citation to transcripts or legal authorities.
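
None of these markers proves AI drafting, but several can be screened for mechanically before the human read-through. The Python sketch below is illustrative only; it assumes a plain-text export of the draft under a hypothetical file name and simply flags overconfident phrasing, em-dash density, mixed party nomenclature and verbatim repeated sentences for a closer look.

    # Illustrative pre-review screen (hypothetical file name): surfaces a few of
    # the stylistic red flags listed above for a human reviewer. It cannot detect
    # fabrication and proves nothing on its own.
    import re
    from collections import Counter
    from pathlib import Path

    DRAFT = Path("expert_report_draft.txt")  # hypothetical plain-text export of the draft report

    # Phrases from the red-flag list that signal overconfidence without evidence.
    OVERCONFIDENT = ["clearly", "fundamentally", "well established that"]
    # Nomenclature pairs that should not normally be mixed within one report.
    MIXED_TERMS = [("petitioner", "plaintiff"), ("respondent", "defendant")]

    def main() -> None:
        text = DRAFT.read_text(encoding="utf-8", errors="ignore")
        lower = text.lower()

        # Overconfident, citation-free phrasing
        for phrase in OVERCONFIDENT:
            count = lower.count(phrase)
            if count:
                print(f"Overconfidence marker '{phrase}': {count} occurrence(s)")

        # Em-dash density: a high count is a hint, not proof, of machine drafting
        print(f"Em dashes: {text.count(chr(0x2014))}")

        # Inconsistent party nomenclature
        for first, second in MIXED_TERMS:
            if first in lower and second in lower:
                print(f"Mixed nomenclature: both '{first}' and '{second}' appear")

        # Verbatim repeated sentences often accompany generated, padded text
        sentences = [s.strip().lower() for s in re.split(r"(?<=[.!?])\s+", text) if len(s.strip()) > 40]
        for sentence, count in Counter(sentences).items():
            if count > 1:
                print(f"Repeated sentence ({count}x): {sentence[:70]}...")

    if __name__ == "__main__":
        main()

Treat any output as an invitation to read more closely, not as a finding; the reviewing attorney still has to run down every citation and quotation.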

If secondary sources are cited, require the expert to produce a copy of the cover, copyright page and the cited page of the textbook, article or treatise (a best practice anyway). Spot-checking propositions is likewise good practice, but repeated errors in citation or attribution should be a giant “red flag” in the AI age that an AI tool or LLM was used to generate the report.

Conclusion

The Khoury filings illustrate the front end of a new litigation risk: expert reports contaminated by unapproved or unverified AI use, with fabrications that go straight to the heart of the case. Courts are increasingly expressing willingness to impose sanctions beyond monetary ones, including disqualification, case-dispositive remedies and referral to professional disciplinary authorities.

Counsel can and should control this risk now with clear engagement terms, preservation and disclosure of AI artifacts when use is permitted and the rigorous supervision of experts. Appropriate supervision of testifying experts by competent litigation counsel is perhaps more important now than ever with the proliferation and acceptance of AI tools across all industries.

George M. Padis is a partner at Sbaiti & Company PLLC, where his practice focuses on commercial litigation and False Claims Act cases. In his previous role, George served as an assistant U.S. attorney, where AI was generally unavailable. That being said, AI was used to assist in the editing and revising of this article; however, all hallucinations contained above are the author’s own.

©2026 The Texas Lawbook.
