
The Texas Lawbook

Free Speech, Due Process and Trial by Jury


Your Client’s ChatGPT Strategy Session Just Became Government Evidence

February 18, 2026 Chris Schwegmann

It started the way these things often do: a sophisticated executive, a federal investigation closing in and the very human impulse to get ahead of the problem.

Benjamin Heppner, a financial services executive charged with defrauding investors at GWG Holdings — a Dallas-based, publicly traded company — had hired Quinn Emanuel, one of the most formidable defense firms in the country.

But before leaning on his lawyers, he did something that millions of people do every day without a second thought. He opened an AI chatbot and started thinking out loud.

Over the following weeks, Heppner used the consumer version of Anthropic’s Claude to analyze his legal exposure, stress-test potential defenses and draft approximately 31 strategy documents laying out arguments he might make in anticipation of an indictment. He later sent the materials to his attorneys, presumably treating them as a carefully organized briefing book. To anyone watching, it might have looked like exactly the sort of proactive preparation that good lawyers encourage.

It wasn’t. On Feb. 17, Judge Jed S. Rakoff of the Southern District of New York ruled that every one of those documents is fair game for federal prosecutors. Not protected by attorney-client privilege. Not shielded by the work product doctrine. Fully discoverable. In a written memorandum, the court was characteristically direct: The AI documents fail to satisfy “at least two, if not all three” elements required for privilege to attach. The ruling appears to be the first of its kind in the country. The answer to whether AI-generated legal strategy is privileged, unambiguously, is no.

For Texas lawyers and the corporate clients they serve, the implications are immediate. Texas is home to more Fortune 500 headquarters than any state except California, a sprawling energy sector accustomed to regulatory scrutiny and some of the most active federal dockets in the country. If your clients are using AI to think through legal problems before they call you, United States v. Heppner is a five-alarm warning.

Why the Judge Said No — And Why the Reasoning Is Airtight

Judge Rakoff did not break new legal ground. He applied rules that have governed privilege disputes for decades to facts no court had previously encountered. That combination — settled doctrine, novel behavior — is precisely what makes the ruling durable and difficult to argue around on appeal.

Claude Is Not Your Lawyer

Attorney-client privilege protects communications with a licensed attorney who owes fiduciary duties and is subject to discipline.

Claude is none of those things. Anthropic’s own terms make it explicit: Claude does not provide legal services and using the platform creates no attorney-client relationship.

When the government asked Claude directly whether it could provide legal advice, it responded that it could not and recommended consulting a qualified attorney. That response was submitted as an exhibit and cited in the court’s opinion — the AI itself, in real time, disclaimed the very role the defendant was claiming it had performed.

The Terms of Service Killed Confidentiality

Privilege is also waived when a client discloses communications to a third party not bound to keep them secret.

Anthropic’s privacy policy — which users agree to as a condition of using Claude — states that the company collects users’ inputs and Claude’s outputs, uses that data to train the model, and reserves the right to disclose it to third parties, including governmental regulatory authorities.

The moment Heppner typed his legal strategy into the chat window, he disclosed it to a party whose written policies put him on clear notice that the information was not confidential. Most users of consumer AI tools — ChatGPT, Claude, Gemini, Microsoft Copilot — have never read those terms. The Heppner ruling makes those unread terms legally consequential.

Forwarding It to Your Lawyer Doesn’t Fix It

Even assuming Heppner created the documents to share with counsel — and even though he ultimately did — that transmission could not retroactively transform unprivileged documents into privileged ones.

This is the “no alchemy” rule: A document not privileged at the moment of creation does not acquire protection by being handed to an attorney afterward. The privilege inquiry looks backward. If the document wasn’t protected then, it isn’t protected now.

The work product doctrine failed Heppner for the same essential reason: His own lawyers confirmed on the record that the AI documents were prepared by Heppner of his own volition, without counsel's direction, and did not reflect counsel's strategy at the time of creation.

What Texas Lawyers Should Be Telling Clients Right Now

The Heppner case calls for action now. The ruling is clear, the reasoning is durable, and the behavior it describes is happening every day across Texas. Here is the practical guidance that should be flowing from outside counsel to clients, and from general counsel to business leadership, in the weeks ahead.

The Simple Test

Give clients a heuristic simple enough to remember under pressure: If you would not write it in an email that could be subpoenaed, do not type it into a consumer AI platform. A consumer AI conversation is, for privilege purposes, functionally equivalent to an email sent to an unrepresented third party. The terms say inputs can be logged. The privacy policy says they can be shared. The platform disclaims providing legal advice. Whatever sense of privacy the interface creates is an illusion courts will not honor.

The Pre-Representation Danger Zone

The most perilous period in any legal matter is the interval between when a client first suspects trouble and when outside counsel is formally engaged. Clients routinely reach for AI during exactly that window — researching exposure, drafting timelines, testing defenses. Heppner holds that everything created during that period on a consumer AI tool is almost certainly discoverable. The instruction is direct: before you open the chatbot, call your lawyer. That call is privileged. The chat session is not.

Address the ‘Shadow Lawyer’ Habit Inside the Company

The Heppner problem lives inside corporate legal departments, in the habits that have quietly formed as AI tools have become ubiquitous. The HR director who asks AI to evaluate a termination decision, the compliance officer who stress-tests a regulatory position in a chat window, the manager who types an employee complaint into ChatGPT before escalating to legal — each of those conversations is potentially discoverable and none is privileged. General counsel need to draw an explicit line: General business tasks are a different matter from sensitive legal analysis, and the latter must flow through counsel using tools that operate under contractual protections the company has actually reviewed.

The Enterprise AI Distinction — and Its Limits

Enterprise AI tools deployed under negotiated agreements that exclude client data from model training and restrict third-party disclosure present a meaningfully lower risk profile than consumer platforms. But enterprise AI is not a safe harbor. It addresses the confidentiality problem and potentially supports a work product argument — it does not, by itself, create attorney-client privilege.

The safest configuration is one in which counsel selects the tool, directs its use and can demonstrate that the AI-assisted analysis reflects the lawyer’s mental processes rather than the client’s unilateral activity. Texas GCs negotiating enterprise AI contracts should push for explicit, enforceable data-protection terms and involve outside counsel in those negotiations before agreements are signed.

Update Litigation Holds Now

Most corporate litigation hold protocols were drafted before generative AI became a fixture of daily work and do not capture AI chat logs, prompt histories or exported AI-generated documents. That gap is legally significant in a post-Heppner world. General counsel should review and update their standard hold templates immediately — and assess document retention policies to address how long AI interaction logs are retained, where they are stored, and who has access to them.

The Broader Picture

Heppner cuts both ways in litigation. Employees who used AI to research their legal rights, draft demand letters or strategize about claims have generated the same category of unprotected material that sank Heppner. Texas employment litigators and corporate defense counsel should be building AI discovery into their standard playbooks now — treating AI chat logs as a routine category of electronically stored information alongside emails and text messages.

Conclusion: The Chatbot Is a Third Party. Act Accordingly.

There is a version of this story that ends differently. Judge Rakoff suggested the outcome might have been different had Heppner's lawyers directed him to use Claude as a documented component of the representation. But Heppner had a consumer chatbot, a standard privacy policy and the understandable but legally fatal impulse to prepare himself before calling his lawyers. Thirty-one documents later, the government had everything he had been thinking.

That is the story Texas lawyers need to be telling — to clients, corporate leadership, HR departments, compliance officers and every executive who treats an AI chatbot as a private space for working through difficult problems. The privacy the interface appears to offer is not real. The confidentiality the conversation feels like it carries does not exist. And the privilege a person might reasonably expect when the subject matter is their own legal exposure will not be recognized by a court applying rules that have governed this area of law for generations.

Update your litigation holds. Audit your organization’s AI usage. Establish governance protocols that route sensitive legal analysis through counsel. Train senior leadership with the specific facts of this case — not with abstractions about AI risk but with the story of a sophisticated executive, a federal judge and 31 documents that could not be taken back. And counsel every client, in every engagement where legal exposure is on the table, to call their lawyer before they open the chatbot.

Chris Schwegmann is the managing partner of Lynn Pinker Hurst & Schwegmann.

©2026 The Texas Lawbook.

Content of The Texas Lawbook is controlled and protected by specific licensing agreements with our subscribers and under federal copyright laws. Any distribution of this content without the consent of The Texas Lawbook is prohibited.

