The Texas Lawbook

Free Speech, Due Process and Trial by Jury

Defamed by a Llama — Legal Consequences of AI-Generated Falsehoods

May 29, 2025 Heath Cheek & Shane Thomas

There are many ways to be defamed: verbal rumors, print news stories, television news stories and social media posts, just to name a few. But now — based on a newly filed lawsuit against Meta for its “Llama” AI program — we have to add “defamation by artificial intelligence” to our lexicon. 

Complaint Filed by Robby Starbuck

Robby Starbuck, a prominent conservative social media commentator, alleges Meta’s artificial intelligence tool defamed him by falsely asserting that he participated in the Jan. 6, 2021, riot at the U.S. Capitol Building. As alleged in Starbuck’s complaint:

Imagine waking up one day and learning that a multi-billion-dollar corporation was telling whoever asked that you had been an active participant in one of the most stigmatized events in American history—the Capitol riot on January 6th, 2021—and that you were arrested for and charged with a misdemeanor in connection with your involvement in that event.

Further imagine that these accusations were completely false: that you were at your home in Tennessee on January 6th, and that you had never been accused of committing any crime in your entire life; in fact, you hadn’t received as much as a parking ticket in over a decade. But despite their utter baselessness, these false statements were widely believed because they were made by one of the most powerful and credible technology companies in the world.

Starbuck claims he quickly met with Meta’s managing executives and legal counsel, pleading with them to retract the accusations, investigate the cause of the alleged error and implement precautions to prevent similar harm to others in the future. However, Starbuck claims Meta failed to take action after their meeting and continued to spread false information about him for months. Specifically, in April 2025, Starbuck learned that a Meta AI voice feature available through Meta’s social media applications was falsely claiming that he had “pled guilty over disorderly conduct” on Jan. 6.

Starbuck brought a defamation per se claim seeking over $5 million. To prove defamation per se, the Delaware Supreme Court held in Page v. Oath Inc. that a plaintiff must show: “1) the defendant made a defamatory statement, 2) concerning the plaintiff, 3) the statement was published and 4) a third party would understand the character of the communication as defamatory.” The question here is: who made the alleged defamatory statement? Starbuck alleges Meta’s AI feature Llama is responsible.

How Does this Program Work?

Meta AI is a generative artificial intelligence chat platform owned by Meta that uses large language models (LLMs) to process natural language to provide intelligent responses in a chat, including follow-up responses that mimic human conversation in a sophisticated fashion. To power its features, Meta AI uses Llama, a series of large language models developed by Meta. Meta has produced several versions of Llama to date (including Llama 1, Llama 2, Llama 3, Llama 3.1, Llama 3.2, Llama 4, Llama 4 Maverick and Llama 4 Scout) and continues to develop new models. Later models of Llama are marketed by Meta as having improved accuracy, efficiency and/or capabilities as compared to earlier models.

Meta markets its Llama models as “a collection of pretrained and instruction-tuned mixture-of-experts LLMs offered in two sizes: Llama 4 Scout & Llama 4 Maverick. These models are optimized for multimodal understanding, multilingual tasks, coding, tool-calling, and powering agentic systems. The models have a knowledge cutoff of August 2024.”

Large language models are, by their very design, reliant upon the material fed into them to produce their results. For example, if an LLM is fed material from only a single source (e.g., The New York Times), then its output will reflect only a single point of view. If an LLM is fed from many different sources (e.g., The New York Times, Fox News, MSNBC and The Wall Street Journal), then its output should be more blended and balanced. But an LLM cannot distinguish between truth and fiction. If an LLM is fed an article from an approved source, but that article turns out to be false, then the LLM will likely repeat the falsehood.
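The dynamic described above can be illustrated with a deliberately simplified toy model (this is an illustrative sketch, not Meta’s actual technology): a tiny bigram text generator that can only reproduce word patterns present in whatever it was trained on. If its training data contains a false statement, the model echoes it just as readily as a true one, because nothing in the mechanism checks facts.

```python
import random

def train_bigrams(corpus):
    """Build a bigram table: each word maps to the list of words observed after it."""
    table = {}
    words = corpus.split()
    for a, b in zip(words, words[1:]):
        table.setdefault(a, []).append(b)
    return table

def generate(table, start, length=8):
    """Generate text by repeatedly sampling a statistically plausible next word."""
    out = [start]
    for _ in range(length):
        followers = table.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

# Hypothetical training data that happens to contain a falsehood; the model
# has no concept of truth, only of which words tend to follow which.
corpus = "the commentator attended the riot . the commentator attended the riot ."
table = train_bigrams(corpus)
print(generate(table, "the"))  # echoes its sources faithfully, accurate or not
```

Real LLMs are vastly more sophisticated, but the core point survives the simplification: the model predicts likely continuations from its training material, so a falsehood that appears often enough in that material becomes a likely output.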

One of the major issues with defamation is how quickly it can spread on the internet. For example, one of the authors represented a reality TV star three years ago after Variety published an article containing a multitude of defamatory statements about her. The story was published on a Friday afternoon and was summarized and republished by more than 20 other media outlets (including the New York Post, Daily Mail and Entertainment Tonight) by the following Monday. All of those stories came from a common source (the Variety article), but each new author added new spin, commentary or twists to the story. Then dozens of smaller blogs and Twitter (now X) users began republishing those stories. My firm spent weeks sending cease and desist letters asking these publications to modify or take down their stories (most were successfully modified).

With an LLM, however, depending on what sources it was fed, it will see 20 seemingly reliable media stories on a topic and assume their authenticity, thus creating a new opportunity for republication. In fact, in preparing this article, I asked Llama about my previous case, and unfortunately it provided a summary of the allegations against my client carrying the same slant and tone as the various published articles. In other words, LLMs have a way of perpetuating previously debunked or resolved falsehoods.

What’s Next?

This case is one of the first legal battles over AI-generated defamation, signaling the possibility of major concerns for Meta and its competitors in artificial intelligence, such as OpenAI’s ChatGPT, Google’s Gemini, xAI’s Grok, Anthropic’s Claude and Microsoft’s Copilot. As these programs continue to grow, questions about responsibility for false or damaging claims are quickly moving from hypothetical debates to courtroom litigation, with Starbuck’s case at the forefront.

Here, for example, it is unknown what source was fed into Meta’s Llama that led it to declare that Starbuck was arrested on Jan. 6. Even though Meta was not the originator of the false claim, it allowed Llama to republish the false claim and so is still liable for the defamation. In Delaware, as the state’s high court ruled in Short v. News-Journal Co., the general rule is, “the publisher and re-publisher of defamatory matter are strictly accountable and liable in damages to the person defamed, and neither good faith nor honest mistake constitutes a defense, serving only to mitigate damages.”

It is also unknown what level of scrutiny courts will apply to Llama’s statements. Is Llama treated as a media source, entitling it to the heightened standards afforded to media? Does it matter that Llama’s algorithm was digesting and republishing a media source? Again, the law in Delaware makes it clear that one is strictly liable for republishing a defamatory statement.

Since the filing of the lawsuit, Joel Kaplan, Meta’s chief global affairs officer, has issued a public apology to Starbuck. Kaplan stated: “Robby — I watched your video — this is unacceptable. This is clearly not how our AI should operate. … We’re sorry for the results it shared about you and that the fix we put in place didn’t address the underlying problem. … I’m working now with our product team to understand how this happened and explore potential solutions.” But Starbuck has rejected the apology, claiming it comes far too late.

No matter what Meta says, the bigger question remains: How will artificial intelligence be examined moving forward? Since the filing of this lawsuit, Meta AI has made it more difficult to search for information about Starbuck, with The Wall Street Journal reporting this response: “Sorry, I can’t help you with the request right now.”

Whether Starbuck’s case succeeds or not, it marks the start of a new era in which courts will increasingly be asked to determine where responsibility lies when AI crosses legal lines. One thing is abundantly clear: AI, free speech and defamation will be analyzed with a closer eye than ever.

Heath Cheek is a complex commercial litigation partner at Bell Nunnally. He can be reached at hcheek@bellnunnally.com.

Shane Thomas is a litigation associate at Bell Nunnally. He can be reached at sthomas@bellnunnally.com.

©2025 The Texas Lawbook.

Content of The Texas Lawbook is controlled and protected by specific licensing agreements with our subscribers and under federal copyright laws. Any distribution of this content without the consent of The Texas Lawbook is prohibited.

