Extract from Kelly Twigger’s article “Episode 173: Fake Citations, Real Sanctions: AI Hallucinations Spark $31K in Sanctions”
Welcome to our Case of the Week segment of the Meet and Confer podcast. My name is Kelly Twigger. I am the Principal at ESI Attorneys, a law firm for ediscovery and information law, and the CEO and Founder at Minerva26, where we take the insights from our practice and provide a strategic command center for you to leverage the power of Electronically Stored Information (ESI). Thanks so much for joining me today.
Our Case of the Week segment is brought to you by Minerva26 in partnership with ACEDS. Each week on this segment, I choose a recent decision in ediscovery case law and talk about the practical considerations of that decision for counsel to apply in their practice, and for other legal professionals to know and understand as they engage in ediscovery.
This week’s decision hits on an issue that we are seeing regularly, and it is a tremendous cause for concern and a wake-up call for law firms: the submission of briefs to a court that contain hallucinated citations from Generative AI. This week’s decision came out a few days before another high-profile story broke that I covered on LinkedIn, in which lawyers for Anthropic, the Generative AI company that brought us Claude AI, filed a declaration from an expert at Anthropic that is alleged to have contained a cite to a non-existent academic article to bolster the company’s arguments over alleged misuse of lyrics to train its Claude AI tool. If you don’t already know, Anthropic is an AI company whose tagline is “through our daily research, policy work and product design, we aim to show what responsible AI development looks like in practice.” #irony.