Extract from Stephanie Wilkins’s article “11th Circuit Judge Uses ChatGPT in Deciding Appeal, Encourages Others to Consider It”
Since ChatGPT took the world by storm, there’s been no shortage of stories about lawyers behaving badly, using it to cite hallucinated cases in court filings, and about courts, in turn, issuing standing orders restricting its use. Far fewer are the stories of judges embracing generative artificial intelligence, at least in the U.S.
Judge Kevin Newsom of the U.S. Court of Appeals for the Eleventh Circuit might have just flipped the script.
In a move he acknowledged might be seen by the profession as “heresy” or “unthinkable,” Newsom not only admitted to using ChatGPT but wrote a 32-page concurrence outlining exactly how. In the May 28 opinion, Newsom laid out how he used the generative AI chatbot to help inform his analysis of a key issue in an insurance appeal, and he encouraged the legal community to consider following his lead.
It starts:
“I concur in the Court’s judgment and join its opinion in full. I write separately (and I’ll confess this is a little unusual) simply to pull back the curtain on the process by which I thought through one of the issues in this case—and using my own experience here as backdrop, to make a modest proposal regarding courts’ interpretations of the words and phrases used in legal instruments.
Here’s the proposal, which I suspect many will reflexively condemn as heresy, but which I promise to unpack if given the chance: Those, like me, who believe that “ordinary meaning” is the foundational rule for the evaluation of legal texts should consider—consider—whether and how AI-powered large language models like OpenAI’s ChatGPT, Google’s Gemini, and Anthropic’s Claude might—might—inform the interpretive analysis. There, having thought the unthinkable, I’ve said the unsayable.