David Horrigan: AI Case Law Update: The Lamborghini Doctrine of Hallucinations

Extract from David Horrigan’s article “AI Case Law Update: The Lamborghini Doctrine of Hallucinations”

When the misuse of generative AI in Mata v. Avianca Inc. made headlines in 2023, there was hope the widespread publicity about the AI “hallucinations” in the case might serve as a helpful warning.

Lawyers would learn the legal research lessons of Mata, understand how to use generative AI correctly, and that would be that.

Sadly, that has not been the case.

According to the database compiled by French researcher Damien Charlotin, as of late July, there were more than 230 legal matters around the world in which fictitious legal citations generated by generative AI became an issue.

However, recent cases illustrate a common theme in this avalanche of AI legal research gone wrong: not unlike a high-powered Lamborghini operated by a new motorist without a seatbelt or an owner's manual, the technology isn't usually the issue; it's how it's used.

Unprecedented No More

In the context of AI and machine learning, a hallucination refers to the generation of plausible-sounding but inaccurate or fabricated information.

When faced with the hallucinated legal citations filed with his court in Mata, U.S. District Judge Kevin Castel wrote, “This court is presented with an unprecedented circumstance.”

