Cassandre Coyer: Hallucinations Are Legal’s Main Concern With Generative AI, But Maybe Not For Long

Extract from Cassandre Coyer’s article “Hallucinations Are Legal’s Main Concern With Generative AI, But Maybe Not For Long”

When an Avianca serving cart allegedly bumped into the knee of Roberto Mata on a flight to New York four summers ago, few could have predicted that the lawsuit that ensued would soon become the prime example of how generative artificial intelligence tools can hallucinate.

Now, six months after the New York lawyer involved in the Mata v. Avianca case submitted a ChatGPT-written brief with fake case citations to the court, the list of risks that the legal industry is worried about has grown, with considerations around bias and cybersecurity threats inching their way to the top.

A poll of legal, risk and compliance professionals from Deloitte shared exclusively with Legaltech News found that while organizations are increasingly tapping their legal professionals to consult on the legal risks of using AI, they remain split on whether they’ll increase their use of the technology over the next year.

“There were no surprises per se in that this is such a new area that we really had no expectation of what people were going to say. And so this serves as a good baseline to start out and take it from there,” noted Mike Weil, managing director and Digital Forensics leader in the Discovery practice of Deloitte Financial Advisory Services. “The good news in this is that it’s pretty clear that lawyers are very engaged within their organizations with assessing the risk around generative AI and AI in general.”

Read more here
