Extract from Cassandre Coyer’s article “New Learning Curve: Knowing When and How to Pick Generative AI”
As soon as legal professionals started using ChatGPT at home to get help with dinner plans or vacation itineraries, the legal industry began exploring how it could leverage generative artificial intelligence for its own purposes.
Since then, some applications have emerged as better fits for the technology than others. But it's still early in the technology's lifetime, and identifying clear use cases remains a challenge for many firms and companies that are unsure where to start.
On Friday, a panel of experts attempted to offer some answers during the “Tips for Adopting Generative AI” webinar from Debevoise & Plimpton, discussing when to use generative AI, which large language models (LLMs) to pick and how to determine risks.
Is Gen AI Always Better Than Good Old AI?
As a starting point, Matt Kelly, counsel at Debevoise & Plimpton and a member of the firm's data strategy and security group, advised organizations to identify low-risk, high-value generative AI uses first.
To do so, he offered some criteria: look for use cases involving large volumes of data too diffuse for humans to process; tasks that run up against human limitations because they are too “boring”; and areas with non-zero error tolerance, where it might be OK for the output to vary.