Extract from Aidan Randle-Conde’s article “Leveraging Generative AI in eDiscovery: The Art and Science of Prompt Engineering”
The use of generative AI in eDiscovery is opening new avenues for efficiency and precision. But, as is often the case with powerful tools, the devil is in the details. A significant part of those details? Prompt engineering. Let’s take a look.
What is Prompt Engineering?
We interact with a Large Language Model (LLM) through a “prompt.” LLMs are AI models trained on billions of documents and used to generate text; a prompt is the query or instruction that tells the LLM what to generate or how to respond. Think of it as a refined search query written in plain language that the average person can understand. Generally, the more precise and relevant your prompt, the more accurate and useful the AI-generated results will be.
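To make that idea concrete, here is a minimal sketch, in Python, of how a vague request can be sharpened by adding task, context, and output-format instructions. The wording and the helper function are invented for illustration; the article does not prescribe any particular prompt.

```python
def build_prompt(task: str, context: str = "", output_format: str = "") -> str:
    """Assemble a prompt from a task, optional context, and an expected output format."""
    parts = [task]
    if context:
        parts.append(f"Context: {context}")
    if output_format:
        parts.append(f"Respond with: {output_format}")
    return "\n".join(parts)

# A vague prompt leaves the model guessing at both the question and the answer format:
vague = build_prompt("Tell me about this email.")

# A precise prompt states the decision to make, the context, and the expected output:
precise = build_prompt(
    task="Decide whether this email discusses the 2021 supplier contract.",
    context="The email below was collected from the custodian's mailbox.",
    output_format='"Yes" or "No", followed by a one-sentence justification.',
)
```

The precise version turns an open-ended request into a narrow, answerable question, which is the core habit of prompt engineering.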
Spotlight AI, developed here at Hanzo, relies on prompt engineering to help our users separate responsive from non-responsive content in a scalable and transparent manner. It breaks the complexity of a case down into simple facets that are easy to understand and easy to answer. The facets themselves are also written by generative AI, which extracts the salient features of a case from long and complex documents. Users can supplement or tweak these facets, and can even provide examples to clarify edge cases. This gives the user a great deal of control over first-pass review while allowing computers to take on the most laborious steps.
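As an illustration only (Spotlight AI's actual implementation is not public, and the facet wording, function name, and examples below are invented), a facet-based first-pass review prompt might be assembled like this, with user-supplied edge-case examples included as few-shot demonstrations:

```python
# Hypothetical facet: one simple, answerable question drawn from the case.
FACET = "Does the document discuss pricing for the Acme widget contract?"

# Optional user-supplied examples that clarify edge cases (few-shot prompting).
EXAMPLES = [
    ("Email quoting $12/unit for Acme widgets.", "Responsive"),
    ("Email about office party catering prices.", "Non-responsive"),
]

def facet_prompt(facet: str, document: str, examples=EXAMPLES) -> str:
    """Build a prompt asking an LLM to classify one document against one facet."""
    lines = [
        f"Facet: {facet}",
        "Classify the document as Responsive or Non-responsive.",
        "",
    ]
    for text, label in examples:
        lines.append(f"Example document: {text}")
        lines.append(f"Label: {label}")
    lines.append("")
    lines.append(f"Document: {document}")
    lines.append("Label:")
    return "\n".join(lines)

prompt = facet_prompt(FACET, "Attached is our revised widget pricing for Acme.")
```

Because each facet is a single narrow question, the same template can be run across an entire corpus, and a reviewer can audit exactly what the model was asked.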
In this post, we will share some of the lessons learned in developing Spotlight AI.