Igor Labutov, Laer AI: Your LLM-Based Document Review Is Probably Not Defensible


Extract from Igor Labutov’s article “Your LLM-Based Document Review Is Probably Not Defensible”

Defensibility of the defensibility process

You can make anything defensible as long as you have a defensible process for establishing defensibility. Let's say you were going to bet your life on the predictions a psychic would make. You'd probably want to check that the psychic is legit first. An obvious way to do that is to ask them questions to which you already know the answers and see how accurate they are.

If we were to translate this into legalese, how you perform this testing determines whether your testing methodology is defensible. If your test of the psychic told you that their predictions were accurate, but once you walked out the door all their other predictions failed to hold up, then you messed up somewhere in your testing workflow; it probably wasn't defensible. If, on the other hand, the psychic's predictions of the future were just as accurate as their predictions on your test data, then the workflow you used probably was defensible.
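To make the testing idea concrete in document-review terms: the usual way to check any reviewer, psychic or LLM, is to draw a random control set that humans have already labeled, compare the model's calls against those labels, and report the estimate with a confidence interval rather than a bare number. Below is a minimal Python sketch of that idea, not anything prescribed by the article; the `human_label` and `llm_label` callables and the sample size are hypothetical placeholders for whatever your review platform provides.

```python
import math
import random

def wilson_interval(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a binomial proportion."""
    if n == 0:
        return (0.0, 1.0)
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    margin = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return (max(0.0, center - margin), min(1.0, center + margin))

def estimate_recall(documents, human_label, llm_label, sample_size=500, seed=42):
    """Estimate the LLM's recall on a random, human-labeled control set.

    human_label / llm_label map a document to True (responsive) or False.
    Returns the point estimate and a 95% confidence interval.
    """
    random.seed(seed)
    control = random.sample(documents, min(sample_size, len(documents)))
    responsive = [d for d in control if human_label(d)]
    hits = sum(1 for d in responsive if llm_label(d))
    recall = hits / len(responsive) if responsive else float("nan")
    return recall, wilson_interval(hits, len(responsive))
```

The key property this captures is the one the psychic analogy turns on: because the control set is a random sample, accuracy measured on it is an unbiased estimate of accuracy on the documents you haven't tested, and the interval tells you how much trust the sample size actually buys.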

Can LLMs in document review be defensible?

The short answer is yes. If psychics can be made defensible, so can LLMs. So why are most applications of LLMs today not defensible? It has to do with how people use them.

Read more here
