
Extract from Benjamin Sexton’s article “Generative AI Document Review: Experts Compare Notes”
Document review consumes roughly 80 percent of e-discovery spend, according to RAND Corporation research. As seasoned litigators know, cases can settle over the prospect of review costs alone. Generative AI tools like Relativity aiR continue in the lineage of technology-assisted review (TAR) to reduce the cost and burden of review while increasing the transparency and accuracy of the process. But successful deployment requires more than flipping a switch.
A recent expert panel discussion brought together practitioners from JND, Bayer, and Relativity to discuss practical workflows for generative-AI-based document review. Their collective experience reveals a process that shares DNA with traditional TAR but demands different discipline at key stages.
The first step of a generative AI document review, defining which documents require review, resembles traditional approaches. In most cases, practitioners still collect data, apply date restrictions, run search terms, and apply other parameters to compress the population before review begins.
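The culling step described above can be sketched in code. This is a minimal illustration, not any platform's actual API: the `Doc` record, the field names, and the `cull` helper are all hypothetical, standing in for the richer metadata and query tools a real review platform provides.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical document record. Real review platforms expose far richer
# metadata; these fields are just enough to illustrate population culling.
@dataclass
class Doc:
    doc_id: str
    sent: date
    text: str

def cull(docs, start, end, terms):
    """Compress a collection into a review population: keep documents
    inside the date window that hit at least one search term."""
    terms = [t.lower() for t in terms]
    return [
        d for d in docs
        if start <= d.sent <= end
        and any(t in d.text.lower() for t in terms)
    ]

corpus = [
    Doc("D001", date(2021, 3, 4), "Re: merger diligence schedule"),
    Doc("D002", date(2019, 1, 2), "Lunch order for the team"),
    Doc("D003", date(2021, 6, 9), "Updated diligence checklist attached"),
]

# Apply a date restriction and a single search term to shrink the population.
population = cull(corpus, date(2021, 1, 1), date(2021, 12, 31), ["diligence"])
print([d.doc_id for d in population])  # → ['D001', 'D003']
```

The point of the sketch is the ordering: filtering happens before any reviewer, human or model, sees a document, which is why the resulting population needs to be settled early.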
The difference lies in timing. With generative AI review, finalizing the population early matters more than it did with linear review or even continuous active learning (CAL).