
Extract from Ingrid van de Pol-Mensing’s article “What is defensible AI in legal practice?”
For lawyers, defensible AI means more than simply having an answer ready when asked how AI was used. It means being able to demonstrate that the process behind the use of AI was reliable, transparent, and compliant. A defensible decision is one that can withstand an audit and maintain clients' trust.
Working this way is nothing new for lawyers; the profession demanded this kind of thoroughness long before AI entered the picture. Legal practice has always depended on maintaining clear evidence structures, citing sources meticulously, and being prepared to verbally defend one's reasoning and conclusions. AI has not changed these expectations; it has simply introduced new tools that must fit within this longstanding culture of rigor.
In practice, defensible AI requires explainability, documented audit trails, secure handling of confidential data, and human oversight.
The promise of AI is undeniable, and the legal industry has taken note. In recent research from Ari Kaplan Advisors, 81% of partners and senior lawyers said that AI will be essential to staying ahead in the coming year. And with faster reviews, lower costs, and shorter turnaround times, it's easy to see why. But focusing only on speed means missing the bigger picture. In legal practice, a tool that produces unreliable or opaque results is not a productivity gain; it's a risk.