Extract from Benjamin Joyner’s article “Agentic Systems Add New Layer of AI Hallucination Risk in Legal Work”
The legal industry has become uncomfortably familiar with the concept of generative artificial intelligence hallucinations, as the lawyers sanctioned for filing court documents with fake or inaccurate citations can attest.
With a new generation of agentic tools rapidly entering the market, lawyers and support staff will also need to guard against an even wider range of potential misfires.
Agentic AI tools can magnify the hallucinations that already appear in gen AI tools, allowing unchecked errors to compound over the course of a multistep workflow.
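To see how that compounding can happen, consider a minimal sketch, not drawn from the article, of a multistep legal workflow in which each step consumes the previous step's output without verification. The pipeline stages and function names here are purely hypothetical:

```python
# Illustrative sketch (hypothetical names): an unchecked error in one step
# of an agentic workflow flows directly into the next step's input.

def research_step(query: str) -> str:
    # Stand-in for a model call that may hallucinate a case citation.
    return "Smith v. Jones, 123 F.4th 456 (9th Cir. 2024)"  # possibly fabricated

def drafting_step(citation: str) -> str:
    # The drafting step treats the upstream citation as ground truth.
    return f"As the court held in {citation}, the motion should be granted."

def filing_step(draft: str) -> str:
    # With no verification gate between steps, a fabricated citation
    # survives into the final work product.
    return draft

draft = drafting_step(research_step("Find precedent on sanctions for AI misuse"))
print(filing_step(draft))
```

The point of the sketch is structural: because each stage trusts the one before it, a single hallucination early in the chain is not merely preserved but built upon.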
But agentic tools also present novel forms of hallucinations, with errors appearing not just in the information a tool produces, but also in the very process the tool uses to complete a task.
Rok Popov Ledinski, founder and CEO of MPL Legal Tech Advisors, told Law.com that these new kinds of potential errors derive in part from the way agentic tools are built and how they operate. AI agents can create or interact with other agents that they task with various components of a user’s request, creating opportunities for hallucinations in both agent-to-human and agent-to-agent communications.
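A rough sketch of that delegation pattern, again with hypothetical names and not taken from any specific product, shows where the two kinds of communication risk arise: in what each sub-agent reports back (agent-to-agent) and in the coordinating agent's synthesis for the user (agent-to-human):

```python
# Illustrative sketch (hypothetical names): a coordinator agent delegates
# pieces of a request to worker agents, creating two distinct points where
# hallucinations can enter: each worker's unverified report, and the
# coordinator's final summary for the human user.

from dataclasses import dataclass

@dataclass
class Agent:
    name: str

    def run(self, task: str) -> str:
        # Stand-in for a model call; any invocation here can hallucinate.
        return f"[{self.name}] result for: {task}"

class Coordinator(Agent):
    def run(self, task: str) -> str:
        subtasks = ["find authorities", "summarize holdings"]
        # Agent-to-agent: each worker's output is consumed as-is.
        reports = [Agent(f"worker-{i}").run(t) for i, t in enumerate(subtasks)]
        # Agent-to-human: the synthesis step adds a second opportunity
        # for distortion before a person sees anything.
        return " | ".join(reports)

print(Coordinator("coordinator").run("Research motion to dismiss"))
```

In this structure, an error can originate in a worker, in the coordinator, or in the handoff between them, which is why process-level hallucinations are harder to spot than a single bad citation.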