Extract from Joey Seeber’s article “The Impact of AI Tools on Intent and What Can Go Wrong: Navigating the Promise and Peril of Generative AI in Legal Practice”
Generative AI (gen AI) tools have moved beyond hype, becoming embedded in workflows and providing value in real-world applications. However, they also present substantial risks for legal professionals, whose work often involves complex cognitive tasks that shouldn’t be handed over to AI without human oversight.
Gen AI tools can blur the line between a user's original intent and machine-generated suggestions, leading to misinterpreted information and errors in task management. When legal professionals rely exclusively on gen AI tools to perform complex cognitive tasks, without human oversight and domain expertise, they expose their organizations to unnecessary liability and potential litigation.
As these AI tools evolve to anticipate legal professionals’ needs, the question becomes: Who should ultimately control intent, and what could go wrong if that control is misplaced?
Gen AI’s Ability to Infer Intent
Gen AI tools and their use cases have moved well beyond the nascent stage within the legal industry. A recent study showed that 26% of legal professionals use gen AI, while 74% of law firm professionals use it for legal research and other tasks.