Todd Itami: Hugging the Boogeyman: AI Is Not a Trap, It’s a Safety Net

Extract from Todd Itami’s article “Hugging the Boogeyman: AI Is Not a Trap, It’s a Safety Net”

Good organizational artificial intelligence (AI) policy embodies a simple rule: scale your scrutiny with the stakes.

With basic attorney training and achievable deployment, AI will reduce the risk of malpractice in 2025, not expand it. Responsible adoption hinges on a principle already deeply embedded in legal ethics and practice: the diligence with which you verify and secure an information source should be proportional to the significance of its use. This approach applies to search engines, fact witnesses, case law databases, junior attorneys, and now, AI.

This article summarizes select criticisms, compares them with malpractice threats posed by non-AI technology, and tries to convince you that responsible deployment is not as challenging as the haters make it out to be.

The AI Malpractice Boogeyman Cometh

Let’s be clear: hallucination risks are real and often unpredictable, despite pronounced improvements over the past year. Hallucination frequency can vary with the specific facts of your prompt (i.e., “prompt sensitivity”) or with unannounced model updates. And the landscape changes too rapidly for comprehensive lists of do’s and don’ts. This underscores the need to teach principles, not bright-line rules, through hands-on experience.


ACEDS