Extract from Cara Peterman, Courtney Quirós, Charlotte Bohn’s article “Can Opposing Parties See Your AI Prompts? Discovery Challenges in the AI Era”
Imagine this scenario. You are an in-house lawyer juggling a full inbox, back-to-back meetings and a growing list of demands. Your company is also in active litigation, managed by your in-house legal team. A calendar reminder appears for a discovery response deadline in that litigation that you are tasked with handling.
To save time, you send the requests to several employees outside the legal department and ask them to fill in factual information related to the business. Without your knowledge, they copy several requests into a public-facing artificial intelligence (AI) tool. Your company has its own proprietary, internal AI tools, but many employees either don’t know about them or prefer to use external platforms. So, attempting to be helpful, they craft a prompt along the lines of: “Draft responses that sound professional and minimize risk using the documents provided.” They feed the tool select background facts, an internal timeline, and notes from discussions with your company’s business leaders. The AI generates clean, polished discovery responses, which are returned to you and ultimately incorporated into the version you serve on the opposing party.
You think nothing of this until, a few weeks later, you are staring at an interrogatory served on your company in the same matter that asks: What AI tools did the company use to prepare its discovery responses? Provide all prompts, inputs, outputs, and chat histories related to any AI-assisted drafting of discovery materials. Suddenly, what felt like an efficient delegation decision has become a potential evidentiary minefield.
As AI becomes embedded in daily legal and business workflows, scenarios like this are no longer purely hypothetical. As organizations and their employees (including, increasingly, legal departments) expand their use of generative AI tools, they may unwittingly be creating a new and largely unexamined category of potentially discoverable information.