Extract from Kelly A. Lavelle’s article “Discovery Risks of ChatGPT and Other AI Platforms”
OpenAI CEO Sam Altman recently warned that ChatGPT conversations are not legally protected and can be used as evidence in court. Speaking on a podcast, Altman acknowledged that OpenAI is legally required to retain user chats, including deleted ones, due to a current court order discussed later in this article. Comparing AI conversations to those with doctors, lawyers, or therapists, Altman argued that similar confidentiality protections should exist but currently do not. As a result, sensitive exchanges with public AI tools remain fully exposed to discovery, an issue he described as urgently needing to be addressed.
The use of AI tools like ChatGPT and Claude has created new issues for the discovery process. Lawyers must recognize that AI queries and outputs may qualify as electronically stored information (ESI) under both federal and state discovery rules. As AI technology becomes more integrated into legal practice, discovery requests are beginning to target the use of these technologies, seeking access to AI-generated documents, search histories, and communication logs.
Many users may view AI tools as private assistants rather than potential witnesses. However, the use of AI tools like ChatGPT and Claude can inadvertently expose sensitive information, including legal strategies and privileged facts. Users may not realize that third-party AI platforms can be compelled to produce records during litigation, potentially compromising confidentiality and privilege.