
The Risks of Generative AI in Legal Research: Lessons from the First Discovery Case Involving ChatGPT Citations


The integration of generative AI tools like ChatGPT has sparked both excitement and concern within the legal community. While AI offers tremendous potential for automating routine tasks, its misuse can lead to severe consequences, as evidenced by a recent discovery decision that has captured the attention of litigators and legal professionals alike. This case, Iovino v. Michael Stapleton Assocs., Ltd. (MSA), marks the first instance in which a court addressed the implications of using ChatGPT-generated citations in a discovery motion.

In this blog, we’ll delve into the case details, the court’s ruling, and the broader implications of relying on generative AI for legal research. The lessons learned here are crucial for all litigation professionals who must navigate the intersection of cutting-edge technology and legal ethics.

Background

The case of Iovino v. Michael Stapleton Assocs., Ltd. emerged from a whistleblower lawsuit in which the plaintiff, Dr. Iovino, alleged that her termination was retaliatory, stemming from her reports to the State Department’s Office of the Inspector General regarding MSA’s contract violations. The case, heard by U.S. District Judge Thomas Cullen in the Western District of Virginia, became contentious, particularly in the discovery phase.

The specific discovery dispute involved Iovino’s request to depose six MSA employees under Rule 30(b)(6), a request MSA opposed by invoking the Touhy regulations—federal rules that govern the disclosure of information by federal employees in legal proceedings. The Magistrate Judge granted MSA’s protective order, applying the Touhy regulations, a decision Iovino challenged.

The plaintiff’s challenge took an unexpected turn when her legal team submitted a brief with citations that the court discovered were entirely fabricated, likely generated by ChatGPT. This case now stands as a cautionary tale about the perils of using generative AI in legal research without proper verification.

Court’s Analysis and Ruling

Judge Cullen’s analysis of the case hinged on the validity of the protective order, with the Court ultimately affirming that the Touhy regulations did indeed apply to the deposition requests. But the more significant issue arose when the court scrutinized the plaintiff’s brief, uncovering citations to cases that simply did not exist.

The court noted that Iovino’s legal team cited multiple non-existent cases and attributed quotes to actual cases that were nowhere to be found in the decisions themselves. MSA highlighted this issue, suggesting that the fabricated citations were the result of “ChatGPT run amok.”

The implications of this misconduct were severe. The court issued an order for Iovino’s counsel to show cause why they should not face sanctions under Rule 11 for submitting frivolous arguments and potentially attempting to deceive the court. The court’s directive serves as a stark warning to all legal professionals about the ethical responsibilities inherent in legal research and the use of technology.

Implications for Litigators

The Iovino case is not an isolated incident; it echoes previous instances where reliance on AI-generated content in legal filings led to serious ethical violations. One of the most notable cases is Mata v. Avianca, Inc., where a lawyer submitted fake judicial opinions generated by ChatGPT, leading to similar scrutiny and disciplinary action.

These cases underscore a critical point: Generative AI, while powerful, is not a substitute for traditional legal research tools. It lacks the reliability and verification mechanisms essential for producing accurate legal citations. The use of such technology without thorough validation can result in wasted time, increased costs, damage to the reputations of the parties involved, and, most importantly, potential harm to the integrity of the legal system.

Key takeaways for litigation professionals:

  1. Verify All Sources: Before submitting any legal document, confirm that every citation points to a legitimate source and that each case stands for the proposition for which it is cited. Never assume that AI-generated content is accurate without cross-checking it against authoritative legal databases.
  2. Understand the Limitations of AI: While AI can assist in various aspects of legal work, including drafting and summarizing documents, it is not infallible. AI-generated content should be viewed as a starting point that requires human oversight and verification.
  3. Ethical Considerations: The use of AI in legal practice introduces new ethical challenges. Lawyers must be vigilant in maintaining the standards of accuracy and integrity that the profession demands. Misuse of AI tools can lead to disciplinary actions, including sanctions and potential disbarment.
  4. Educate and Share: Legal professionals should actively share these cautionary tales within their networks, including bar associations and academic institutions, to prevent others from making similar mistakes. Awareness is the first step in mitigating the risks associated with AI in legal practice.
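For teams that want a first automated pass before the manual verification described above, a brief can be scanned for candidate citations so none are overlooked during review. The sketch below is illustrative only: the regex is an assumption that covers just a few common U.S. reporter formats, and it builds a checklist for human verification, not a validator of whether the cases exist.

```python
import re

# Illustrative pattern for a few common U.S. reporter citations,
# e.g. "410 U.S. 113" or "678 F. Supp. 3d 443". Real citation formats
# vary widely; this only flags candidates for manual cross-checking.
CITATION_RE = re.compile(
    r"\b\d{1,4}\s+"                                                      # volume
    r"(?:U\.S\.|S\. Ct\.|F\.(?:2d|3d|4th)?|F\. Supp\.(?: 2d| 3d)?)\s+"   # reporter
    r"\d{1,5}\b"                                                         # first page
)

def extract_citations(brief_text: str) -> list[str]:
    """Pull candidate reporter citations out of a brief for manual checking."""
    return CITATION_RE.findall(brief_text)

sample = (
    "Plaintiff relies on Roe v. Wade, 410 U.S. 113 (1973), and "
    "Mata v. Avianca, Inc., 678 F. Supp. 3d 443 (S.D.N.Y. 2023)."
)
for cite in extract_citations(sample):
    print(cite)  # each candidate citation, to be verified by a human
```

A list like this is only a starting point: each extracted citation still has to be looked up in an authoritative database, and the quoted language checked against the actual opinion, before the brief is filed.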

Conclusion

The Iovino v. Michael Stapleton Assocs., Ltd. case serves as a powerful reminder of the ethical responsibilities that come with the use of emerging technologies in the legal field. As generative AI becomes more prevalent, it is imperative that litigators and legal professionals exercise caution, ensuring that their reliance on such tools does not compromise the accuracy and integrity of their work. The future of legal practice may be bright with technological advancements, but it will always require the careful stewardship of those who practice law.

Kelly Twigger
Kelly Twigger is a practicing attorney, software developer, consultant, writer, and speaker on issues in electronic discovery, the development and implementation of legal technology, and how to effectively use data in planning for and during litigation.

She is a co-author of Electronic Discovery and Records and Information Management, and host of Case of the Week at eDiscovery Assistant. As Principal at ESI Attorneys, Kelly manages the boutique eDiscovery and information law firm that acts as operational business partners with its clients to advise law firms, corporations, and municipalities on all areas of electronic information including eDiscovery, privacy, cybersecurity, and information governance.

Kelly is also the CEO of eDiscovery Assistant — a SaaS-based practical resource for litigators handling eDiscovery — that curates discovery decisions, rules, and additional content. She is developing an online academy to provide on-demand education for lawyers and legal support professionals to stay abreast of changes in the law and technology that affect litigation and clients’ obligations to respond.

You can reach Kelly at [email protected], join her Facebook community group at Let’s Talk eDiscovery, or connect with her on Twitter @kellytwigger.
