Sam Bock, Relativity: Deepfakes in E-Discovery: The Scope and Solution of an Emerging Problem


Extract from Sam Bock’s article “Deepfakes in e-Discovery: The Scope and Solution of an Emerging Problem”

Litigation teams’ concerns about generative AI aren’t limited to hallucinations or the differences between human and machine reviews. Alarm bells are ringing at the rising challenge of deepfake evidence: AI-generated data that is fabricated to create “proof” of a lie and is extremely difficult to discern as artificial with the naked eye.

Popular headlines about deepfakes often focus on areas like defamation, intellectual property theft, and fraudulent impersonation. But the risk of deepfake evidence emerging in high-stakes litigation feels increasingly real, and legal experts are actively discussing it.

Claims of deepfake legal evidence are already surfacing in U.S. courts (see: United States v. Guy Wesley Reffitt, United States v. Anthony Williams, United States v. Joshua Christopher Doolin, and Sz Huang et al v. Tesla, Inc. et al). Concerns about deepfakes feel especially acute in this moment, when audio and video evidence are on the rise in litigation. (At Relativity, we ran the numbers and found a 40 percent year-over-year growth in audio files and a 130 percent growth in video files in RelativityOne.)

