Extract from Jerry Bui’s article “AI-Generated Content, Deepfakes and New Data Push the Limits of Civil Procedure”
The line separating reality from artificiality is more blurred than ever. The development of machine learning and artificial intelligence has produced several exciting but troubling phenomena. Specifically, machines are now producing completely new forms of content and data that can be leveraged to create sophisticated deepfakes that strikingly resemble actual people.
Until very recently, the e-discovery process of finding, preserving and analyzing electronic information to uncover facts was centered on two broad types of information: user-generated (email, documents, chat messages, etc.) and system-generated (operating system data, application logs, etc.). Long-established rules and processes exist for how these groups of electronic information are filtered and handled during discovery. These ensure that discovery, review and document productions focus only on the individuals, timeframes and activities relevant to the matter at hand. Yet the emerging data sources that have come into this realm, such as Slack, WhatsApp, Microsoft Teams and other cloud platforms, have created numerous challenges across evidence preservation, collection, analysis, review and production.
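The culling step described above, narrowing a collection to relevant custodians and timeframes before review, can be sketched in a few lines. The document fields and criteria here are hypothetical, purely illustrative of the kind of filtering a discovery workflow applies:

```python
from datetime import date

# Hypothetical document set; real collections carry far richer metadata.
documents = [
    {"custodian": "alice", "sent": date(2023, 3, 1), "type": "email"},
    {"custodian": "bob",   "sent": date(2021, 6, 15), "type": "chat"},
    {"custodian": "alice", "sent": date(2020, 1, 5), "type": "email"},
]

# Matter-specific scope: which people and what window are relevant.
relevant_custodians = {"alice"}
window_start, window_end = date(2022, 1, 1), date(2024, 1, 1)

# Keep only documents from a relevant custodian within the timeframe.
responsive = [
    d for d in documents
    if d["custodian"] in relevant_custodians
    and window_start <= d["sent"] <= window_end
]
```

Here only the first document survives the cull; the same two-axis logic (who, when) underlies the traditional filtering the article refers to, which AI-generated data complicates because authorship and provenance are no longer clear-cut.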
Beyond the rapidly evolving complexities of emerging data, what about wholly new data that is neither user nor system generated, but AI generated, produced either entirely by algorithms or in tandem with human input? How will traditional tools and techniques need to adapt to handle data challenges that digital forensics specialists and lawyers have never before encountered?