Isha Marathe: 4 Ways Judges Can Start Tackling Deepfake Authenticity Challenges

Extract from Isha Marathe’s article “4 Ways Judges Can Start Tackling Deepfake Authenticity Challenges”

Deepfakes, or AI-generated materials (AIM), have shown up in litigation, elections and pop culture at a rate that tracks the rapid growth of the generative AI technology that enables them.

As fast-evolving technology brings with it the real possibility that courts will face a trickle at best, and a barrage at worst, of AI-generated audiovisual materials, a panel of experts argues that judges are on the front line of managing this new era of evidentiary procedure.

In a paper released by the University of Chicago Legal Forum, titled “Deepfakes in Court: How Judges Can Proactively Manage Alleged AI-Generated Material in National Security Cases,” a group of legal scholars, judges and technologists lays out tips, tricks and particular obstacles for the bench as deepfakes—and allegations of deepfakes—become more prominent in courts.

Maura R. Grossman, research professor at the School of Computer Science at the University of Waterloo and one of the paper's eight authors, told Legaltech News that she is working with courts that feel out of their depth in the face of this technological shift.

Read more here
