Extract from Jon Fowler’s article “Is AI-Generated Information Likely to Start Showing Up in Legal Reviews?”
AI has permeated many aspects of the legal field, from streamlining eDiscovery to supporting legal research. As the technology evolves, its ability to produce human-quality text is becoming increasingly refined, and this inevitably raises the prospect of AI-generated content entering legal reviews. In response to this emerging reality, and in the absence of formal regulation, lawyers and eDiscovery professionals must prepare themselves and develop effective strategies for identifying, validating, and integrating AI-generated content into the legal review and eDiscovery process.
Validation and Identification
As AI-generated content becomes increasingly prevalent, lawyers must be able to identify and validate it so it can be handled appropriately, for example by assessing its authenticity and accuracy. AI-generated content carries a risk of misinformation because of its reliance on training data: if that data is biased or incomplete, AI systems may replicate those flaws, producing inaccuracies, inconsistencies, and even outright falsehoods. Bad actors can also deliberately instruct AI systems to produce false or misleading information, further exacerbating the problem. This poses a significant risk to the integrity of legal proceedings and to trust in the legal system.
Lawyers may be able to detect AI-generated content by examining writing style, contextual relevance, and factual accuracy, or by using AI detection tools, but both approaches are nascent and will take time to mature.
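To make the "writing style" signal concrete, the sketch below illustrates two simple stylometric heuristics often discussed in this area: sentence-length variation (human prose tends to be "burstier" than typical model output) and lexical variety. It is a minimal illustration in Python, not any particular vendor's detection tool; the function names and thresholds are invented for the example, and no single heuristic of this kind is reliable on its own.

```python
import re
import statistics

# Illustrative stylometric heuristics only. Real detection tools use
# trained classifiers; these signals merely hint at uniform, repetitive prose.

def sentence_lengths(text: str) -> list[int]:
    """Word counts per sentence, splitting on terminal punctuation."""
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s.strip()]
    return [len(s.split()) for s in sentences]

def burstiness(text: str) -> float:
    """Standard deviation of sentence length. Human writing tends to vary
    more ("burstier") than typical LLM output, which is often uniform."""
    lengths = sentence_lengths(text)
    return statistics.stdev(lengths) if len(lengths) > 1 else 0.0

def type_token_ratio(text: str) -> float:
    """Unique words divided by total words. Low values can indicate
    repetitive, template-like phrasing."""
    words = re.findall(r"[a-z']+", text.lower())
    return len(set(words)) / len(words) if words else 0.0

def flag_for_review(text: str,
                    burstiness_floor: float = 4.0,
                    ttr_floor: float = 0.45) -> bool:
    """Flag a document for closer human review if it scores low on both
    signals. The thresholds are placeholders, not calibrated values."""
    return (burstiness(text) < burstiness_floor
            and type_token_ratio(text) < ttr_floor)
```

In practice, output like this would only route documents to closer human review during eDiscovery triage; it would never stand as evidence of AI authorship on its own.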