Joining the debate
As AI gains a foothold in law, skeptics insist that AI cannot and should never replace a seasoned attorney, while proponents counter that AI can take over the majority of legal functions and solve the access-to-justice problem.
AI was originally sold as a way to supplant highly paid attorneys. Specialized attorneys, who craft solutions to complex problems by applying cases, laws and regulations to a particular circumstance or fact pattern, can bill at $150-1,500 per hour. Now the selling strategy is to offer to replace paralegals and fledgling associates, who bill at $30-250 per hour, rather than the attorneys who might purchase such a system.
David Greetham, an eDiscovery Business Unit leader and patent holder for Ricoh USA, uses a different moniker for AI: Intelligent Support Technology, or IST. Greetham believes “the attorneys who embrace AI and Intelligent Support Technology [IST] will powerfully position themselves for success in future law.”
Kelly Twigger, Principal of ESI Attorneys, points to Susan Wojcicki, CEO of YouTube, who announced that YouTube would increase the number of people working to oversee content to more than 10,000 in the following year. “Human reviewers remain essential to both removing content and training machine learning systems because human judgment is critical to making contextualized decisions on content,” she said in a 2017 blog post.
Much ado about AI
In legal practice, software platforms and tools routinely use AI or IST to classify and categorize photos, improve optical character recognition (OCR) and create indices of audio. Document review and production are augmented and organized by algorithms that find near-duplicates, clusters of related documents, timelines and relationship graphs. Documents are drafted using decision trees and document assembly.
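To make the near-duplicate detection mentioned above concrete, here is a minimal sketch of one common approach: breaking each document into overlapping character "shingles" and comparing the sets with Jaccard similarity. This is an illustrative example only, not any particular vendor's implementation; the function names and the 0.8 threshold are assumptions chosen for the sketch.

```python
def shingles(text, k=5):
    """Break text into overlapping k-character shingles (normalized)."""
    t = " ".join(text.lower().split())  # collapse whitespace, lowercase
    return {t[i:i + k] for i in range(max(len(t) - k + 1, 1))}

def jaccard(a, b):
    """Jaccard similarity of two texts: |A ∩ B| / |A ∪ B| over shingle sets."""
    sa, sb = shingles(a), shingles(b)
    return len(sa & sb) / len(sa | sb)

def near_duplicates(docs, threshold=0.8):
    """Return index pairs of documents whose similarity meets the threshold."""
    pairs = []
    for i in range(len(docs)):
        for j in range(i + 1, len(docs)):
            if jaccard(docs[i], docs[j]) >= threshold:
                pairs.append((i, j))
    return pairs
```

Two drafts of the same paragraph that differ only in punctuation score near 1.0 and are flagged as a pair; unrelated documents score near 0 and are left alone. Production systems use hashing tricks (e.g., MinHash) to avoid the pairwise comparison, but the underlying idea is the same.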
However, it is in the synthesis of input and the creation of alternatives that AI/IST will augment an ever-smaller number of human attorneys. In besting the human champion of Go, AI proved that it could handle an astronomical number of permutations. In late 2017, DeepMind’s AlphaGo Zero, armed only with a skeleton of information, the rules of the game and the computing power to play games against itself, became the champion within a month of self-play reinforcement learning.
As AI algorithms are deployed to determine employment, custody, sentencing, immigration and other fundamental decisions, it is important to be able to deconstruct their inputs, algorithmic structures, included datasets and quality control. For example, one visual AI recognized a white hand but failed to recognize a black hand. It is not hard to imagine an algorithm that uses past case data and current laws to entrench past patterns and stall social change and development if left to its own devices. It is also not difficult to imagine an AI optimizing conditions for its own survival over those of humans or other machines.
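As one concrete example of the kind of output auditing called for above, the sketch below computes a disparate-impact ratio, the test behind the EEOC's "four-fifths rule" in U.S. employment law, for a model's selection decisions. The data and function names are hypothetical, invented for illustration.

```python
def selection_rates(outcomes):
    """Compute per-group selection rate from (group, selected) records."""
    totals, selected = {}, {}
    for group, picked in outcomes:
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + (1 if picked else 0)
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_ratio(outcomes, protected, reference):
    """Ratio of the protected group's selection rate to the reference group's.

    Under the four-fifths rule, a ratio below 0.8 flags possible adverse
    impact and warrants closer scrutiny of the algorithm and its data.
    """
    rates = selection_rates(outcomes)
    return rates[protected] / rates[reference]
```

For instance, if group A is selected 80% of the time and group B only 40%, the ratio is 0.5, well below the 0.8 threshold. A failing ratio does not by itself prove the model is unlawful or broken, but it is the kind of quantitative probe that lets reviewers deconstruct an algorithm's behavior rather than trust it blindly.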
It is time for the AI/IST community to build on Isaac Asimov’s Three Laws of Robotics toward a core ethic for artificial intelligence:
“Artificial intelligence may not injure a human being or, through inaction, allow a human being to come to harm. Artificial intelligence must obey orders given it by human beings except where such orders would conflict with the First Law. Artificial intelligence must protect its own existence as long as such protection does not conflict with the First or Second Law.”