
Kangaroo Court: AI Deployment Needs a Smokey the Bear


Artificial Intelligence (AI) can be a difficult concept to grasp. In 2018 the Pew Research Center released the results of a 2017 study that found only 17 percent of 1,500 U.S. business leaders claimed some familiarity with how AI would affect their organization. Executives understood the considerable potential but were not clear on how AI would be deployed within the enterprise or how it would alter their industries.

Within the legal and compliance sphere, the critical metric is risk mitigation. Even if executives understood there was incredible value to gain, sensitivity around the use of enterprise or customer data in technologies like machine learning (ML) would create more questions than the executive team could answer. The increase in efficiency and effectiveness on one task is only as valuable as the safeguards in place to prevent data leakage and contain the ramifications of model bias. Understanding the how and why of a result is just as important as the efficiency gained, if not more so. To satisfy legal and compliance, executive leadership needs to ensure AI initiatives are introduced alongside a ‘guide’ that informs the end user about the consequences of a decision before it is made.

AI offers several advantages in terms of personal convenience, operational efficiency, and decision making. Yet there are concerns about whether AI solutions will invade personal privacy and increase inequality, and AI developments have raised worries about the detrimental consequences of technology innovation for jobs. There are also concerns that end users will lose the ability to control the rapid pace at which these tools analyze and interpret data, resulting in unintended consequences from model bias and data mismanagement. The fears associated with these problems can drive people to disengage from these tools altogether to protect themselves, forfeiting the value the tools can provide.

The ubiquity of AI applications raises several ethical concerns, such as questions of bias, fairness, safety, transparency, and accountability. People worry about the possibility of discriminatory behavior from algorithms, a lack of fairness, limited safety for humans, an absence of transparency about how the software operates, and poor accountability for AI outcomes. Without systems that address these concerns, the worry is that AI will be biased, unfair, or insufficiently transparent. Public anxiety over possible problems has led many non-government, academic, and corporate organizations to put forward declarations on the need to protect basic human rights in AI and ML. These groups have outlined principles for AI development and processes to safeguard humanity. Naturally, the legal teams within organizations are sensitive to even the slightest risk. When there is not enough understanding and consensus about how these systems should be managed, AI projects can grind to a halt.

Legal aversion to risk is a significant challenge to realizing the value of AI across the enterprise. Addressing this fundamental problem requires senior leadership to proactively plan for two key initiatives that work alongside legal to ensure large-scale initiatives can go ahead. The first is education. Best practices and core knowledge around AI can find a natural home within a program of professional development, much like the data security training employees already participate in. AI education can fit neatly into an HR-led initiative that seeks to demystify the technologies and reduce employee anxiety. The second is a digital guide that shows end users the consequences of the actions they take when using AI solutions such as ML. Think of this as a digital Smokey the Bear helping you prevent forest fires within your AI ecosystem.

The idea is simple. When coding documents to build a model, each coding decision is made with the guide showing the user exactly how that decision will impact the rest of the document population (a rough sketch of what such a guide might look like follows below). When working in teams, this can be an incredibly valuable method for managing the ML process. The outcome is a defensible process and a relieved group of employees who can feel confident in their actions, especially when handling sensitive data. Companies should also have a formal code of ethics that lays out their principles, processes, and ways of handling ethical aspects of AI development. Some technology firms have already started to do this, enabling people inside and outside the organization to see how the company is thinking about AI ethics and what goals and objectives it has in place.
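To make the guide idea concrete, here is a minimal sketch in Python of how an "impact preview" might work during document coding. It assumes a scikit-learn-style classifier and hypothetical names (ImpactGuide, preview_impact); it illustrates the concept, not any particular product.

```python
# Illustrative sketch only: ImpactGuide and preview_impact are hypothetical
# names, not taken from any specific product mentioned in this article.
import numpy as np
from sklearn.linear_model import LogisticRegression


class ImpactGuide:
    """Previews how one coding decision would shift model predictions."""

    def __init__(self, features, labeled_idx, labels):
        self.X = np.asarray(features)       # feature matrix for all documents
        self.labeled_idx = list(labeled_idx)  # indices already coded by reviewers
        self.labels = list(labels)            # 1 = responsive, 0 = not responsive

    def _predicted_responsive(self, idx, y):
        # Retrain on the given labels and count predicted-responsive documents
        # among everything not yet reviewed (assumes both classes are present).
        model = LogisticRegression(max_iter=1000)
        model.fit(self.X[idx], y)
        unreviewed = [i for i in range(len(self.X)) if i not in set(idx)]
        return int(model.predict(self.X[unreviewed]).sum()), len(unreviewed)

    def preview_impact(self, doc_id, proposed_label):
        """Describe the projected effect of a coding decision before it is saved."""
        before, total = self._predicted_responsive(self.labeled_idx, self.labels)
        after, _ = self._predicted_responsive(
            self.labeled_idx + [doc_id], self.labels + [proposed_label]
        )
        return (
            f"Coding document {doc_id} as {proposed_label} changes the predicted "
            f"responsive count across {total} unreviewed documents "
            f"from {before} to {after}. Confirm before committing."
        )
```

In use, a reviewer would call something like guide.preview_impact(doc_id, 1) before saving a responsive call, giving the team a recorded, human-readable trail of each decision's projected effect on the wider population.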

Another option is for businesses to set up internal AI review boards that evaluate product development and deployment. Documenting that the process for handling organizational data includes stop-checks to ensure outcomes are intended makes it easy to understand and report on the actions taken with that data, and it provides the organization with an added layer of defensibility in its use of ML solutions. That kind of oversight mechanism would help integrate ethical considerations into company decision making and ensure ethicists are consulted when important decisions are being reached.

The world is facing a series of megachanges just as AI is accelerating. There are large-scale challenges based on climate change, pandemics, income inequality, demographic shifts, geopolitics, governance, and populism, among others. The scale of these developments and the speed at which they are moving create considerable uncertainty in people’s minds about our ability to handle change and address adverse consequences. If income inequality creates societal dysfunction, will it be possible for AI to unfold without being considered part of the problem? Explainable AI is a pathway to avoiding organizational bottlenecks, public backlash, and several other unintended outcomes.

Experts have argued there need to be avenues for humans to exercise oversight and control of AI systems. There must be mechanisms to avoid unfairness, bias, and discrimination. In addition, companies and government agencies must understand there are audit risks, third-party assessments, and costs for noncompliance with existing laws and cherished human values. Education is critical to this effort, but human error is inevitable and further stop-gaps are required. Introducing AI through a mechanism that guides and reports on user-generated decisions within AI systems will help organizations reduce legal risk and maximize value faster.

Learn more about ACEDS e-discovery training and certification, and subscribe to the ACEDS blog for weekly updates.

Chip Delany
STRATEGY DIRECTOR AT LINEAL SERVICES
Strategy Director at Lineal Services; previously worked as a strategist for Legal AI tech firm NexLP and, before that, as a consultant in continuous improvement and labor modelling. Australian national and US permanent resident.
