Kangaroo Court: Developing Trustworthy AI

AI ethics is a sub-field of applied ethics, focusing on the ethical issues raised by the development, deployment and use of AI. Its central concern is to identify how AI can advance, or raise concerns about, the good life of individuals, whether in terms of quality of life or the human autonomy and freedom necessary for a democratic society.

Ethical reflection on AI technology can serve multiple purposes:

  • It can stimulate reflection on the need to protect individuals and groups at the most basic level.
  • It can stimulate new kinds of innovations that seek to foster ethical values.
  • It can improve individual flourishing and collective wellbeing by generating prosperity, value creation and wealth maximization.

As with any powerful technology, the use of AI systems in our society raises several ethical challenges, for instance relating to their impact on people and society, decision-making capabilities, and safety. If we are increasingly going to rely on the assistance of AI systems, or delegate decisions to them, we need to make sure these systems are fair in their impact on people’s lives, that they are aligned with values that should not be compromised and are able to act accordingly, and that suitable accountability processes can ensure this.

Public anxiety over possible problems has led many nongovernmental, academic, and corporate organizations to put forward declarations on the need to protect basic human rights in artificial intelligence and machine learning. These groups have outlined principles for AI development and processes to safeguard humanity. Academic experts have pinpointed particular areas of concern and ways in which both government and business need to promote ethical considerations in AI development.

Other entities are focusing on how to develop artificial general intelligence and mold it toward beneficial uses. Individuals including Sam Altman, Greg Brockman, Elon Musk, and Peter Thiel, as well as firms such as YC Research, Infosys, Microsoft, Amazon, and the Open Philanthropy Project, have joined forces to develop OpenAI as a nonprofit AI research company. It defines its mission as “discovering and enacting the path to safe artificial general intelligence.” Its engineers and scientists use open-source tools to develop AI for the benefit of the entire community and have protocols “for keeping technologies private when there are safety concerns.”

As a sign of their organizational commitment, several companies have joined together to form the Partnership on Artificial Intelligence to Benefit People and Society. They include Google, Microsoft, Amazon, Facebook, Apple, and IBM. The group seeks to develop industry best practices to guide AI development, with the goal of promoting “ethics, fairness and inclusivity; transparency, privacy, and interoperability; collaboration between people and AI systems; and the trustworthiness, reliability and robustness of the technology.”

Understanding AI Bias

AI bias is an anomaly in the output of machine learning algorithms. It can stem from prejudiced assumptions made during the algorithm development process or from prejudices in the training data. Data that AI systems use as input can have built-in biases, despite the best efforts of AI programmers, and the indirect influence of bias is present in many types of data. For instance, evaluations of creditworthiness are determined by factors including employment history and prior access to credit – two areas in which race has a major impact.
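
To make the proxy problem concrete, the sketch below checks whether a nominally neutral feature is distributed very differently across demographic groups – a simple signal that it may act as a proxy for a protected attribute. The column names (`has_prior_credit`, `race`) and the toy data are assumptions for illustration; this is a minimal check, not a complete fairness audit.

```python
import pandas as pd

def proxy_check(df: pd.DataFrame, feature: str, protected: str) -> pd.Series:
    """Return the rate of a binary feature within each group defined by the
    protected attribute. Large gaps between groups suggest the feature may
    act as a proxy for that attribute."""
    return df.groupby(protected)[feature].mean()

# Hypothetical applicant data; the feature and group names are illustrative only.
applicants = pd.DataFrame({
    "race":             ["A", "A", "B", "B", "B", "A", "B", "A"],
    "has_prior_credit": [1,    1,   0,   0,   1,   1,   0,   1],
})

rates = proxy_check(applicants, "has_prior_credit", "race")
print(rates)                               # per-group rate of prior credit access
print("gap:", rates.max() - rates.min())   # a large gap flags a potential proxy
```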

Biases can also be created within AI systems and then become amplified as the algorithms evolve. AI algorithms are not static. They learn and change over time. Initially, an algorithm might make decisions using only a relatively simple set of calculations based on a small number of data sources. As the system gains experience, it can broaden the amount and variety of data it uses as input and subject those data to increasingly sophisticated processing. An algorithm can end up being much more complex than when it was initially deployed. Notably, these changes are not due to human intervention to modify the code, but rather to automatic modifications made by the machine to its own behavior. In some cases, this evolution can introduce bias. The power of AI to invent algorithms far more complex than humans could create is one of its greatest assets – and, when it comes to identifying and addressing the sources and consequences of algorithmically generated bias, one of its greatest challenges.

Many of the challenges involved in assessing the ethics of AI and ML deal with their “dual-use” nature. By that, I mean that most algorithms and software applications can be used either for good or for ill. As an illustration, facial recognition can be deployed to find lost children or to facilitate widespread civilian surveillance. It is not so much the technology that dictates the moral dilemma, but rather the human use case involved with the particular application. The very same algorithm can serve a variety of purposes, which makes it challenging to assess the ethical impact.

How AI is being deployed represents an interesting opportunity to explore AI ethics because particular uses illustrate concrete implementation dilemmas. Having in-depth knowledge of those applications is important for assessing the extent to which AI raises human safety or bias issues. There are several steps that organizations can take toward fixing bias in AI systems. Some examples include:

  • Fully understanding the algorithm and data to assess where the risk of unfairness is high.
  • Establishing a debiasing strategy that contains a portfolio of technical, operational, and organizational actions:
    • Technical strategy involves tools that can help you identify potential sources of bias and reveal the traits in the data that affect the accuracy of the model (a minimal example follows this list).
    • Operational strategies include improving data collection processes using internal “red teams” and third-party auditors. You can find more practices in Google AI’s research on fairness.
    • Organizational strategy includes establishing a workplace where metrics and processes are transparently presented.
  • As teams identify biases in training data, they should consider how human-driven processes might be improved. Model building and evaluation can highlight biases that have gone unnoticed for a long time. In the process of building AI models, companies can identify these biases and use this knowledge to understand the reasons for them. Through training, process design, and cultural changes, companies can improve the actual process to reduce bias.
  • Defining clear use cases where automated decision making should be preferred and where humans should be involved.
  • Research and development are key to minimizing bias in data sets and algorithms. Eliminating bias requires a multidisciplinary approach involving ethicists, social scientists, and experts who best understand the nuances of each application area. Therefore, companies should seek to include such experts in their AI projects.
  • Diversity in the AI community eases the identification of biases. The people who first notice bias issues are often users from the affected minority community, so maintaining a diverse AI team can help mitigate unwanted AI biases.
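
As a minimal illustration of the technical strategy above, the sketch below compares true positive rates across groups for a set of model predictions, one common signal of unequal treatment. The labels, predictions, group memberships, and the 0.1 tolerance are all assumptions for the example; real debiasing tools, including those described in Google AI’s fairness research, go considerably further.

```python
import numpy as np

def true_positive_rate(y_true, y_pred):
    """Share of actual positives that the model correctly flags."""
    positives = y_true == 1
    return (y_pred[positives] == 1).mean() if positives.any() else float("nan")

def equal_opportunity_gap(y_true, y_pred, group):
    """Per-group true positive rates and the largest gap between any two groups."""
    rates = {g: true_positive_rate(y_true[group == g], y_pred[group == g])
             for g in np.unique(group)}
    return rates, max(rates.values()) - min(rates.values())

# Hypothetical labels, predictions, and group memberships.
y_true = np.array([1, 0, 1, 1, 0, 1, 1, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 0, 0])
group  = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

rates, gap = equal_opportunity_gap(y_true, y_pred, group)
print(rates)
if gap > 0.1:   # illustrative tolerance, not a standard threshold
    print(f"Potential bias: true positive rate gap of {gap:.2f} between groups")
```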

One of the challenges of AI is its basic opaqueness. To help with these kinds of ethical problems, software designers should annotate their systems and maintain AI audit trails that explain how particular algorithms are put together and what kinds of choices were made during the development process. Annotated notes provide “after-the-fact” transparency and explainability to outside parties. The notes help make sense of the concrete decisions that were made and the manner in which the software embedded particular values. Such tools would be especially relevant in cases that end up in litigation and need to be elucidated to judges or juries in the event of consumer harm.
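
One lightweight way to keep such an audit trail, sketched below under assumed field names and an assumed JSON-lines log file (audit_trail.jsonl), is to record each significant development decision as an append-only entry that can later be produced for auditors, judges, or juries.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AuditEntry:
    """One annotated development decision; the field names are illustrative."""
    author: str
    decision: str        # what was chosen (a feature, threshold, dataset, etc.)
    rationale: str       # why it was chosen, including values weighed
    timestamp: str = ""

def append_entry(path: str, entry: AuditEntry) -> None:
    """Append a decision record to a JSON-lines audit trail."""
    entry.timestamp = datetime.now(timezone.utc).isoformat()
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(entry)) + "\n")

# Example record noting a debiasing decision made during development.
append_entry("audit_trail.jsonl", AuditEntry(
    author="data-science-team",
    decision="Excluded ZIP code as a model feature",
    rationale="ZIP code correlates strongly with race; using it risks proxy discrimination",
))
```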

Neither national nor state legislators have passed many bills laying out the rules of the road for AI development, and there have been few other efforts to regulate AI or impose restrictions. The absence of clear policies means that harms will be adjudicated through the legal system. As algorithms make discriminatory decisions or generate adverse consequences for individuals, people will sue to redress their grievances. At the current time, when there are few legal precedents, judges will use product liability laws to evaluate harm and impose penalties in cases of demonstrable damages. In judicial proceedings, having audit trails that explain coding decisions will bring needed transparency to the deliberations.

Chip Delany
Strategy Director at Lineal Services, previously worked as a strategist for Legal AI tech firm NexLP and before that as a consultant in continuous improvement and labor modelling. Australian National and US permanent resident.
