
Kangaroo Court: The Impact of Bias in Facial Recognition Technologies


Artificial Intelligence (AI) bias is an anomaly in the output of machine learning (ML) algorithms. These anomalies can stem from prejudiced assumptions made during the algorithm development process or from prejudices embedded in the training data. Avoiding algorithmic bias is an essential step toward ensuring public confidence and trust in AI systems. Facial recognition technology has been a central feature of debates surrounding the use of AI by the state. The reason for this is obvious: without public trust it will be difficult for democratically elected governments to achieve widespread adoption of AI solutions. The implementation of these advanced surveillance architectures by foreign governments like the Chinese Communist Party (CCP) has revealed an almost Orwellian power in which Big Brother can monitor a member of the public's every move and allocate social scores based on their activity. In order to avoid this nightmare scenario and maintain the trust of its citizenry, Western governments must grow public confidence through policy that combines transparency with education.

There are two core sources of AI bias: cognitive bias and insufficient data. Cognitive biases are, in effect, feelings toward a person or a group based on their perceived group membership. More than 180 human biases have been defined and classified by psychologists, and each can affect individuals as they make decisions. These biases can seep into ML algorithms when designers unknowingly introduce them into the model, or when the training data set itself contains them. If the data is incomplete, it may not be representative of the population and may therefore carry bias. For example, many psychology research studies draw their results from undergraduate students, a specific group that does not represent the whole population.
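To make the "insufficient data" problem concrete, the sketch below is a minimal, hypothetical Python example using synthetic data (the groups, the make_group helper, and the numbers are all illustrative assumptions, not drawn from this article or any real system). It trains a simple classifier on a sample dominated by one group and then shows that the under-represented group ends up with a noticeably worse accuracy.

```python
# Minimal sketch: how a non-representative training sample can produce
# group-dependent error rates. All data here is synthetic and hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Generate a synthetic group whose feature/label relationship differs slightly."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=n) > 2 * shift).astype(int)
    return X, y

# Training data: 95% group A, 5% group B -- an "insufficient data" scenario.
Xa, ya = make_group(1900, shift=0.0)
Xb, yb = make_group(100, shift=1.5)
X_train = np.vstack([Xa, Xb])
y_train = np.concatenate([ya, yb])

model = LogisticRegression().fit(X_train, y_train)

# Evaluate on balanced held-out sets: the under-represented group fares worse.
for name, shift in [("group A", 0.0), ("group B", 1.5)]:
    X_test, y_test = make_group(1000, shift)
    print(name, "accuracy:", round(model.score(X_test, y_test), 3))
```

The point is not the specific numbers but the mechanism: when one group supplies almost all of the training examples, the model's decision rule is effectively tuned for that group alone, and no amount of good intent in the code changes that.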

Once the stuff of sci-fi novelty, facial recognition software is now commonplace in our daily lives. Smartphones use the technology as a basic security feature, airports leverage it in customs and border protection, and social media platforms use image recognition to analyze your network of friends and then suggest you tag those friends in the photos you share. For the CCP, however, personal security is far from the main concern. It is critical that in countries like the United States we educate the public and develop meaningful policy which empowers people to make decisions for themselves. There is a real danger of this technology being used for nefarious purposes, and that danger is greatest when the public is unaware of the risks. As an example, the CCP and the companies it supports have become leading innovators in information control and minority suppression via technology. This network stretches from Xinjiang to Beijing, leveraging every asset it has to regulate the daily lives and beliefs of its people.

In early June, China’s largest tech company, Tencent, will begin using facial recognition technology to keep children from playing video games after 10 p.m., enforcing strict Chinese gaming regulations. This move represents the latest step in China’s authoritarian control over the most minute aspects of its citizens’ lives, as well as a new extension of the controversial facial recognition technology that is already pervasive throughout the country. It is estimated that there are approximately 626 million facial recognition cameras in the country. That’s approximately one camera for every two people in the most populated country on earth. In cities like the nation’s capital, Beijing, these cameras constantly gather data from the street and send that information to be combined with intel gathered by police surveillance. Citizens are then indexed by this intel along with any criminal history, ethnic facial features, and even whether they are seen wearing a mask. Facial recognition software is nothing new in China. Its use has been steadily expanding since 2015, when the Chinese Ministry of Public Security called for the creation of an “omnipresent, completely connected, always on and fully controllable” nationwide video-surveillance network as a public-safety imperative.

Avoiding this nightmare scenario within the United States requires more than adult education and public messaging campaigns across social media. Both public and private institutions should invest resources toward developing a framework for trustworthy AI. This technology is one of the most transformative forces of our time and is bound to alter the fabric of society. It presents a great opportunity to increase prosperity and growth, which the US must strive to achieve. Given that, on the whole, the benefits of narrow (as opposed to general) AI outweigh its risks, we must follow a road that maximizes the benefits of AI while minimizing the risks. Trustworthy AI has two components: (1) it should respect fundamental rights, applicable regulation, and core principles and values, ensuring an “ethical purpose”; and (2) it should be technically robust and reliable, since even with good intentions, a lack of technological mastery can cause unintentional harm.

An example of AI bias gone wrong was discovered by Amazon, when the tech giant found critical problems with its AI-driven recruiting tool. Amazon started the project in 2014 with the dream of automating the recruitment process. The tool reviewed applicants’ resumes and rated each applicant algorithmically, in order to save recruiters time by reducing the overall number of applicants to a digestible amount for human eyes. However, in 2015, Amazon realized that the algorithm was demonstrating bias against women. By using historical data from the previous 10 years of recruitment to train the model, Amazon was unknowingly training it on data that was biased against women, simply due to the dominance of men within the tech industry. The algorithm would penalize resumes that included the word “women’s”, as in “women’s chess club captain.” Once identified, the algorithm was scrapped.
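One way a problem like this surfaces in practice is through a simple audit that scores otherwise-identical documents differing by a single gendered token. The sketch below is a hypothetical Python illustration with toy resumes, toy hiring outcomes, and an off-the-shelf bag-of-words model; it is not Amazon's system, but it shows how a model trained on skewed historical outcomes can learn to penalize that token.

```python
# Minimal sketch of a token-level bias audit for a resume-scoring model.
# The resumes, labels, and model below are hypothetical illustrations,
# not Amazon's actual system.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Toy historical data: labels mimic a past hiring pattern that skewed male,
# so the token "women" co-occurs only with rejections.
resumes = [
    "captain of the chess club, python developer",
    "women's chess club captain, python developer",
    "java engineer, hackathon winner",
    "women's soccer team, java engineer",
    "python developer, open source contributor",
    "women's coding society lead, open source contributor",
]
hired = [1, 0, 1, 0, 1, 0]  # biased historical outcomes

vec = CountVectorizer()  # default tokenization keeps "women" as a feature
X = vec.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# Audit: score otherwise-identical resumes that differ by a single gendered token.
pair = ["chess club captain, python developer",
        "women's chess club captain, python developer"]
scores = model.predict_proba(vec.transform(pair))[:, 1]
print("without token:", round(scores[0], 3))
print("with 'women's':", round(scores[1], 3))  # lower score reveals the learned penalty
```

Paired (counterfactual) audits of this kind are a common, lightweight way to surface learned penalties before a model ever reaches production.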

I see nothing nefarious in Amazon’s mishap. Like all efforts at innovation, this was a project seeking to reduce repetition in a cost center of the organization and to speed the identification and hiring of quality candidates. By identifying the issue and scrapping the algorithm, Amazon acted in the best way possible and will likely produce a far superior solution in the coming years. These are the growing pains of innovation at work. What matters is the context and intention surrounding the innovation, and learning through the process where bias exists and how to identify its various forms. Yet the realization of this error is far from enough to gain public support, let alone trust. There is already a significant trust problem around big tech. Regular people have a hard time swallowing any notion that tech billionaires truly care about their lives. Changing the narrative requires a fundamental shift in education. AI as a concept should be introduced into middle- and high-school curricula as a standard feature of math, ethics, history, and science. A significant increase in public education at a foundational level will be the surest method of building a society that truly understands what AI is and how its risks can impact lives.

Learn more about ACEDS ediscovery training and certification, and subscribe to the ACEDS blog for weekly updates.

Chip Delany
STRATEGY DIRECTOR AT LINEAL SERVICES
Strategy Director at Lineal Services; previously worked as a strategist for legal AI tech firm NexLP, and before that as a consultant in continuous improvement and labor modelling. Australian national and US permanent resident.
