Kangaroo Court: Understanding the Digital Challenge

The world is seeing major advances in artificial intelligence (AI) and data analytics, with novel applications in healthcare, education, transportation, e-commerce, and defense, among other areas. These algorithms are improving the way people make decisions and the manner in which organizations function. Yet at the same time, there are concerns regarding the principles embedded within AI and whether algorithms promote human safety and respect basic values. People worry about the lack of transparency, poor accountability, unfairness, and bias in automated tools. With millions of lines of code in each application, it is difficult to know which principles are embedded in software and how algorithms reach decisions. At the same time, the extreme growth in data itself pushes us collectively toward more and more intelligent ways of understanding that data, so that we might discover, learn, and compete in an increasingly complex world.

The ubiquity of AI applications raises several ethical concerns: bias, fairness, safety, transparency, and accountability. People worry that algorithms will behave in discriminatory ways, treat individuals unfairly, compromise human safety, operate without transparency, and leave no one accountable for their outcomes. Without systems that address these concerns, AI risks becoming exactly what people fear: biased, unfair, and opaque.
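
At least one of these concerns can be made concrete: some forms of bias are measurable. The sketch below computes a simple demographic-parity gap, the difference in favorable-outcome rates between two groups, for a hypothetical automated decision tool. The decisions, group labels, and function name are invented for illustration; real fairness audits draw on a much richer set of metrics.

```python
# A minimal illustration of auditing one fairness concern mentioned above:
# demographic parity, the gap in favorable-outcome rates between groups.
# All data here is invented for illustration only.

from collections import defaultdict

def demographic_parity_gap(decisions, groups):
    """Return (gap, per-group approval rates) across groups."""
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for decision, group in zip(decisions, groups):
        totals[group] += 1
        approvals[group] += decision
    rates = {g: approvals[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical model outputs: 1 = approved, 0 = denied.
decisions = [1, 1, 1, 1, 0, 1, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

gap, rates = demographic_parity_gap(decisions, groups)
print(rates)                       # {'A': 0.8, 'B': 0.4}
print(f"parity gap: {gap:.1f}")    # 0.4; a gap this large would merit review
```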

Public anxiety over possible problems has led many nongovernmental, academic, and corporate organizations to put forward declarations on the need to protect basic human rights in AI and machine learning. These groups have outlined principles for AI development and processes to safeguard humanity. Other entities are focusing on how to develop artificial general intelligence and mold it toward beneficial uses. Individuals including Sam Altman, Greg Brockman, Elon Musk, and Peter Thiel, as well as organizations such as YC Research, Infosys, Microsoft, Amazon, and the Open Philanthropy Project, have joined forces to develop OpenAI as a nonprofit AI research company. It defines its mission as “discovering and enacting the path to safe artificial general intelligence.” Its engineers and scientists use open-source tools to develop AI for the benefit of the entire community and have protocols “for keeping technologies private when there are safety concerns.”

The digitization of just about everything – documents, news, music, photos, video, maps, personal updates, social networks, requests for information and responses to those requests, data from all kinds of sensors, and so on – is one of the most important phenomena of recent years. As we move deeper into the second machine age, one defined by intelligent machines, digitization continues to spread and accelerate, yielding some jaw-dropping statistics. This acceleration of data presents significant challenges for both operational capacity and business intelligence, and raises very real concerns about ethical safeguards if our processes for understanding the data do not keep pace with its continued growth.

An exabyte is a ridiculously big number, the equivalent of more than two hundred thousand copies of Watson’s entire database. Yet even this is not enough to capture the magnitude of current and future digitization. Technology research firm IDC estimated that there were 2.7 zettabytes, or 2.7 sextillion bytes, of digital data in the world in 2012; almost half as much existed in 2011. And this data won’t just sit on disk drives; it also moves around. Cisco predicted that global Internet Protocol traffic would reach 1.3 zettabytes, or over 250 billion DVDs of information, by 2016. Its later forecasts show global IP traffic growing threefold from 2016 to 2021, a compound annual growth rate of 24 percent, rising from 96.1 exabytes per month in 2016 to 278.1 exabytes per month in 2021. By 2021, the gigabyte equivalent of all movies ever made was crossing global IP networks every minute.
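
For readers who want to check the arithmetic, the threefold growth and the 24 percent rate follow directly from the two monthly-traffic figures above. A minimal Python sketch, using only the numbers cited in this article:

```python
# Back-of-the-envelope check of the Cisco IP-traffic figures cited above:
# 96.1 exabytes/month in 2016 versus 278.1 exabytes/month in 2021.

start_eb_per_month = 96.1    # global IP traffic in 2016
end_eb_per_month = 278.1     # global IP traffic in 2021
years = 5

growth_multiple = end_eb_per_month / start_eb_per_month
cagr = growth_multiple ** (1 / years) - 1

print(f"Growth, 2016-2021: {growth_multiple:.1f}x")    # ~2.9x, i.e. roughly 3-fold
print(f"Compound annual growth rate: {cagr:.0%}")      # ~24%
```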

As these figures make clear, digitization yields truly big data. In fact, if this kind of growth keeps up for much longer, we’re going to run out of metric-system prefixes. When the set of prefixes was expanded in 1991 at the nineteenth General Conference on Weights and Measures, the largest one adopted was yotta, signifying one septillion, or 10^24. We’re only one prefix away from that in the zettabyte era.
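
For scale, here is a short sketch of the byte prefixes in play; the exponents are the standard SI definitions, and the comparisons echo figures quoted earlier in this article:

```python
# The SI byte prefixes discussed above, with their powers of ten. Yotta
# (10^24) is one step past zetta (10^21), which is the sense in which the
# zettabyte era sits one prefix from the top of the scale set in 1991.

SI_BYTE_PREFIXES = [
    ("exabyte",   18),  # scale of Cisco's monthly IP-traffic figures
    ("zettabyte", 21),  # scale of IDC's estimate of the world's data
    ("yottabyte", 24),  # the largest prefix adopted in 1991
]

for name, exponent in SI_BYTE_PREFIXES:
    print(f"1 {name} = 10^{exponent} bytes = {10**exponent:,} bytes")
```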

The recent explosion of digitization is clearly impressive. A primary force shaping the age of AI is that digitization increases understanding. It does this by making huge amounts of data readily accessible, and data are the lifeblood of science. By “science” here, we mean the work of formulating theories and hypotheses, then evaluating them; or, less formally, guessing how something works, then checking whether the guess is right. Digital information isn’t just the lifeblood of new kinds of science; it’s the second fundamental force (after exponential improvement) shaping the age of intelligent machines because of its role in fostering innovation.

Companies should have a formal code of ethics that lays out their principles, processes, and ways of handling ethical aspects of AI development. Some technology firms have already started to do this, enabling people inside and outside the organization to see how the company is thinking about AI ethics and what goals and objectives it has in place. Another option is for businesses to set up internal AI review boards that evaluate product development and deployment. That kind of oversight mechanism would help integrate ethical considerations into company decision making and ensure that ethicists are consulted when important decisions are made.

The world is facing a series of megachanges just as AI is accelerating. There are large-scale challenges based on climate change, pandemics, income inequality, demographic shifts, geopolitics, governance, and populism, among others. For example, upticks in global temperatures are disrupting weather patterns and creating extreme events such as flooding, drought, hurricanes, and cyclones. During any time of megachange, regardless of whether it takes the form of technology innovation, climate change, pandemics, or something else, there are going to be major challenges in how AI and other emerging technologies are utilized and viewed by the general public. New approaches always have a “dual-use” character that generates both benefits and risks. The simultaneity of these large forces can obscure beneficial features and lead to anger when things go wrong.

Chip Delany
STRATEGY DIRECTOR AT LINEAL SERVICES
Strategy Director at Lineal Services; previously worked as a strategist for legal AI tech firm NexLP and, before that, as a consultant in continuous improvement and labor modeling. Australian national and US permanent resident.
