
Kangaroo Court: Accountability Equals Equity


If artificial intelligence keeps improving, automating ever more jobs, what will happen? Many people are job optimists, arguing that the automated jobs will be replaced by new ones that are even better. After all, that is what has always happened before, ever since the Luddites worried about technological unemployment during the Industrial Revolution. Others, however, are job pessimists and argue that this time is different, and that an ever-larger number of people will become not only unemployed, but unemployable. The job pessimists argue that the free market sets salaries based on supply and demand, and that a growing supply of cheap machine labor will eventually depress human salaries far below the cost of living. Since the market salary for a job is the hourly cost of whoever, or whatever, will perform it most cheaply, salaries have historically dropped whenever it became possible to outsource a particular occupation to a lower-income country or to a cheap machine.

To overcome this problem, the model employee of tomorrow will require a substantial grasp of the economics of data. At a base level, they must understand the key types of AI and how these tools can be developed and deployed to solve tomorrow’s challenges, regardless of function or industry. This is analogous to the way boomers and Gen X developed typing skills: they were exposed to personal computers and the internet during the foundational periods of their careers. Indeed, the tools we now take for granted occupied the space for them that AI does for the millennial of today. Like the boomer in their early thirties learning to type 120 words per minute and navigate the first Windows operating systems, millennials must look at AI as an opportunity to increase their relevance in the fast-approaching world of tomorrow.

The point here is not to learn coding or to understand the math behind deep learning algorithms (although neither would hurt). The point is to approach AI as an opportunity and to improve understanding through explainability. Understanding the conceptual soundness of any model, tool, application, or system aids in managing its risks, including those related to lack of explainability. The importance of conceptual soundness is described in regulatory agency guidance and is well established in legal practice. For traditional approaches, conceptual soundness is foundational to both development and validation/independent review. In the case of certain less transparent AI approaches, however, evaluating conceptual soundness can be complicated.

Legal industry professionals looking to grow their careers should take the AI challenge head-on, engaging their leadership and seeking accountability by owning, or at the very least becoming part of, any AI initiative or industry discussion. As a starting point, employees should independently seek to understand the building blocks of AI: natural language processing, computer vision, speech processing, machine translation, pattern recognition, knowledge engineering, robotics, machine learning, and planning, scheduling, and optimization. As AI has become a more significant driver of economic activity, there has been increased interest from people who want to understand it and gain the necessary qualifications to work in the field. At the same time, rising AI demand from industry is tempting more professors to leave academia for the private sector.

The underlying theory and logic of certain AI systems may be less accessible to users than that of traditional approaches or more transparent AI approaches. Without insight into an approach’s general operating principles, executive and departmental leadership may not be able to evaluate with confidence how the system will function in unforeseen circumstances. To address the lack of explainability of certain AI approaches, researchers have developed techniques to help explain individual predictions or categorizations. These techniques are often referred to as “post-hoc” methods because they are used to interpret a model’s outputs rather than its design. For the employee of tomorrow, specialist knowledge must lead their career navigation in order to remain relevant. Without specialist knowledge it will become increasingly hard to gain accountability over strategy or process. Universities have started to acknowledge that their own relevance is tied to this fundamental truth as the economy starts to benefit from the efficiency gains delivered by autonomous solutions.
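To make the idea of a “post-hoc” method concrete, here is a minimal sketch using permutation feature importance, one common technique of this kind: it probes a trained model by shuffling each input feature and measuring how much accuracy drops, interpreting the model’s outputs without opening up its design. The dataset and model below are hypothetical stand-ins, not any particular legal AI product.

```python
# A minimal sketch of a post-hoc explanation technique: permutation
# feature importance. The data and model are hypothetical stand-ins.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a real dataset (e.g., documents scored for relevance).
X, y = make_classification(n_samples=1000, n_features=8, n_informative=3,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A relatively opaque model: accurate, but hard to inspect directly.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Post-hoc step: shuffle one feature at a time and measure how much the
# model's accuracy drops. Large drops flag the inputs the model leans on.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature {i}: importance {result.importances_mean[i]:.3f}")
```

The appeal of such methods for leadership is that they require no access to the model’s internals, so the same review process can be applied to any vendor tool that exposes predictions.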

The AI Index survey conducted in 2020 suggested that the world’s top universities have increased their investment in AI education over the past four years. The number of courses that teach students the skills necessary to build or deploy a practical AI model has increased by 102.9% at the undergraduate level and 41.7% at the graduate level. More AI PhD graduates in North America chose to work in industry in the past 10 years, while fewer opted for jobs in academia, according to an annual survey from the Computing Research Association (CRA). The share of new PhDs who chose industry jobs increased by 48% in the past decade, from 44.4% in 2010 to 65.7% in 2019. By contrast, the share of new AI PhDs entering academia dropped by 44%, from 42.1% in 2010 to 23.7% in 2019.
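Note that the 48% and 44% figures are relative changes, not percentage-point differences, which the raw shares make easy to misread. A quick check, with the survey’s numbers hard-coded purely for illustration:

```python
# Verify that the CRA figures are relative changes, not percentage-point
# differences (shares are percentages of new PhDs in each year).
industry_2010, industry_2019 = 44.4, 65.7
academia_2010, academia_2019 = 42.1, 23.7

print((industry_2019 - industry_2010) / industry_2010)   # ~0.48, i.e. +48%
print((academia_2019 - academia_2010) / academia_2010)   # ~-0.44, i.e. -44%
```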

In the last 10 years, AI-related PhDs have gone from 14.2% of all CS PhDs granted in the United States to around 23% as of 2019. At the same time, other previously popular CS specializations have declined, including networking, software engineering, and programming languages/compilers, all of which saw a reduction in PhDs granted relative to 2010, while AI and robotics/vision specializations saw a substantial increase. So, what career advice should we give future generations of children? One answer could be to pursue professions that machines are currently bad at, and that therefore seem unlikely to get automated soon. Recent forecasts of when various jobs will be taken over by machines identify several useful questions to ask about a career before deciding to educate oneself for it. For example:

  • Does it require interacting with people and using social intelligence?
  • Does it involve creativity and coming up with clever solutions?
  • Does it require working in an unpredictable environment?

Theoretically, the more of these questions that can be answered with a yes, the better the career choice is likely to be. In contrast, jobs that involve highly repetitive or structured actions in predictable settings are not likely to last long before getting automated away. Computers and industrial robots took over the simplest such jobs long ago, and improving technology is in the process of eliminating many more, from telemarketers to warehouse workers, cashiers, train operators, bakers, and line cooks. Drivers of trucks, buses, taxis, and Uber/Lyft cars are likely to follow soon. There are many more professions (including credit analysts, loan officers, bookkeepers, and tax accountants) that, although not on the endangered list for full extinction, are getting most of their tasks automated and will therefore demand far fewer humans.

Right now, we face the choice of whether to start an AI arms race and questions about how to make tomorrow’s AI systems bug-free and robust. If AI’s economic impact keeps growing, we must also decide how to modernize our laws and what career advice to give kids so they can avoid soon-to-be-automated jobs. If AI progress continues to human levels, then we also need to ask ourselves how to ensure that it is beneficial, and whether we can or should create a leisure society that flourishes without jobs. This raises the further question of whether an intelligence explosion or slow-but-steady growth could propel AGI far beyond human levels. If we can figure out how to grow our prosperity through automation without leaving people lacking income or purpose, then we have the potential to create a fantastic future with leisure and unprecedented opulence for everyone who wants it.

Learn more about ACEDS e-discovery training and certification, and subscribe to the ACEDS blog for weekly updates.

Chip Delany
STRATEGY DIRECTOR AT LINEAL SERVICES
Strategy Director at Lineal Services, previously worked as a strategist for legal AI tech firm NexLP and before that as a consultant in continuous improvement and labor modelling. Australian national and US permanent resident.
