Toward the end of 2018, Gartner, a global research and advisory firm, surveyed more than 3,000 CIOs to understand trends in digital commerce. The findings provided a comprehensive overview of disruption and business-line activity, including increased investment in IT infrastructure, the digital workforce, and scaling digital business. When asked which technology they expected to be most disruptive, however, CIOs chose Artificial Intelligence (AI) by the largest margin.
This trend is echoed in industry reports from several other groups, including management consulting firms such as McKinsey and Boston Consulting Group, which have invested significant time in collecting data on AI adoption across industries. The Gartner survey told a similar story, finding that 37% of the organizations surveyed had either already deployed AI in their business or planned to do so in the near term.
Commitment from the C-suite is table stakes for getting an organization started on its journey to AI adoption. Without the resources and collective support of the executive team, any AI initiative is likely to fail, or at best remain a pet project for the digitally savvy employees who appreciate it. For firms holding back on exploring and implementing AI solutions across business lines, the great danger is not just being left behind by the competition, but the loss of human capital, as employees who recognize the benefits transfer to like-minded organizations where they feel they can succeed.
It is fair to suggest that any forward-thinking employee considers their development alongside the environment they work in. People want their hands on the best tools so they can make the best decisions and improve their outcomes. If you work in an organization that is overly conservative in its pursuit of advanced technologies, you are likely watching your industry peers accelerate ahead of you in leaps and bounds.
The problem is that many organizations remain frozen – a deer in the headlights – confused about what the technology is and, perhaps more importantly, where to begin. The key is to understand AI as a broad term covering several subsets or types of AI. These subsets can be divided by the type of technology required – some require machine learning, big data, or natural language processing (NLP), for instance. They can also be differentiated by the level of intelligence embedded in an AI machine (more commonly known as a robot).
To keep things simple, I want to zoom out as much as possible and provide an easy-to-digest overview of the field. Below I have listed four core types of AI; they range in complexity from basic real-world applications to world-changing breakthroughs. The list could arguably be shortened to three – artificial narrow intelligence, artificial general intelligence, and artificial super intelligence – however, the topics of AGI and ASI are too broad and ethically dense to list casually without further investigation. It is fair to say they require a discussion all to themselves.
1. Reactive Machines
Reactive machines are the simplest level of AI machine. They cannot create memories or use information learned in the past to influence future decisions – they can only react to presently existing situations.
IBM’s Deep Blue, a machine designed to play chess against a human, is an example of this. Deep Blue evaluates the pieces on a chess board and reacts to them based on pre-coded chess strategies. It does not learn or improve as it plays – hence, it is simply ‘reactive’.
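The defining trait described above can be sketched in a few lines of code: a reactive machine is, in effect, a pure function of the present state. This is only an illustration under invented rules (the state keys and move names here are hypothetical), not Deep Blue’s actual logic.

```python
# Minimal sketch of a reactive machine: the output depends only on the
# current input state, never on anything observed earlier.
# (Illustrative only -- these rules and names are invented, not IBM's.)

def reactive_move(board_state: dict) -> str:
    """Pick a move from the present board alone, using fixed rules."""
    if board_state.get("king_in_check"):
        return "escape_check"
    if board_state.get("capture_available"):
        return "capture"
    return "develop_piece"

# Nothing is remembered between calls: the same input always
# produces the same output, no matter what came before.
print(reactive_move({"king_in_check": True}))      # escape_check
print(reactive_move({"capture_available": True}))  # capture
```

Because no state is carried between calls, the machine cannot improve with experience – exactly the limitation the next type of AI addresses.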
2. Limited Memory
A limited memory machine can retain some information learned from observing previous events or data, and can build knowledge by using that memory in conjunction with pre-programmed data. Self-driving cars, for instance, store pre-programmed data such as lane markings and maps alongside observed information about their surroundings, such as the speed and direction of nearby cars or the movement of nearby pedestrians.
These vehicles can evaluate the environment around them and adjust their driving as necessary. As the technology evolves, machines have also become faster at making these judgements – an invaluable asset in technology as potentially dangerous as self-driving cars. Improvements in machine learning also help autonomous vehicles continue to learn how to drive much as humans do – through experience over time.
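The combination described above – fixed pre-programmed data plus a short buffer of recent observations – can be sketched as follows. This is a toy model with invented names and numbers, not a real autonomous-driving system.

```python
from collections import deque

# Sketch of a limited-memory machine: it blends pre-programmed data
# (a fixed speed limit) with a short rolling memory of observations.
# (Hypothetical example; values and class names are invented.)

class LimitedMemoryDriver:
    def __init__(self, speed_limit: float, memory_size: int = 3):
        self.speed_limit = speed_limit                  # pre-programmed data
        self.recent_speeds = deque(maxlen=memory_size)  # short-lived memory

    def observe(self, nearby_car_speed: float) -> None:
        """Record an observation; old ones fall out of the bounded buffer."""
        self.recent_speeds.append(nearby_car_speed)

    def choose_speed(self) -> float:
        """Keep pace with recently observed traffic, never above the limit."""
        if not self.recent_speeds:
            return self.speed_limit
        avg_traffic = sum(self.recent_speeds) / len(self.recent_speeds)
        return min(self.speed_limit, avg_traffic)

driver = LimitedMemoryDriver(speed_limit=60.0)
for s in [50.0, 54.0, 58.0]:
    driver.observe(s)
print(driver.choose_speed())  # 54.0 -- the average of recent traffic
```

Unlike the reactive machine, the same query can return different answers over time, because the bounded memory changes as new observations displace old ones.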
3. Theory of Mind
Human beings have thoughts, feelings, memories, and other brain patterns that drive and influence their behavior. It is on this psychology that theory of mind researchers base their work, hoping to develop computers able to imitate human mental models – that is, machines that understand that people and animals have thoughts and feelings which affect their own behavior.
It is this theory of mind that allows humans to have social interactions and form societies. Theory of mind machines would need to use information derived from people and learn from it, which would then inform how the machine communicates in, or reacts to, different situations.
4. Self-Awareness
Self-aware AI machines are the most complex we might ever be able to envision, and are described by some as the ultimate goal of AI.
These are machines that have human-level consciousness and understand their own existence in the world. They don’t just ask for something they need; they understand that they need it. ‘I want a glass of water’ is a very different statement from ‘I know I want a glass of water’.
As a conscious being, such a machine would not only know its own internal state but also be able to predict the feelings of others around it. As humans, if someone yells at us, we assume that person is angry, because we understand that is how we feel when we yell. Without a theory of mind, we could not make these inferences about other humans.