Kangaroo Court: Ideas for Creating Artificial General Intelligence: Integrated Information Theory

The ultimate endgame in the development of artificially intelligent (AI) systems is the creation of intelligent solutions that can take in data from a variety of sources, process (understand) it, and perform multiple output operations. Imagine you are sitting on your couch at home next to a device with Siri activated. However, instead of requesting simple tasks like adding almond milk to your shopping list, you could have a substantive conversation about the meaning of life, experiencing a dialogue with the Siri application indistinguishable from the kind you would have with a close friend or loved one.

Using the world’s knowledge, a generally intelligent Siri could reach into the depths of the internet in a heartbeat and assess all available information about cosmology, metaphysics, psychology, free will, and spirituality. This experience would be the dawn of an innovation age in which you would no longer need to spend $400 an hour visiting your psychiatrist. Diagnostics could be performed by an interactive intelligence like Siri from the comfort of your living room, at a fraction of the price. There would also be no need to travel across town, battle to find parking, and sit in a waiting room until your therapist is ready.

The benefits of this technology would go beyond removing the need to commute: it would ease the time limits on therapy sessions and make valuable mental health services available to anyone with an internet connection. Suddenly the challenge of sourcing mental health professionals for lower-income communities would be a problem of the past. Families in the poorest neighborhoods would have access to world-class therapy, provided they had an internet connection. Of course, this application of future generally intelligent systems is only the tip of the iceberg.

A more apt analogy would be to compare the intelligence of an ant to a human being. Ants are much better than humans at lifting things. Leafcutter ants can lift up to 50 times their bodyweight and carry it back to the colony. They are simply much better at performing this narrow function than we are. For example, I weigh 240 pounds. If I were to perform this narrow function at the standard of a leafcutter ant, I would be able to lift 12,000 pounds above my head and casually walk it several miles home. For my friends outside of the United States, the conversion is around five and a half metric tons. The endgame of AI would be to flip positions, so to speak. In the new analogy, we would be the ant (intellectually speaking). Electrical circuits can compute information around a million times faster than biochemical ones.

To clarify, we currently live in the world of narrow AI. Sometimes this is referred to as weak AI, but I prefer the term narrow given that many of the achievements over the past decade have been narrowly powerful. It seems confusing to use language that describes the efforts of Google’s DeepMind as both weak and powerful. Narrow AI specializes in one area. It is machine intelligence that equals or exceeds human intelligence or efficiency, but only in one specific domain. Smartphone apps, spam filters, Google Translate, and Google Search are all examples of narrow AI. Beyond the world of narrow AI we enter the realm of Artificial General Intelligence (AGI). AGI refers to a computer that is as smart as a human across the board, one that can perform any intellectual task a human being can. Just a hell of a lot faster.

Finally, it is worth noting one final frontier that is as fascinating as it is controversial in many academic and theoretical circles: Artificial Super-Intelligence (ASI). ASI is an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom, and social skills. ASI ranges from a computer that is just a little smarter than a human to one that is trillions of times smarter, across the board. Billions of dollars are being invested in this direction for several reasons, the most obvious of course being the power of an all-knowing supercomputer that can perform any action following a simple command. It is unclear how one would control an intelligence so much further advanced than our own. To revert to my earlier analogy, who among us would listen to the commands of a leafcutter ant? The risk here is a self-aware intelligence that can improve upon its own coding. In the AI world this runaway process is often referred to as an intelligence explosion, or the “Singularity”. Like the atom bomb, there would be no going back. So let’s have a brief look at one of the current efforts to bridge the gap from a world of narrow AI to the future of generally intelligent AGI.

In 2004, neuroscientist Giulio Tononi proposed a new theory he termed Integrated Information Theory (IIT). In his seminal paper on the subject, “An information integration theory of consciousness”, Tononi attempts to explain what consciousness is and why it might be associated with certain physical systems. Given any such system, the theory predicts whether that system is conscious, to what degree it is conscious, and what experience it is having. According to IIT, a system’s consciousness is determined by its causal properties and is therefore an intrinsic, fundamental property of any physical system.

Tononi’s theory begins from the observation that consciousness poses two main problems:

  1. Understanding the conditions that determine to what extent a system has conscious experience.
  2. Understanding the conditions that determine what kind of consciousness a system has.

Tononi’s hypothesis presents a theory of what consciousness is and how it can be measured. According to his theory, consciousness corresponds to the capacity of a system to integrate information. This claim is motivated by two key phenomenological properties of consciousness, illustrated in the sketch after this list:

  1. Differentiation – the availability of a very large number of conscious experiences.
  2. Integration – the unity of each such experience.
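To make these two properties concrete, here is a minimal Python sketch of my own (not from Tononi’s paper). It treats differentiation as the Shannon entropy of a system’s state repertoire and integration as the mutual information shared between its parts; the sample data and function names are illustrative assumptions only.

```python
import math
from collections import Counter

def entropy(samples):
    """Shannon entropy in bits; a larger repertoire means more differentiation."""
    counts = Counter(samples)
    total = sum(counts.values())
    return -sum(c / total * math.log2(c / total) for c in counts.values())

def mutual_info(xs, ys):
    """I(X;Y) = H(X) + H(Y) - H(X,Y): information the parts share (integration)."""
    return entropy(xs) + entropy(ys) - entropy(list(zip(xs, ys)))

# Two hypothetical parts of a system: part Y perfectly tracks part X.
xs = [0, 1, 0, 1, 1, 0, 0, 1]
ys = xs[:]  # fully coupled to xs

print(entropy(xs))          # 1.0 bit of differentiation in part X
print(mutual_info(xs, ys))  # 1.0 bit of integration between the parts
```

A system scoring high on both counts has many possible states and parts that cannot be described independently of one another, which is exactly the combination the theory associates with consciousness.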

The theory states that the quantity of consciousness available to a system can be measured as the Φ (phi) value, the amount of integrated information, of its elements. Φ is the amount of causally effective information that can be integrated across the informational weakest link of a subset of elements. A complex is a subset of elements with Φ>0 that is not part of a subset of higher Φ. The theory also claims that the quality of consciousness is determined by the informational relationships among the elements of a complex, which are specified by the values of effective information among them. Finally, each conscious experience is specified by the value, at any given time, of the variables mediating interactions among the elements of a complex.
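Tononi’s actual Φ is defined through effective information under maximum-entropy perturbations, which is more machinery than a short example can carry, but the “weakest link” idea can be sketched. The toy below, entirely my own simplification, enumerates every bipartition of a small deterministic network, measures how much information crosses each cut, and takes the minimum; the XOR and copy dynamics are hypothetical illustrations.

```python
import itertools
import math
from collections import Counter

def entropy(samples):
    """Shannon entropy in bits of a sequence of hashable states."""
    counts = Counter(samples)
    total = sum(counts.values())
    return -sum(c / total * math.log2(c / total) for c in counts.values())

def mutual_info(xs, ys):
    """I(X;Y) = H(X) + H(Y) - H(X,Y) from paired samples."""
    return entropy(xs) + entropy(ys) - entropy(list(zip(xs, ys)))

def toy_phi(n_nodes, step):
    """Minimum cross-partition information over all bipartitions (a toy Φ)."""
    states = list(itertools.product([0, 1], repeat=n_nodes))  # uniform repertoire
    succ = [step(s) for s in states]                          # each state at t+1
    take = lambda seq, idx: [tuple(s[i] for i in idx) for s in seq]
    phi = math.inf
    for r in range(1, n_nodes):
        for part_a in itertools.combinations(range(n_nodes), r):
            part_b = tuple(i for i in range(n_nodes) if i not in part_a)
            # How much each half tells us about the other half's next state.
            ei = (mutual_info(take(states, part_a), take(succ, part_b)) +
                  mutual_info(take(states, part_b), take(succ, part_a)))
            phi = min(phi, ei)  # keep the informational weakest link
    return phi

# Integrated toy dynamics: each node becomes the XOR of the other two.
xor_net = lambda s: (s[1] ^ s[2], s[0] ^ s[2], s[0] ^ s[1])
# Disconnected toy dynamics: each node simply copies its own previous state.
copy_net = lambda s: s

print(toy_phi(3, xor_net))   # 1.0 -> Φ > 0, a candidate "complex"
print(toy_phi(3, copy_net))  # 0.0 -> independent parts integrate nothing
```

The XOR network earns a positive score because no cut can separate it without losing information about its next state, making the whole a candidate “complex” in the theory’s vocabulary, while the copy network scores zero because its parts never causally touch.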

According to Tononi, IIT accounts, in a principled manner, for several neurobiological observations concerning consciousness. These include the association of consciousness with certain neural systems rather than with others; the fact that neural processes underlying consciousness can influence or be influenced by neural processes that remain unconscious; the reduction of consciousness during dreamless sleep and generalized seizures; and the time requirements on the neural interactions that support consciousness. The theory implies that consciousness is a fundamental quantity, that it is graded, that it is present in infants and animals, and that it should be possible to build conscious artifacts.

In later work, such as his 2005 paper “Consciousness, information integration, and the brain”, Tononi develops his position by describing clinical observations that he claims have established that certain parts of the brain are essential for consciousness whereas other parts are not. For example, different areas of the cerebral cortex contribute different modalities and submodalities of consciousness, whereas the cerebellum does not, despite having even more neurons. Tononi also notes that it is well established that consciousness depends on the way the brain functions. For example, consciousness is much reduced during slow-wave sleep and generalized seizures, even though the levels of neural activity are comparable to or higher than those in wakefulness.

Tononi’s theory has attracted interest and criticism alike. Some computer scientists argue that approaching consciousness from a neurological perspective is the wrong course of action, and that a more pragmatic engineering approach that seeks to integrate input/output functions will breed more robust innovation and reliable long-term success. However, this would not solve the riddle of AGI until a solution is created that can truly operate at human levels. That is, one that can pass the test proposed by English cryptographer and mathematician Alan Turing, but on a practical level that satisfies pragmatic buyers. IIT is a good start on a problem that presents both risk and opportunity at a level we have yet to experience in the digital age. The step from narrow AI to AGI is a monumental task, and one that may not be achieved for some time. The real question is determining the rate of self-improvement that occurs after AGI is achieved, which has AI ethicists and the likes of Elon Musk, Nick Bostrom, and Sam Harris holding up warning signs. I encourage you to follow the links and listen to their arguments for yourself. If we cannot think at the rate of an AI system, how can we consider all the risk variables associated with its own development and learning apparatus? Narrow AI is wonderful, but many years from now (hopefully), we will need to approach challenges like AGI with great care.

Chip Delany
Strategy Director at Lineal Services
Strategy Director at Lineal Services, previously worked as a strategist for legal AI tech firm NexLP, and before that as a consultant in continuous improvement and labor modelling. Australian national and US permanent resident.