Herbert Roitblat: Language Models Not Artificial General Intelligence

Extract from Herbert Roitblat’s article “Language Models Not Artificial General Intelligence”

When distinguished artificial intelligence experts like Peter Norvig and Geoffrey Hinton make claims that large language models are approximating or beginning to demonstrate artificial general intelligence, one has to take these claims seriously, at least at some level.  In what possible universe could they be right?  In other words, what theory would license such claims?  How could a model that is built to predict the next word be considered an example of general intelligence?

Arcas and Norvig list some of the achievements that support their view that large language models are at least primitive examples of artificial general intelligence. The core of their claim is that these models “perform a wide variety of tasks without being explicitly trained on each one.”  What they fail to recognize is that all of the varied tasks they claim these models can solve are variants on the same task:  Given the context of previous words, predict the next word.  The models appear to be solving multiple tasks, but they are actually only solving the word-prediction one.  Give the model the “right” context, and it will produce the “right” response, where “right” means similar to a pattern learned during training.
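To make the point concrete, here is a minimal sketch (my own illustration, not code from the article) in which a toy bigram model “answers” two superficially different prompts by running the identical operation both times: look up count-based conditional probabilities and repeatedly predict the next word. The toy corpus, the `predict_next` and `generate` helpers, and the prompts are all invented for illustration; a real large language model differs in scale and in how the conditional distribution is computed, not in the nature of the task.

```python
from collections import Counter, defaultdict

# Toy "training corpus"; a real model is trained on billions or trillions of tokens.
corpus = "translate hello to french bonjour . the capital of france is paris .".split()

# Estimate conditional probabilities P(next word | previous word) by counting bigrams.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(context):
    """Return the most probable next word given the last word of the context."""
    last = context[-1]
    if last not in counts:
        return "."  # fall back when the context was never seen in training
    return counts[last].most_common(1)[0][0]

def generate(prompt, steps=3):
    """Answer any 'task' by repeating the one operation: predict the next word."""
    words = prompt.split()
    for _ in range(steps):
        words.append(predict_next(words))
    return " ".join(words)

# Two prompts that look like different tasks (translation vs. factual Q&A);
# the model runs the identical procedure for both.
print(generate("translate hello to"))        # -> translate hello to french bonjour .
print(generate("the capital of france is"))  # -> the capital of france is paris . the
```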

The “multiple” part of multiple tasks is in the eye of the beholder.  The language model just solves the single narrow task on which it was trained.  If an example of the problem, or a similar one, exists in the billions or trillions of tokens on which the model was trained, then that is all that is needed to solve what some people think of as multiple problems.  Occam’s razor recommends adopting the simpler theory: that the conditional probabilities of the language model are enough to produce the observed result.  Neither artificial general intelligence nor any other cognitive process is needed.
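For reference, the “conditional probabilities of the language model” can be written out explicitly. This is the standard autoregressive formulation, not an equation from the article: the model assigns each candidate next word a probability conditioned on the preceding words, and generation repeatedly selects from that distribution (the second expression shows greedy selection, the simplest decoding rule; sampling from the distribution is another).

```latex
% Standard autoregressive factorization (illustrative, not taken from the article):
% the probability of a word sequence is the product of next-word conditional
% probabilities, and generation picks each next word from that conditional distribution.
\[
P(w_1, \dots, w_T) = \prod_{t=1}^{T} P\!\left(w_t \mid w_1, \dots, w_{t-1}\right),
\qquad
\hat{w}_t = \arg\max_{w} P\!\left(w \mid w_1, \dots, w_{t-1}\right).
\]
```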

