Herbert Roitblat: Claims That Large Language Models Achieve Higher Cognitive Functions Are Bogus

Extract from Herbert Roitblat’s article “Claims That Large Language Models Achieve Higher Cognitive Functions Are Bogus”

All of the conclusions that large language models (GPT-type models) demonstrate advanced cognitive processes (for example, reasoning) are based on faulty logic.

Large language models, such as ChatGPT, GPT-4, and others, have been described as capable of remarkable cognitive abilities, including logical and causal reasoning, common sense, arithmetic, creativity, emotional understanding, theory of mind, narrative understanding, truthfulness, reasoning by analogy, and abstract pattern induction, among many others. These claims are even more remarkable given that the models were explicitly built by having a machine learn to predict the next word given a context. The apparent cognitive mechanisms were explicitly excluded from the modeling, so it is not at all obvious where they could come from. Some say that these cognitive processes emerge from the complexity of the models (presumably the number of parameters and layers), but this putative emergence is little more than the claim that “then a miracle occurs.”

Extraordinary claims should require extraordinary evidence, but the actual evidence is just that the models “pass benchmark tests” of these capabilities, without any consideration of how else the models might pass those tests. Their success on these tasks is an example of the logical fallacy of affirming the consequent.
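For concreteness, the fallacious inference can be written out schematically (this illustration is added here for clarity; the labels R and B are hypothetical and do not appear in the original article):

\[
R \rightarrow B, \qquad B \;\;\therefore\;\; R \qquad \text{(affirming the consequent: invalid)}
\]

where \(R\) stands for “the model reasons” and \(B\) for “the model passes the benchmark.” The conclusion does not follow, because some other antecedent, such as memorization of test-like training data, could equally well produce \(B\).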

Read more here
