Extract from Samuel A. Lewis’ article “The Hacker’s Perspective on Abstract Thought and Hallucination Hazards”
With the rise of chatbots—computer programs designed to simulate human conversation—and now LLMs, many believe that a computer using an LLM could convince an interrogator that it was human and thus pass the Turing Test. GPT-4.5 and other LLMs are already capable of convincingly mimicking human conversation. Can computers think? More to the point, are computers capable of abstract thought? Such questions challenge the very idea of what it means to “think,” and they remain at the heart of what it means to be an “artificial intelligence.”
In a paper published in 1950, Alan Turing posed the question, “Can machines think?” Since “thinking” is difficult to define, however, Turing instead proposed “The Imitation Game,” a three-person game in which an interrogator questions two unseen players, a man and a woman, in an effort to determine which is which. Taking that basic game, Turing added a twist: if one of the two players were replaced with a computer, would the interrogator be able to tell which was the computer? This test became known as the “Turing Test.”