The future of AI could hinge on two philosophical concepts

Bing with ChatGPT (Image credit: Microsoft)

Over the past few weeks, AI chatbots have both fascinated and frustrated a wide range of early adopters. Major corporations have lauded them as the next big thing in search engine technology; journalists have tested their accuracy and intelligibility; everyday users have probed the limits of what they know.

Up until now, artificial intelligence has remained within the purview of experts, enthusiasts and the occasional sci-fi fan. Over the next few years, though, it seems entirely possible that AI could become a topic of mainstream discussion — and a contentious one, at that.

To add anything substantive to the discourse, everyday users will have to familiarize themselves with some basic AI concepts. And to do that, they’ll first have to understand whether the chatbots even possess meaningful artificial intelligence in the first place. The question isn’t nearly as straightforward as it may seem.

In spite of what Microsoft may say about Bing with ChatGPT, or what other companies claim about their similar products, calling these services “artificial intelligences” at all is a bit of a misnomer. And to understand why, we need to look outside the technological sphere and into the weird and wonderful world of philosophy. To learn about the limitations of current-gen AI, and whether we’ll ever be able to transcend those limits in the future, it’s helpful to consider two philosophical concepts: solipsism and the Blockhead thought experiment.

The uncertainty of solipsism

Graphical representation of a cybernetic brain (Image credit: Shutterstock)

To delve into the philosophy of AI, I spoke with Cameron Buckner, an associate professor of philosophy and cognitive science at the University of Houston. Dr. Buckner has worked on everything from corvid cognition to metaphysics, but his research into neural networks and machine learning makes him the ideal candidate to discuss artificial intelligence and its relationship with humans.

In a recent piece about Star Trek and AI chatbots, I wrote that one of the primary problems with artificial intelligence is that we can never be absolutely certain whether the entity in question possesses any real cognition. As it turns out, this isn’t a problem exclusively for AI; it affects other people and animals as well. In philosophy, this concept is called “solipsism.”

“Solipsism, at least in this particular context, is the belief — or maybe worry — that I’m the only one with a mind,” Buckner said. “I know my own mental states in some kind of special way that I don’t know the mental states of others. I know that I have a mind in a way that I don’t know other agents have minds. It’s a very interesting philosophical problem that’s been discussed all the way back to ancient Greece.”

Buckner pointed out that while solipsism is a problem for humans and animals, it’s perfectly reasonable to infer that other living beings must have cognitions similar to our own. Even ravens appear to understand that both they and their conspecifics are autonomous agents, according to Buckner’s research.

When it comes to AI, however, the situation seems much less clear. Learning information from another person is a fairly reliable process, since we can reasonably infer that other people have drawn justifiable conclusions based on a logical sequence of events, just like us. AIs don’t learn that way at all; instead, they look for statistical patterns in word sequences and generate responses algorithmically.
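To make that difference concrete, here is a deliberately tiny sketch (in Python, with an invented toy corpus and function names) of the pattern-matching idea: a model that only counts which word tends to follow which in its training text, then “responds” by picking the statistically likeliest continuation. Real large language models use neural networks trained on vastly more data, but the basic principle of predicting plausible next words, rather than reasoning from experience, is the same.

```python
# A toy "language model": it learns nothing about the world, only which word
# tends to follow which in its training text, then parrots the statistically
# likeliest continuation. (Illustrative sketch only; the corpus and function
# names are invented, and real chatbots use neural networks trained on
# vastly more text.)
from collections import Counter, defaultdict

training_text = (
    "the raven is a clever bird . the raven remembers faces . "
    "the chatbot predicts the next word ."
).split()

# Count how often each word follows each other word (a bigram table).
next_word_counts = defaultdict(Counter)
for current, following in zip(training_text, training_text[1:]):
    next_word_counts[current][following] += 1

def continue_text(prompt_word: str, length: int = 5) -> str:
    """Extend a prompt by repeatedly choosing the most common next word."""
    words = [prompt_word]
    for _ in range(length):
        candidates = next_word_counts.get(words[-1])
        if not candidates:
            break
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(continue_text("the"))  # e.g. "the raven is a clever bird"
```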

“In the case of these [chatbot] language models … what they have is a bunch of text. We can give them cues in the text, and text typically does contain such cues about the mental state of the person who’s speaking or writing,” Buckner said. “One of the questions we all have now is, should we be attributing mental states to these language models that are capable of this flexible kind of approximate copying behavior? It looks like they can give appropriate responses to a very, very wide range of social interactions with humans. And, can they infer our mental states when they’re interacting with us, from purely textual evidence? Even though they don’t have the same type of inner mental life that we do.

“We at least know that much,” he asserted. “They don’t have emotions. They don’t have an autobiographical memory in the way we do. They don’t have long-term projects.”

Buckner’s “autobiographical memory” point is perhaps the most salient thing to keep in mind about AIs. Our defense against solipsism regarding other humans — and to some extent, animals — is that we know that each person has a consistent set of memories and experiences that influences their disposition and behaviors. The technical term for this phenomenon is “psychological continuity,” and it’s intricately linked with the idea of consciousness. We are conscious agents because we are aware of our own thoughts and actions in both the present and the past. AI chatbots, by and large, can’t say the same.

“The large language models are trained on the entire text of the Internet,” Buckner said. “They’re not a kind of individual super-agent. It’s rather like a slurry of agents. Like you took hundreds of thousands of coherent agents, and you put them in a blender and stirred it up.”

AI chatbots don’t have humanlike cognition in their current state. But their architecture suggests that even in the future, any kind of inner life, as we understand it, is a dim possibility. If AIs have no psychological continuity, then they can’t reason like living things can, if they can reason at all. And if AIs can’t reason, then any conclusion they draw, apart from trivial regurgitation of information, is suspect.

If solipsism is the fear that no one else has a conscious mind, then the fear appears to be justified for AI chatbots.

A Blockhead’s intelligence

Artificial intelligence (Image credit: Kindel Media)

The Blockhead thought experiment represents another serious hurdle in ascribing agency to AIs. Like solipsism, it challenges us to think about whether other entities have inner lives — and whether it matters if they do.

“The Blockhead thought experiment is this idea going back to the earliest days [of AI] when we saw that you could fool humans into thinking you were intelligent just by having a good stock of canned responses,” Buckner explained. “What if you just scaled that up indefinitely?

“Any conversation you have with one of these systems is going to be finite. There’s a finite number of things you can say to it, and a finite number of things it can say back to you. At least in principle, it could be explicitly programmed as a kind of lookup table. The same way that the kid who doesn’t really want to learn how to do long division and wants to do well on the quiz might just memorize a bunch of common answers … without ever actually learning how to do the long division process. It’s like that, but for everything.”

Most readers have probably heard of the Turing test, which mathematician Alan Turing devised in 1950 to determine whether machines could exhibit humanlike intelligence. Without rehashing the whole experiment here, the idea is that a human judge would hold text conversations with both another person and a computer, then try to determine which participant was which. If the judge could not reliably tell them apart, the computer would pass the test. Whether doing so proved a computer’s “intelligence” is up for debate, but the Turing test is still a useful shorthand for machines that aim to mimic human behaviors.

Ned Block, the philosopher who first proposed the Blockhead experiment (although not under that name), argued that any program with a sufficiently diverse range of responses could reliably pass the Turing test, even though doing so would not demonstrate any kind of actual intelligence. Instead, the program would essentially be an extremely intricate spreadsheet, picking the most “sensible” response based on algorithmic logic.
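To see how little machinery Block’s scenario requires, here is a minimal lookup-table “chatbot” sketched in the same toy spirit (the prompts and responses are invented for the example; this is neither Block’s own formulation nor how modern systems are built). Every reply is canned, so however sensible the output looks, nothing resembling reasoning ever happens.

```python
# A "Blockhead" in miniature: a chatbot whose every reply is a canned entry
# in a lookup table. It can sound perfectly sensible without doing anything
# that resembles reasoning. (Illustrative sketch only; the prompts and
# responses are invented for the example.)
CANNED_RESPONSES = {
    "hello": "Hello! How can I help you today?",
    "what is 2 + 2?": "2 + 2 is 4.",
    "are you intelligent?": "I certainly try my best!",
}

def blockhead_reply(prompt: str) -> str:
    """Look up a pre-written answer for the prompt; never compute one."""
    key = prompt.strip().lower()
    return CANNED_RESPONSES.get(key, "I'm not sure what you mean.")

print(blockhead_reply("Are you intelligent?"))  # -> "I certainly try my best!"
```

Block’s point is that scaling that table up until it covers every possible conversation would change only its size, not the nature of what it’s doing.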

The idea of a program with an essentially infinite cache of stock answers was far-fetched in the early days of AI technology. But now that chatbots are trained on huge swaths of the Internet (and, in some cases, can search it on demand) to craft their responses, what we have sounds an awful lot like a Blockhead computer.

“The Blockhead thought experiment is meant to decisively rebut [the Turing] test as a test for intelligence,” Buckner said. “Just by having canned responses to everything preprogrammed in a lookup table. That is a real threat today with these deep learning systems. It seemed like an ‘in-principle’ threat or a thought-experiment-type threat rather than an actual engineering threat, until we had the systems that have the memory capacity to memorize huge swaths of the Internet.”

Block used this thought experiment to argue for a philosophical concept called “psychologism,” which maintains that the psychological process by which an entity synthesizes information is important. In other words, a disciple of psychologism would argue that a Blockhead computer is not intelligent, because consulting a lookup table is not the same as reasoning through a problem. (Block presented this idea in contrast to another philosophical concept called “behaviorism,” although the two are not always mutually exclusive.)

“[An AI] could have the exact same responses as a human, and yet it’s not intelligent, because it’s not generating them by intelligently processing the information,” Buckner said. “We need to actually probe what’s going on under the hood in these systems to see if they’re doing some intermediate, representational processing in the way that we would.”

Under a psychologistic approach, nothing your AI chatbot tells you is an original idea, even if it comes up with a phrase, song lyric or story concept that no one’s ever used before. With a complex enough algorithm, and a big enough source of information, it can essentially bluff its way past any query without ever applying real reason or creativity.

“Very confident nonsense”

ChatGPT chatbot AI from OpenAI (Image credit: Shutterstock)

To make matters even more complicated, there doesn’t even seem to be a consistent definition of what constitutes artificial intelligence. Microsoft and other companies seem to have settled on defining AI as “procedurally generated responses to naturalistic queries.” A sci-fi fan, on the other hand, might insist that AI must refer to a single autonomous entity, like Lt. Cmdr. Data in Star Trek, or EDI in the Mass Effect trilogy.

“What is artificial intelligence?” asked Buckner, paraphrasing Turing. “What is artificial thought? If you ask the philosophers, they’ll tell you 20 different things, and it’s really hard to see which is the right answer. It’s a very amorphous concept; it seems to be kind of evaluative. Looking in the dictionary doesn’t seem to be the right solution, because the dictionary could be wrong.”

On the other hand, pinning down an exact definition for AI — particularly the “I” — doesn’t seem to be a priority for the companies building this current generation of chatbots. The bots are already out there in the world, ready to interact with people. Still, Buckner believes that even in its current iteration, AI has the potential for serious misuse, from individual impersonation to national propaganda.

“There are a lot of people, even deep learning pioneers … who are very pessimistic about what the current generation of chatbots are useful for, if they’re not actually thinking,” he said. “I’m a little more worried about a lot of misuse of these models. Given a little bit of prompting, you can get it to generate enormous amounts of pretty coherent text. That, you could use for misinformation purposes, you could use for persuasion campaigns, you could use to impersonate people, you could use for phishing. I think the people who say that there aren’t a lot of very dangerous potential misuses of this technology are also very wrong.

“There’s also a lot of fun uses,” Buckner allowed. “I don’t want to downplay the fun uses.”

As an example, a colleague of his — a metaphysician — recently got engaged, and Buckner asked ChatGPT to come up with some funny suggestions for the congratulations card. Buckner called the resulting jokes and puns “brilliant,” and perhaps even better than the ones he would have written himself. He also noted that AI chatbots can write entertaining song lyrics or accurately mimic regional dialects.

However, in their current form, we can’t ascribe any agency to AI chatbots, and we can’t count on them to come up with anything truly original.

“If you’re really asking [an AI] to solve some new problem or do some kind of serious analysis that it can’t just look up on the Internet,” he said, “it’s going to give you very confident nonsense.”

Marshall Honorof

Marshall Honorof is a senior editor for Tom's Guide, overseeing the site's coverage of gaming hardware and software. He comes from a science writing background, having studied paleomammalogy, biological anthropology, and the history of science and technology. After hours, you can find him practicing taekwondo or doing deep dives on classic sci-fi.