Relating the Chinese Room thought experiment to LLMs.

There is an interesting discussion to be had about LLMs. How is it that what's essentially a next-word predictor captures so much of human experience, mimics knowledge so well, and even gives the impression that it can learn? There are many debates over whether what we're witnessing can be described as intelligence.

Funnily enough, many of these discussions have taken place before. I am thinking of John Searle's Chinese Room, a thought experiment he proposed as a rebuttal to the Turing test. In it, a person who doesn't understand Chinese is locked in a room with a large rule book and a supply of symbols. They receive Chinese symbols, follow the rules to manipulate them, and send out a response in Chinese. From the outside, it appears the room understands Chinese. Yet the person inside has no comprehension; they are merely manipulating symbols according to instructions, much like a computer running a program.
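The procedure Searle describes is pure symbol manipulation, and it can be sketched in a few lines of code. This is just an illustration — the rule book here is a made-up lookup table, not anything from Searle's paper:

```python
# A minimal sketch of the Chinese Room: the operator mechanically matches
# incoming symbols against a rule book and emits the prescribed response,
# with no notion of what any symbol means. (Example phrases are invented.)

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",   # "How are you?" -> "I'm fine, thanks."
    "你会中文吗？": "会一点。",     # "Do you speak Chinese?" -> "A little."
}

def chinese_room(symbols: str) -> str:
    # Follow the rule book; fall back to a stock reply for unknown input.
    return RULE_BOOK.get(symbols, "请再说一遍。")  # "Please say that again."

print(chinese_room("你好吗？"))  # a fluent reply, produced without understanding
```

The point survives the simplicity: from outside, the function's output is indistinguishable from that of a speaker, even though nothing inside it "knows" Chinese.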

Many people smarter than me have offered retorts to this argument. But philosophically, I wonder whether it really matters if the program is merely manipulating symbols or genuinely understands what it says. If your goal is to communicate with the program in fluent Chinese, then it fulfils its purpose. The same goes for LLMs.