Return to the Chinese Room

In an earlier article, I had this to say about the famous Chinese Room of John Searle:

John Searle asks the following question: suppose you had a room that could pass the Turing Test. Written questions in Chinese are passed into the mail slot of a room, and, after a while, a written answer comes out, and the Chinese reader is satisfied that the answers are intelligent. Inside the Chinese room, however, is nothing but a series of filing cabinets containing cards on which are written Chinese characters, and a notebook or set of notebooks with a set of rules. In the room is a man who does not read Chinese. The rules tell the man, when he sees a note whose first ideogram is a (to him) meaningless squiggle of a certain shape, to go to a specific cabinet, open a certain file, go to a certain page, copy the character written there, go to another page, copy that character, and so on. The rules can be as complicated as you like. If the man sees a second ideogram of such-and-such a squiggle, he is to go not to file A but to file B, open folder 1, copy page 3, and so on.

We can easily imagine the opening of any such bit of “Chinese Room” dialog. If ideogram A means “How are you?” open file 1, page 1, where is written ideogram B, which means “I am fine; how are you?” To the man in the room, the conversation is without any understanding. Ideogram A provokes reaction B. That is all the dialog means to the man. To the Chinese speaker, however, the Chinese Room seems quite polite. When you ask it “How are you?” the empty room replies “I am fine; how are you?”
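To make the mechanism concrete, here is a minimal sketch of the room in Python, as nothing but table lookup. The particular ideograms, replies, and fallback line are hypothetical stand-ins of my own choosing; the essential point is that every answer was written in advance by whoever stocked the cabinets, and the lookup itself understands nothing.

```python
# A minimal sketch of the Chinese Room as pure table lookup.
# The "filing cabinets" hold replies written out ahead of time by the
# (absent) Chinese speaker who stocked the room; the ideograms and
# replies below are hypothetical stand-ins.
FILING_CABINETS = {
    "你好吗？": "我很好；你好吗？",  # "How are you?" -> "I am fine; how are you?"
    "谢谢": "不客气",                # "Thank you"    -> "You're welcome"
}

def chinese_room(note: str) -> str:
    """The man in the room: match the squiggle, copy out the reply.

    He never parses, translates, or understands the note; he only
    compares shapes and copies whatever the rule book points to.
    """
    # A rule for unrecognized squiggles, also authored in advance.
    return FILING_CABINETS.get(note, "对不起，我不明白")  # "Sorry, I do not understand"

print(chinese_room("你好吗？"))  # the room seems polite; the man learns nothing
```

Nothing in this sketch is intelligent; had the author of the table stocked it with insults instead, the same code would be rude.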

Does the man walking from file to file understand Chinese, no matter how intricate the rules he follows? The answer is no. Do the filing cabinets understand Chinese? No. Searle argued that a computer that could pass the Turing Test was nothing more or less than a Chinese Room, something that reacted but could not act, something that looked like it understood, but did not understand.

Now, much ink has been spilled over the meaning of the Searle thought-experiment, and, in my humble opinion, all of it wasted ink. Searle (and his supporters) say that the thought experiment proves that the man in the room need not understand Chinese in order to pass the Turing Test. This means that the Turing Test does not actually test for consciousness. Turing (and his supporters) say that the room “as a whole” (whatever that means) “understands” (whatever that means) the Chinese language, and that it means nothing in particular that the man himself does not understand Chinese. Does one brain cell in the brain of an English speaker understand English? Both are missing an obvious point. Both are arguing about whether a letter understands what is written in the letter. Whoever filled the filing cabinets and wrote the grammar rules for the Chinese Room understands Chinese. The letter-writer understands the letter, not the piece of paper.

Turing and his supporters, in their meditations on whether computers would be aware if they seemed to an observer to be aware, never seem to rise above the crudest imaginable materialism: they never seem to contemplate that computers have to be programmed by someone. The Chinese Room is not “polite” if rule one is to answer meaningless squiggle in file A (“How are you?”) with meaningless squiggle in file B (“I am fine; how are you?”). The only person who is polite is the Chinaman, whoever he is, who wrote the ideogram, not meaningless to him, that he placed carefully and deliberately in file B. If the Chinaman, without any notice to John Searle (or whoever the poor boob trapped in the Chinese Room is), had written instead, “I am fine; you are a swine!” then the “Room” would be impolite.

The real question about the Chinese Room is whether or not speech that is not rote speech can be reduced to an algorithm. The real question, in other words, is whether John Searle, trapped in the Chinese Room, merely by following even absurdly complex rules of sentence construction, could coin a new term, or use an old word in a poetical way that showed insight, a new meaning not present before. Now, neologisms can indeed be coined by rote. Children make such coinages, usually in the form of cute mistakes, all the time. There is no reason the Chinese Room could not put “ize” in file 5, and establish rule 101, “add file five to any word X,” where the rules of X include those words we want to turn from nouns into verbs. “Nounize,” “Vulcanize,” and “Paragraphize” are all coined terms that I have here and now Turingized. You might be able to guess their meaning. I have meaningized them.
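Rule 101 is easy enough to mechanize. Here is a minimal sketch, again in Python, with a hypothetical rule number, suffix file, and word list of my own invention; the coinage happens entirely by rote, exactly as described.

```python
# A minimal sketch of "rule 101": rote coinage by suffixation.
# The rule number, suffix, and word list are hypothetical stand-ins;
# the mechanism is the one described above: append the contents of
# file five to any word on an approved list.
FILE_FIVE = "ize"
WORDS_X = {"noun", "vulcan", "paragraph", "meaning", "turing"}  # nouns to be verbed

def rule_101(word: str) -> str:
    """Coin a 'new' word by rote: no insight required or produced."""
    if word.lower() in WORDS_X:
        return word + FILE_FIVE
    return word

print(rule_101("paragraph"))  # -> "paragraphize", coined without understanding
```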

Poetry is a different question. The whole point of poetic expressions is that a new aspect of meaning has been brought out of an unusual use of a word, or out of a new phrase. If it is something you can reduce to an algorithm, it is not poetry. Indeed, the thing that makes you wince when I use the term “meaningize” is the very lack of poetry in that coinage; it is mechanical, predictable, soulless.

A reader with the tumultuous yet academic name of “Doc Rampage” writes in to ask: “I don’t understand your tone towards Searle. You agree with his argument but think he should not have made it? Why? You think it is a waste of time trying to argue with materialists?”

Let me answer by going to the various filing cabinets in my mind and piecing together pieces of paper according to an algorithm of rules. Hmm….

I agree with Searle as far as his argument goes, but I think having the argument at all gives undue (and misleading) credit to radical materialism. It is not that it is a waste of time to argue with materialists (it is; we should merely reprogram their brain atoms); it is that the terms of the argument grant the materialist an axiom I think it unwise to grant them: that the question of consciousness is an empirical rather than a moral question.

Searle’s position is the opposite of the Turing position: Searle’s position (if I understand it) argues that an empirical test for consciousness does not prove consciousness, because the computer or the Chinese Room can pass the empirical test for consciousness by rote without any conscious mind being present. Hence the test (Searle correctly concludes) does not actually test for the property being tested for.

Turing’s position (if I understand it) is that we know about consciousness the way we know about anything else, by empirical test. Any computer or Chinese Room that can pass the empirical test for consciousness is “conscious” for all practical purposes as far as we are concerned.

I call both arguments misguided because there is no empirical test for consciousness.

One might as well argue about solipsism. We make a judgment call that some men are sane and others insane based on whether they are aware of the meaning of their acts, or oriented to time, place, and person. The “test” in the Chinese Room of Searle only tests whether the Chinaman who sets up the rules for the room is conscious: but we already know he is. The “test” of whether a computer can pass the Turing Test is whether or not the computer programmer can cunningly enough anticipate and imitate a real-life conversation: but no one cares whether he can or not, since it has nothing to do with consciousness in a computer.

Myself, I am much more concerned with whether or not an allegedly intelligent computer can pass the McNaughton Test rather than the Turing Test. The legal standard for insanity was first established in the McNaughton case (where a paranoid man shot and killed the secretary of the British Prime Minister, believing that the Prime Minister was conspiring against him). The test is whether, “at the time of committing the act, the accused was laboring under such a defect of reason, from disease of the mind, as not to know the nature and quality of the act he was doing or, if he did know it, that he did not know what he was doing was wrong.”

This is a legal and moral test, not an empirical one. We can call it an “empirical” test if and only if we assert (as some philosophers do) that all knowledge is empirical—but this fails to distinguish between things that are truly empirical questions, such as the mass, length, duration, amperage, temperature, amount, or luminous intensity of a material object, and things that cannot possibly be reduced to such measurement, such as, for example, whether or not McNaughton at the time of the crime knew the nature and quality of the act, and knew it was wrong.

We know about consciousness because we are conscious. We know other people are conscious because only a sociopath treats other people like objects, like robots, like manikins: it is a conclusion of moral reasoning, not of empirical reasoning.

The reason why we are not all solipsists is not because there is or could be convincing empirical evidence that other souls are conscious. There can be no such empirical evidence, because “consciousness” is not something that can be reduced to or described in units of mass, length, duration, amperage, temperature, amount, or luminous intensity. The reason we are not all solipsists is that a self-consistent solipsist is a sociopath, a creature that treats other people like moving manikins, not as people. It is morally wrong to be a sociopath; ergo it is impractical and unjust to be a solipsist. Does this mean we, as philosophers, should be convinced solipsism is wrong? Not at all. We may entertain the notion of solipsism in the abstract, just as long as we do not allow that notion to influence our actions, lest we end up as bloodthirsty, misanthropic, dangerous, and empty-souled as Marxists, with eyes as dead and inhuman as theirs.

Hence, Searle is right but is off the point. The Chinese Room can seem to speak Chinese even when the Chinaman who set up the filing cabinets in the room is no longer present.

Turing is wrong but is off the point, way off the point. The Chinese Room “can speak Chinese” because it can fool my Chinese granny into thinking some living person is behind the mail slot in the door.

It is like listening to two philosophers debating about whether or not a book understands the words written in it, without ever once mentioning that books are written by authors.