Cognitive Science

The Chinese Room

Introduction

Searle’s Chinese Room is a thought experiment that claims to support one of the premises of his argument against Strong Artificial Intelligence (referred to here simply as AI). Strong AI is the type of artificial intelligence we would consider to be truly intelligent, as opposed to one that is merely capable of clever tricks that simulate intelligence. I will argue that Searle is wrong in his claim to know that programs alone cannot have semantics, and also wrong in his claim that his argument, if successful, does not apply to classicism. In order to have this discussion we must first take a brief look at classicism, the distinction between classicism and connectionism, and the Turing test as the goal of classical AI.

 

Classicism
Classicism is the idea that the brain is fundamentally a symbol-manipulation device operating as a digital computer, and that the mind is a program that runs on that device. Information is encoded as symbols in some physical sense. There is a set of rules by which those symbols can be manipulated and a set of functions by which to perform the manipulations. The initial symbol inputs are manipulated in such a way as to produce more complex symbol structures, and so on. The rules are written in such a way as to respect the semantic contents that the symbols encode. Classicists claim that, in essence, the brain is performing these functions on the symbols of the Language of Thought. (Bermudez 2014)
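To make the picture concrete, here is a minimal toy sketch of my own (not drawn from any of the cited authors) of purely formal symbol manipulation. The rule inspects only the shape of the symbol structures, never their meanings, yet because it implements a valid inference pattern (modus ponens) the manipulation respects the semantic contents the symbols are taken to encode.

```python
# Toy illustration of the classicist picture: discrete symbol structures plus a
# purely formal rewrite rule. The rule looks only at the shapes of the symbols,
# never at what they mean, yet it respects their semantic contents because it
# implements a valid inference pattern (modus ponens).

def apply_rules(symbols):
    """Repeatedly apply the formal rule ("IF", p, q) + p => q until nothing new can be derived."""
    derived = set(symbols)
    changed = True
    while changed:
        changed = False
        for s in list(derived):
            if isinstance(s, tuple) and len(s) == 3 and s[0] == "IF":
                _, antecedent, consequent = s
                if antecedent in derived and consequent not in derived:
                    derived.add(consequent)
                    changed = True
    return derived

# Atomic symbols are just strings; structured symbols are tuples.
beliefs = {("IF", "RAINING", "STREET-WET"), "RAINING"}
print(apply_rules(beliefs))  # adds "STREET-WET" without consulting any meanings
```

On the classicist story, the brain is doing something of this kind, only over the vastly richer symbol structures of the Language of Thought.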

If this is the case, then it supports the claim of classical AI proponents that any suitably programmed computational device literally is a mind. The program exists independently of the material in which it is instantiated. Any digital computational device capable of running a program complex enough to contain all of the necessary components is capable of intelligent behaviour in the same sense as humans. (Searle 1980)

Alan Turing is credited with having designed a test to determine whether a machine can be said to be exhibiting intelligence. It involves one person acting as the judge, one other human, and a computer running the program under test. The easiest way to think about the test is to imagine each party using a messenger service to hold a conversation. The judge interacts with both the human and the program, asking each questions and receiving answers, and must then attempt to distinguish between the human and the computer. If in most cases the judge is unable to distinguish between them, then the program can be said to be behaving intelligently. (Block 1990)
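As a rough sketch of the protocol (the function names, the fixed number of rounds, and the scoring rule are my own invention for illustration, not Turing’s or Block’s), the test might be set up like this:

```python
import random

def run_turing_test(judge_ask, judge_guess, human_reply, program_reply, rounds=5):
    """Minimal sketch of the imitation game: the judge converses with two hidden
    parties over labelled channels and must then say which one is the program."""
    # Hide which channel carries the human and which carries the program.
    channels = {"A": human_reply, "B": program_reply}
    if random.random() < 0.5:
        channels = {"A": program_reply, "B": human_reply}

    transcripts = {"A": [], "B": []}
    for _ in range(rounds):
        for label, respond in channels.items():
            question = judge_ask(label, transcripts[label])  # judge sees only the transcript
            answer = respond(question)
            transcripts[label].append((question, answer))

    guess = judge_guess(transcripts)            # judge names the channel they think is the program
    return channels[guess] is program_reply     # True if the judge caught the program

# A program "passes" if, over many such runs with many judges, it is caught
# no more often than chance.
```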

The goal of classical AI, then, was to create a digitally computable program able to pass the Turing test.

 

What is Searle Arguing Against?
Searle set about attempting to disprove this theory by attacking the work of Schank, who was trying to devise a program that could interpret stories in a human-like fashion. Specifically, Schank was interested in Dreyfus cases, those cases in which humans intuitively interpret information. Dreyfus was interested in the “microcompetencies” that a program would require in order to understand words like ‘it’ or ‘the pyramid’, where the meaning is intuitive. (Clark 1989)

The example Searle provided was a story about a man who orders a hamburger; the burger arrives burnt, and the man complains and storms out without paying. Searle then asks whether the man ate the burger. He then presents a similar scenario in which the man is very satisfied, pays for the hamburger, and leaves a big tip, and asks the same question. (Searle 1980)

According to Searle, AI proponents believed not just that the machine was simulating understanding, but that it literally understood the stories, and that the program provided an explanation for the human understanding of such stories. (Searle 1980)

Searle didn’t agree, and responded with his Chinese Room argument, which can be laid out as follows:

Proposition 1. Programs are purely formal (i.e., syntactic).

Proposition 2. Syntax by itself is neither constitutive of nor sufficient for semantics.

Proposition 3. Minds have mental contents (i.e., semantic contents).

Conclusion 1. Programs are neither constitutive of nor sufficient for minds. (Dennett 1987, p. 324)

The room itself is intended to support proposition 2. Searle argues that propositions 1 and 3 are obviously true and need no support. ‘Syntactic’ here means that the program operates purely on the basis of the formal, physical properties of the symbols it uses. Semantic contents are the properties of understanding that our mental states are said to possess. (Churchland 1990)

Searle asks us to imagine ourselves in a room, unable to speak Chinese. In this room there are stacks of Chinese symbols and a book of rules for manipulating them, so precise that it always produces output identical to that of a native speaker. There are two windows into the room. Through one we receive inputs in the form of strings of Chinese symbols, and through the other we pass out strings of Chinese symbols that we have assembled according to the rules in the book. (Searle 1980)
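Viewed as a program, the rule book is doing something like the following. This is a deliberately tiny, hypothetical rule book of my own; Searle’s is imagined to be vastly larger and rule-based rather than a simple lookup. The point is only that the operator matches symbol shapes and copies out whatever the rules dictate; at no point are meanings consulted.

```python
# A tiny, hypothetical stand-in for the rule book: shape-matching only.
# The English glosses in the comments are for the reader; the man in the room
# has no access to them.

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",            # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "会，我说得很流利。",    # "Can you speak Chinese?" -> "Yes, fluently."
}

def operate_room(incoming: str) -> str:
    """Match the incoming string of symbols against the rules and hand back
    whatever symbols the book dictates; nothing here consults meanings."""
    return RULE_BOOK.get(incoming, "对不起，请再说一遍。")  # "Sorry, please say that again."

print(operate_room("你好吗？"))  # prints: 我很好，谢谢。
```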

At this point we are asked to consider whether the man in the room can be said to understand Chinese. If he cannot, then, Searle claims, proposition 2 is vindicated and the thesis of classical AI is false.

Undermining Classicism

The description I have given for the functioning of the brain as a digital computer is directly analogous to what the man in the room is doing with the symbols and the rule book.

If the brain operates in the same way as the Chinese Room, then how can a brain possibly produce mental states that are possessed of semantic content?

Interestingly, Searle doesn’t actually argue that this should apply to classicism. For Searle it is enough that the program in this case is operating on the particular substrate that is the brain. The brain, he says, is possessed of the particular causal powers that are necessary to give rise to semantic content. He thought it entirely possible, and in the original article he argued for the position, that the brain does in fact operate as a digital computer. The reason an artificial mind cannot be produced on a mechanical computational device is that the device lacks the causal powers needed to produce semantic content. (Searle 1990)

On the grounds that he does not provide a description of these causal powers, I reject the claim that his argument does not apply to classicism. Searle would have us believe that there is something about the brain, which he cannot describe to us, that makes the brain immune to his argument on the basis of its apparent ability to produce semantics. (Searle 1980 and 1990)

With that position rejected, the argument applies directly to classicism and, if successful, undermines both classicism and classical AI.

Analysing the Room

Here I will discuss some of the arguments against Searle and how Searle has responded to them.

The systems reply states that while the man in the room does not understand Chinese, the room as a whole does. This is because the room takes an input and produces a semantically appropriate output. It will do so at speeds far below those of a true native speaker, but it will do so, and as such it can be considered to be behaving intelligently. (Searle 1980)

Searle responds to this reply by asking us to imagine further that the man inside the room has instead memorised the rules and the symbols, such that the entire system is contained within him. We look again for understanding and are not supposed to find any. It is at this point that the argument begins to become unbelievable. It is exceptionally difficult to imagine ourselves in the position of having memorised the staggering amount of information required here. Memorising every Chinese character, along with every rule required to manipulate those characters in a semantically appropriate manner for every possible combination, is beyond my ability to comprehend. If I introspect on my own apparent knowledge of English, I cannot imagine being capable of systematising my ability to communicate into a set of identifiable rules and memorising them in the way that the man in the room would need to do. (Searle 1980)

The robot reply states that what is needed to produce these semantic contents is the ability of the system to interact with the outside world. By behaving in a semantically appropriate way, rather than merely responding, our system can be said to understand. (Searle 1980)

Searle responds by asking us to imagine the room embedded within a robot that has all of the relevant interactive functions; now, instead of Chinese, we are processing the information necessary for the robot to interact with the world. Again he asks us to look for understanding, and we are supposed to find none. (Searle 1980)

Here is where I think it becomes most obvious that Searle is not interested in whether a program could have semantics but in whether a program could be conscious of those semantics. The robot is interacting with the world in such a way as to be indiscernible from a regular human. Its intentionality exists in the fact that it behaves in semantically appropriate ways when interacting with objects. If you ask the robot to pass you the salt, it may not be conscious of what salt is, but it is capable of passing you the salt. It understands the word ‘salt’ well enough to respond to your request in a semantically appropriate way. Here again I think Searle’s response has failed. (Dennett 1980)

The Dennett reply refers to the whole exercise as an intuition pump, which Dennett describes as a device for provoking a family of intuitions by producing variations on a basic thought experiment. He says that “Searle relies entirely on ill-gotten gains; favourable intuitions generated by misleadingly presented thought experiments.” (Dennett 1980, p. 429)

Dennett asks us to imagine a further change to the original thought experiment: combine the responses to the systems and robot replies, so that the man is simultaneously the robot and has internalised the room. Take the above example of passing the salt. Upon hearing the input in Chinese for “pass the salt”, the man performs the necessary manual symbol manipulations, outputs the appropriate orders in Chinese to himself, and then manually operates the robot’s controls to pick up and pass the salt. Is it at all intuitive to say that at no point has any part of the system understood any part of the interaction? Is there any way you can imagine this system never learning the semantic relation between the symbol for salt and the salt itself? The answer to that question, Dennett and I agree, is no. (Dennett 1980)

Here I think Searle could only respond by claiming that the man-room-robot does not directly experience the world but is busy performing the symbol manipulations. That doesn’t seem very convincing to me: so constructed, the man is simultaneously the robot and the room. How could it be possible for the man to experience enough of the world to pass himself symbols to manipulate, but not enough to form semantic relations?

The Churchland reply constructs an argument of the same form, along with an accompanying man-in-a-room thought experiment by way of support:

Axiom 1. Electricity and magnetism are forces.

Axiom 2. The essential property of light is luminance.

Axiom 3. Forces by themselves are neither constitutive of nor sufficient for luminance.

Conclusion 1. Electricity and magnetism are neither constitutive of nor sufficient for light. (Churchland 1990, p 35)

The axioms correspond to the propositions in Searle’s original argument quoted above, although in a different order; axiom 3 plays the role of proposition 2, and it is easy to see that this argument has the same form as the original. Much the same as proposition 2 was supported by the Chinese Room, axiom 3 is here supported by the Dark Room. In the Dark Room a man holds a magnet or other charged object and waves it about in order to create electromagnetic waves, and obviously fails to produce light. Churchland leads us through responses similar to those aimed at the original argument. Very briefly, these are that electromagnetic waves are indeed being produced, but not at the specifications appropriate for human vision; that the frequency of oscillation is many orders of magnitude too low; and the response that follows below.
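To put rough numbers on that frequency gap (my own back-of-envelope figures, not Churchland’s): visible light oscillates at roughly $5 \times 10^{14}$ Hz, while a waving arm manages on the order of $1$ Hz, so

$$\frac{f_{\text{light}}}{f_{\text{arm}}} \approx \frac{5 \times 10^{14}\,\text{Hz}}{1\,\text{Hz}} \approx 10^{15},$$

a shortfall of roughly fifteen orders of magnitude, which is why waving the magnet produces electromagnetic waves but nothing remotely visible.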

The crux of the reply is that, whether or not the room or the person in it is capable of semantic understanding, Searle cannot simply assert that it is not without evidence of what it is that makes something capable of such understanding. Of course Searle repeatedly asserts that it is the particular causal properties of the brain that give rise to semantics. But he does so without providing any account of exactly what those properties might be, how they might operate, or why only brains are capable of producing them. In other words, since we don’t have a clear idea of how humans are capable of semantic understanding, we cannot say that the Chinese Room is incapable of semantic understanding. Since proposition 2 and conclusion 1 are almost identical, and proposition 2 is simply assumed to be true, the Chinese Room begs the question. (Churchland 1990)

Searle responds by claiming that the proposition is a logical truth, and asks us to embark on yet another alteration to the thought experiment. He has us imagine that the man gets bored and decides to interpret the symbols as moves in a chess game. He then asks whether the room is giving off a Chinese semantics or a chess semantics. (Searle 1990)

I say that possible interpretations of the symbols are not at all relevant; what is relevant is the way in which the system uses those symbols to interact with the world. Is the system playing chess or communicating in Chinese? Again I think the argument has failed.

Conclusion

So we have seen that the argument can be applied to classicism more generally and that, if successful, it does undermine that position. However, as outlined in the replies and in Searle’s seemingly inadequate responses, we can see that the argument does not succeed.

Churchland’s reply shows us that the argument is itself fallacious: one cannot assume, with no evidence, that the position of AI is false. Dennett’s reply shows us that we can use Searle’s pump to produce intuitions counter to those Searle would have us draw. The systems and robot replies show us that this intuition pump quickly leads to a position in which it is extremely difficult to imagine ourselves as the man in the room. Ultimately it is clear that while it is entirely possible that programs cannot produce semantics, it is also entirely possible that they could, and the argument fails to give us any clear reason to believe Searle. The argument has failed to be convincing.

 

Sources

Bermudez, J.L. 2014. Cognitive Science: An Introduction to the Science of the Mind, Chp. 6.

Block, N. 1990. “The Computer Model of the Mind.” In Osherson, D. and Smith, E., eds., Thinking: An Invitation to Cognitive Science, Vol. 3, 248-253.

Churchland, P.M. & Churchland, P.S. 1990. “Could a Machine Think?” Scientific American 262: 32-37.

Clark, A. 1989. Microcognition, Chp. 2.

Dennett, D. 1980. Reply to Searle 1980, 428-430.

Dennett, D. 1987. “Fast Thinking.” The Intentional Stance, 323-337.

Searle, J.R. 1980. “Minds, Brains and Programs.” The Behavioural and Brain Sciences 3: 417-457.

Searle, J.R. 1990. “Is the Brain’s Mind a Computer Program?” Scientific American 262: 26-31.
