Artificial Intelligence

This is the 52nd post in my Theories of Knowledge and Reality series. In my last post, I looked at Frank Jackson's argument for property dualism, which wrapped up the major arguments involving dualism and materialism about the human mind. This last mind-related post covers artificial intelligence, particularly whether running a computer program could be enough to generate genuine thinking.

The strongest argument that a computer might think is an argument from analogy. At least some examples of artificial intelligence in science fiction seem to do the same things we do when we think. In Star Trek: The Next Generation, Lt. Commander Data is an android who certainly seems to be a conscious, thinking being. In one episode, Starfleet conducted a trial to determine whether he was the property of Starfleet or whether he had rights enough to refuse to be dismantled for research into artificial intelligence. The argument that won the day is that, though we can't prove Data to have a mind, we also can't prove anyone besides ourselves to be consciously aware. Other people do the things we do when we're consciously aware, and they have similar brain states according to our best science, but is that absolute proof? So it doesn't seem like Data is in much worse shape than any normal human being, right? We should at least give him the benefit of the doubt when it comes to moral issues.

Nevertheless, John Searle's Chinese Room example is designed to show that a computer program that appears to think isn't thereby thinking. You might be able to design a program that follows steps to appear to think, but that doesn't mean it really understands anything.

Put a man in a room and give him instructions about what to do when he sees certain symbols. He is to follow the instructions and write down different symbols as a result. Little does he know that the symbols he receives are actual Chinese questions, and he's giving back actual Chinese answers. From the outside, someone might think someone inside understands Chinese and is answering the questions, but it's all based on rules. This is exactly what people working on artificial intelligence are trying to accomplish. Searle says it's a misguided goal if people think this is genuine thinking and understanding, since the Chinese Room case shows that no one is thinking, even though all the behavioral responses to stimuli indicate that someone must be thinking. This is a problem for functionalism, since all the functional roles are present. It's a problem for the Artificial Intelligence project, since something like this could be developed, but Searle insists that merely accomplishing it doesn't give us anything that thinks.
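To make the setup a little more concrete, here's a minimal sketch of the kind of pure rule-following Searle has in mind. This is my own toy illustration, not anything from Searle: the "room" just maps question symbols to answer symbols by lookup, and nothing in it represents what any of the symbols mean.

```python
# Toy sketch of the Chinese Room as pure symbol manipulation (illustrative only).
# The rule book pairs question strings with answer strings; nothing here
# represents what the Chinese sentences mean.
RULE_BOOK = {
    "你好吗？": "我很好。",            # "How are you?" -> "I'm fine."
    "你叫什么名字？": "我没有名字。",  # "What is your name?" -> "I have no name."
}

def chinese_room(question: str) -> str:
    """Follow the rules: look up the question, hand back the scripted answer."""
    return RULE_BOOK.get(question, "请再说一遍。")  # "Please say that again."

print(chinese_room("你好吗？"))  # Looks like understanding from the outside; it's only lookup.
```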

Searle considers some objections. The Systems Reply admits that the man in the room doesn't understand, and the room itself certainly doesn't, nor does the instruction book. But the whole system - the man, the room, and the instruction book - does understand Chinese. Searle responds that he can make the rest of the system disappear by having the man memorize all the rules; the room itself was doing little work anyway. Now give the man Chinese questions, and he can write the proper answers. He acts as if he understands, yet there's no way he does - and now he is the entire system. So the system doesn't understand.

The Robot Reply says what's missing is involvement in the world. Just language isn't enough. Make it interact with the world in more ways by putting the program in a robot that can talk, move around, play catch, etc. In that case, it seems more as if it thinks. A computer doesn't get to hold things in its hand and move around. It has no contact with the things its statements are about. Some have thought that giving it contact with those things would make it easier to see it as understanding what the statements are about.

Searle gives two problems for this reply. First, it concedes too much for the original artificial intelligence thesis to remain true. Thinking is no longer just a matter of symbol manipulation but has to be caused in the right way and lead to certain kinds of behavior. It's no longer all based on just getting the right program. Second, simply getting a machine to move around and interact with the world doesn't make it think. Put a person inside the robot and give her instructions as with the man in the Chinese Room, and you would get the same result - it doesn't seem as if thinking is going on here.

There's an even easier reply that Searle could make (but doesn't). He can go back to his example of the person inside the room becoming the system. This is a person who moves around and can interact with things. This person can even know that these statements are about these things somehow. But that doesn't require the person to know which words mean what. I can know a Chinese statement is about the apple in my hand without knowing what the words mean. So interacting with the world can't be enough for me to understand what my statements are about.

The Brain Simulator Reply says that what's missing in the Chinese Room case is that it's not based enough on the actual human brain. Base it more directly on the way neurons cause other neurons to fire, and then maybe you'd be more inclined to call it genuine thinking. First, Searle points out that the whole idea was to get the right program, and then it thinks, regardless of the actual structure of the thing doing the thinking. Modeling it directly on the human brain no longer fits that idea. It's no longer discovering the right program but simply duplicating aspects of human brains. Second, you can do something like this with a person inside. Instead of manipulating symbols in a room, imagine that he has a complex system of water pipes, and he manipulates levers and nozzles so that water moves through the pipes the way electrical signals move through the brain. Model it on the human brain as closely as you like. There's still no understanding.
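For contrast with the rule-book sketch above, here's an equally toy illustration (again my own, not Searle's) of the Brain Simulator idea: instead of looking up symbols, simulate units that fire when enough of their inputs fire, roughly the way the water-pipe system mimics neurons. The structure now copies something about the brain, but it's still just mechanical rule-following.

```python
# Toy sketch of the Brain Simulator Reply (illustrative only): units "fire"
# when enough of their input units fire, mimicking neuron-to-neuron causation.
connections = {"c": ["a", "b"]}  # unit c listens to units a and b
threshold = {"c": 2}             # c fires only if both of its inputs fire

def step(active):
    """Propagate activation one step through the little network."""
    fired = set(active)
    for unit, inputs in connections.items():
        if sum(i in active for i in inputs) >= threshold[unit]:
            fired.add(unit)
    return fired

print(step({"a", "b"}))  # {'a', 'b', 'c'}: brain-like causation, still no understanding
```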

Finally, the Combination Reply says to take aspects of the previous examples and combine them into one, so this would be an interactive robot with a computerized brain modeled directly on a human brain with behavior indistinguishable from human behavior, and then we'd be more inclined to think the whole system thinks. Searle admits that we might find that claim to be more reasonable. The problem is that now we're as far away from stumbling on the right program as we could get. We haven't discovered a program but have simply made something very close to a human. When we say apes think and feel, it's because we can't find any other explanation of their behavior, and they have a similar enough brain to ours. If we say that about this robot, it's for the same reasons. If we discover that it's just a program, we'd be inclined to say there's no thinking going on.

Searle insists that human thinking is based on the human brain, and our minds are just our brains. He resists the idea that thinking might occur apart from actual human brains. Any thinking must be based in something very close to the human brain. Consider the seeming possibility of a human body acting purely according to physical laws but not actually experiencing anything (philosopher David Chalmers has coined a technical term in philosophy for such beings: he calls them Zombies). Or consider someone who experiences things differently from most people even while having the same brain states (philosophers call such people Mutants, again coining a technical term that doesn't coincide with normal usage). Zombies and Mutants, in these senses, are impossible, according to Searle. Something had better be close enough to the normal human brain, or it doesn't have pain, boredom, or thoughts such as the thought that 2 + 2 = 4. Searle just has to deny that Mutants are possible, something David Lewis, who started out with a view similar to Searle's, didn't want to insist on, since there's no real argument for it. Zombies also couldn't exist on Searle's view, since something with a human brain automatically thinks. Maybe this is right (many materialists besides Searle think so, e.g. Simon Blackburn), but it's hard to prove such a thing.
