Philosophy: January 2010 Archives

Race Thought Experiment #5


If God miraculously modified a chicken to make it lay walnuts instead of eggs, and those walnuts grew into what looked like normal walnut trees, would you consider the offspring to be chickens?

Update: It occurs to me that the second question I asked is really a separate issue, so I'll save that for post #6.

Personal Identity intro


This is the 53rd post in my Theories of Knowledge and Reality series, beginning a new subject: personal identity. (The last post on artificial intelligence finished off the Mind and Body topic.)

What are the criteria for what makes someone the person they are? A lot of changes we can go through leave us existing as the same people. We've changed, but we're still there as the people who have changed. Some things that can be done to us leave us no longer around, much as we don't like to think about that. Most cases of both kinds are uncontroversial. But that doesn't tell us what it is to be us, and philosophers raise lots of puzzles about which changes we might undergo without ceasing to be us.

Philosophers will answer these kinds of questions by talking about essential properties. An essential property is something necessary for a thing to be what it is. An essential property of a triangle is having three angles. If it somehow got a fourth angle, it would no longer be a triangle. The triangle would cease to exist. My having a beard is not an essential property. I can shave my beard, but I would still be around afterward, and I'd be exactly the same person I was beforehand. Which properties are essential is a matter of debate, however.

Some people, following René Descartes, hold that we have a part that's not physical at all but an immaterial mind or soul. This view, dualism, would say you need the same soul to be the same person. [Descartes' own view is that the immaterial mind is not just a part of him but is all of him; his body is just a place his mind occupies. Most dualists think instead that the mind and body are both parts of us.] With some changes (e.g. using a Star Trek transporter), it's not immediately clear whether the same soul would be present afterward, according to dualism.

Other people say you need the same body. If so, you die when your body dies, but you continue if your body is still alive, even if other things aren't present (e.g. a functioning brain). Some say you need the same brain. If so, you might end up with a new body if your brain gets moved to a new body.

Some might say you need psychological continuity, e.g. having a continuing set of beliefs, desires, hopes, fears, loves, character traits, and so on (which obviously get somewhat changed over time but only gradually and through a process where most of them continue). Some who hold this view have even suggested we could be converted to computer programs and survive that way. [John Searle questions this, as we've seen in his Chinese Room argument. Behavior as if you think isn't enough for genuine thinking.]

These same criteria come up in John Perry's A Dialogue on Personal Identity and Immortality. Is it possible to exist after you've died? Lots of people think so, so it must not be obviously absurd. Gretchen Weirob is about to die, and she asks only to be shown the slightest possibility that she'll continue to exist. What if someone in the future had all her characteristics at her death? No - it has to be her, not just someone exactly like her (e.g. an identical twin who somehow also had the same memories and exactly the same personality traits). That's the kind of identity we mean: not just exact similarity but really being Weirob herself. She can anticipate what she will do and look forward to it, because it's not someone else. It's never correct to anticipate doing something that someone else will do. That would just be an impostor.

Consider burning a box of Kleenex. If you later point to some box and say "this is the very same box of Kleenex", that seems absurd. Even if you reconstructed something exactly like it, it wouldn't be the same box. The same goes for the Mona Lisa.

So how could I survive death? The next post will begin looking at the different personal identity views, how they answer that question, and the various objections to them.

Race Thought Experiment #3


If God created the universe not with the slow development most of us believe to have happened but pretty much as it is now, with all the "memories" and seeming causes that give signs of the past, would the racial groups we now identify still count as races? Would they be the same groups (i.e. would the lines of demarcation for races be the same as what they now are)?

Torture and Absolutism

I wanted to make one observation about John Mark Reynolds' recent posts on torture at Evangel. One thing that has struck me over several years of considering this question from a Christian point of view is that arguments against torture are either (a) implausible, conflicting with actual biblical allowances and endorsements, or (b) non-absolutist, allowing for some exceptions, even if a heavy burden of proof and extremely strong grounds for hesitation should always be present. (By absolutism, I mean the view that something is always wrong, with no possible exceptions.)

For ease of reference, here are the posts:

One Bad Argument in Favor of Torture
Cicero not Nero!
On Pacifism and Torture
A Conservative and Pragmatic Argument Against Torture

Arguments Against Torture

Consider the image of God argument. This is the same reasoning used against killing, and yet the scriptures make it very clear that capital punishment is not just allowable but mandated by God, at least in a certain context. (I'll leave it open whether it should be used today. What matters for my point here is that God didn't just allow it the way he allowed divorce in the Mosaic law. He commanded it in the Torah, and Paul seems to affirm the use of the sword in carrying out justice in Romans 13, so there's not even a plausible argument that the new covenant removes this allowance.) So I don't think the fact that we're made in the image of God is going to rule out all torture, since it doesn't rule out all killing, and the image of God is the explicit biblical reason given for not killing people.

Aristotelian virtue arguments point out how bad it is to become the sort of person who could bring himself to torture someone. Of course this is right, but it's also bad to become the sort of person who could bring himself to kill someone. The argument that we ought to find the right mean, that we ought to be moderate, does not imply that we will never do something that usually lies at one of the extremes. Aristotle, for example, saw honesty and truth-telling as virtues between the extremes of lying and betraying confidences. But there might be occasions when lying to save someone's life is morally necessary (as God instructed Samuel to do when he anointed David) or when betraying a confidence is morally necessary (as happens in courts of law all the time in the pursuit of justice, with the only significant exceptions being attorney-client privilege, spousal privilege, and medical or psychological practitioner-client relationships). Just because the mean is the best spot doesn't mean the actions usually found at the extremes will always be there. Occasionally an action that's usually at an extreme ends up being the mean. So Aristotelian golden mean arguments will never rule out an action in principle, since that's not how the view works.

The coercion argument also strikes me as mistaken. There are certainly occasions when it's right to coerce someone. For example, we put criminals in prison. We threaten to imprison or fine people to get them to testify or to serve on a jury. We impose severe penalties on those who won't pay their taxes. We have on occasion drafted people to serve in the military and kill other people, and when the war is just and popular, most people don't find that as problematic as in wars that are very unpopular or obviously unjust. We require people to work or show progress toward improving their employment prospects if they're to receive government benefits of various sorts. It strikes me that torture is most analogous to other kinds of coercion to get testimony, and the major difference lies in the method of coercion, not in any principle that it's wrong to coerce people to tell the truth.

Race Thought Experiment #2


If someone appeared out of nowhere who was an exact duplicate of Chris Rock, would he be black? Would he be a member of the same race as Chris Rock? Why or why not?

Would you say the same if it were a duplicate of Britney Spears? Would her duplicate be white? Why or why not?

Would a duplicate of Dwayne Johnson have the same racial status (whatever you think that is) as Dwayne Johnson? Why or why not?

If you answer any of these questions differently, what makes the difference between the cases, and why would that be?

Artificial Intelligence


This is the 52nd post in my Theories of Knowledge and Reality series. In my last post, I looked at Frank Jackson's argument for property dualism, concluding the major arguments involving dualism and materialism about the human mind. This last mind-related post covers artificial intelligence, particularly whether a computer program could be enough to generate genuine thinking.

The strongest argument that a computer might think is an argument from analogy. At least some examples of artificial intelligence in science fiction seem to do the same things we do when we think. In Star Trek: The Next Generation, Lt. Commander Data is an android who certainly seems to be a conscious, thinking being. In one episode, Starfleet conducted a trial to determine whether he was the property of Starfleet or whether he had rights enough to refuse to be dismantled for research into artificial intelligence. The argument that won the day was that, though we can't prove Data has a mind, we also can't prove anyone besides ourselves to be consciously aware. Other people do the things we do when we're consciously aware, and they have similar brain states according to our best science, but is that absolute proof? So Data doesn't seem to be in much worse shape than any normal human being, right? We should at least give him the benefit of the doubt when it comes to moral issues.

Nevertheless, John Searle's Chinese Room example is designed to show that a computer program that appears to think isn't thereby thinking. You might be able to design a program that follows steps to appear to think, but that doesn't mean it really understands anything.

Put a man in a room and give him instructions about what to do when he sees certain symbols. He is to follow the instructions and write down different symbols as a result. Little does he know that the symbols he receives are actual Chinese questions and that he's giving back actual Chinese answers. From the outside, someone might think the person inside understands Chinese and is answering the questions, but it's all based on rules. This is exactly what people working on artificial intelligence are trying to accomplish. Searle says it's a misguided goal if people think this is genuine thinking and understanding, since the Chinese Room case shows that no one is thinking, even though all the behavioral responses to stimuli indicate that someone must be thinking. This is a problem for functionalism, since all the functional roles are present. It's a problem for the artificial intelligence project, since something like this could be developed, but Searle insists that merely accomplishing it doesn't give us anything that thinks.
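
To make the rule-following concrete, here's a minimal sketch of the kind of program Searle has in mind (the rule table, the function name, and the tiny set of sentences are my own inventions for illustration; a real system would need a vastly larger instruction book):

```python
# A toy "Chinese Room": input symbols are mapped to output symbols by
# table lookup alone. Nothing in the program represents what they mean.

RULES = {
    "你好吗？": "我很好。",            # "How are you?" -> "I'm fine."
    "你叫什么名字？": "我没有名字。",  # "What's your name?" -> "I have no name."
}

def chinese_room(question: str) -> str:
    """Follow the instruction book: match the input's shape, emit the listed reply."""
    return RULES.get(question, "对不起，我不明白。")  # fallback: "Sorry, I don't understand."

print(chinese_room("你好吗？"))  # a competent-looking reply, with no understanding anywhere
```

From the outside the replies look competent; inside there is only pattern matching, which is exactly Searle's point.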

Searle considers some objections. The Systems Reply admits that the man in the room doesn't understand, and the room itself certainly doesn't, nor does the instruction book. But the whole system - the man, the room, and the instruction book - does understand Chinese. Searle responds that he can make the system disappear by having the man memorize all the rules. The room does little work here. Now give the man Chinese questions, and he can write the proper answers. He acts as if he understands, yet there's no way he does - and he is the system. So the system doesn't understand.

The Robot Reply says what's missing is involvement in the world. Language alone isn't enough. Make the program interact with the world in more ways by putting it in a robot that can talk, move around, play catch, etc. In that case, it seems more as if it thinks. A computer doesn't get to hold things and move around. It has no contact with the things its statements are about. Some have thought that giving it contact with those things would make it easier to see it as understanding what its statements are about.

Searle gives two problems for this reply. First, it concedes too much for the original artificial intelligence thesis to survive: thinking is no longer just symbol manipulation but has to be caused in the right way and lead to certain kinds of behavior, so it's no longer all about getting the right program. Second, simply getting a machine to move around and interact with the world doesn't make it think. Put a person inside the robot and give her instructions as with the man in the Chinese Room, and you would get the same result - it doesn't seem as if any thinking is going on here.

There's an even easier reply that Searle could make (but doesn't). He can go back to his example of the person inside the room becoming the system. This is a person who moves around and can interact with things. This person can even know that these statements are about these things somehow. But that doesn't require the person to know which words mean what. I can know a Chinese statement is about the apple in my hand without knowing what the words mean. So interacting with the world can't be enough for me to understand what my statements are about.

The Brain Simulator Reply says that what's missing in the Chinese Room case is that it's not based closely enough on the actual human brain. Base it more directly on how neurons cause other neurons to fire, and then maybe you'd be more inclined to call it genuine thinking. Searle points out, first, that the whole idea was to get the right program, which would then think regardless of the actual structure of the thing running it. Modeling it directly on the human brain no longer fits with that. It's no longer discovering the right program but just duplicating aspects of human brains. Second, you can do something like this with a person inside. Instead of manipulating symbols in a room, imagine that he operates a complex system of water pipes, manipulating levers and nozzles so that water moves through the pipes the way electrical signals move through the brain. Model it on the human brain. There's still no understanding.
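
For contrast with the lookup-table sketch above, here's a minimal sketch of what such a brain simulation amounts to (the weights, neuron names, and threshold are invented for illustration); it's the water-pipe operator's job written as code:

```python
# A toy "brain simulation": each neuron is a number, each connection a rule
# for passing activation along. Copying a brain's wiring more faithfully
# just means more of the same rule-following, not understanding.

weights = {("a", "b"): 0.9, ("b", "c"): 1.2}   # invented connections
activation = {"a": 1.0, "b": 0.0, "c": 0.0}
THRESHOLD = 0.5

for _ in range(3):                              # propagate signals for a few steps
    updated = dict(activation)
    for (src, dst), w in weights.items():
        if activation[src] > THRESHOLD:         # the "neuron" fires
            updated[dst] += w * activation[src]
    activation = updated

print(activation)  # signals have propagated; nothing here understands anything
```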

Finally, the Combination Reply says to take aspects of the previous examples and combine them into one, so this would be an interactive robot with a computerized brain modeled directly on a human brain with behavior indistinguishable from human behavior, and then we'd be more inclined to think the whole system thinks. Searle admits that we might find that claim to be more reasonable. The problem is that now we're as far away from stumbling on the right program as we could get. We haven't discovered a program but have simply made something very close to a human. When we say apes think and feel, it's because we can't find any other explanation of their behavior, and they have a similar enough brain to ours. If we say that about this robot, it's for the same reasons. If we discover that it's just a program, we'd be inclined to say there's no thinking going on.

Searle insists that human thinking is based on the human brain and that our minds are just our brains. He resists the idea that thinking might occur apart from actual human brains; any thinking must be based in something very close to the human brain. Consider the seeming possibility of a human body acting purely according to physical laws but not actually experiencing anything (philosopher David Chalmers has coined a technical term in philosophy for such beings: he calls them Zombies). Or consider someone who experiences things differently from most people even while having the same brain states (philosophers call such people Mutants, again coining a technical term that doesn't coincide with normal usage). Zombies and Mutants, in these senses, are impossible, according to Searle. Something had better be close enough to the normal human brain, or it doesn't have pain, boredom, or thoughts such as the thought that 2 + 2 = 4. Searle just has to deny that Mutants are possible, something David Lewis, who started out with a view similar to Searle's, didn't want to insist on, since there's no real argument for it. Zombies also couldn't exist, since something with a human brain automatically thinks. Maybe this is right (many materialists besides Searle think so, e.g. Simon Blackburn), but it's hard to prove such a thing.

Race Thought Experiment #1

If some really smart aliens contaminated the world's water supply with some powerful transformative agent so that within three months everyone would come to look just like Chris Rock, would there be any races left (or maybe just one)? Would it still make sense to say that I'm white? Would I be black?

How, if at all, would your answers change if everyone were instead made to look like Britney Spears? What about Dwayne Johnson?
