Survey for study on relation between personality traits and philosophical views:

The one about the 7 people and only 6 tickets bugged me. These kinds of things happen all the time, and the most likely option isn’t there. Chances are, more than one person will opt for the bar. I cannot speak for all-male groups, but in a primarily female group, a few things would be different:

  1. They would have called ahead and found out how many tix were left or reserved 7.

  2. If for some strange reason they did not do that, 2-3 of the women (or a couple if this is a co-ed group) would volunteer to go to the bar.

  3. If the group had to be split that way, there would be some kind of after-the-game gathering, so that all involved could share and touch base again.

OR

The lack of planning and the shorting of tix would lead to a huge argument, hurt feelings all 'round and some of the group would depart.

OR

A scalper is found and the group ponies up to cover the cost of the remaining ticket.

OR

The group says screw this, and goes to a movie…
So, the options given were (IMO) incomplete. You cannot parse human behavior down to 3-4 options: the choices are more complex than that. That is what drives me nuts about these kinds of surveys. There are always more choices than the ones given.
Something tells me I would have sucked in philosophy classes…
I did like the one about the switching of brains because that referenced what makes us human. Fascinating stuff, there.

“4. Suppose neurologists are able to identify every part and every connection in the human brain. Working with a team of computer scientists, they then build a robot that has a complete electronic replica of the human brain. Could this robot experience love?”

I was a bit torn on this one due to an ambiguity in the question. Learning occurs through changes in the brain: the strengthening and weakening of connections, excitation, inhibition, etc. These changes are changes to the “hardware,” so to speak. I put “no” for this question because, even if you could build a robotic brain, you probably couldn’t build a robotic brain that could learn the way an organic brain can.

Do you know about artificial neural networks?

Quite familiar, but it’s a hardware/software issue. You can certainly build a software network that can “learn,” but that’s not the same as a robotic brain modeled on a human one. The question was unclear about which it meant.
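To make the point about software networks concrete, here is a minimal sketch (my own illustration, not anything from the survey): a single perceptron whose “learning” consists entirely of adjusting numeric weights, loosely analogous to the strengthening and weakening of connections mentioned above. Here it learns the logical OR function.

```python
# A toy example of software "learning": a perceptron adjusts its
# connection weights until it reproduces the OR truth table.

def step(x):
    """Threshold activation: fire (1) if the weighted sum is positive."""
    return 1 if x > 0 else 0

def train_perceptron(samples, epochs=20, lr=0.1):
    """Perceptron learning rule: nudge weights toward correct outputs."""
    w = [0.0, 0.0]   # connection weights ("synapse strengths")
    b = 0.0          # bias term
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = step(w[0] * x1 + w[1] * x2 + b)
            err = target - out
            # "Learning" is nothing but this weight change:
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# OR truth table: output is 1 unless both inputs are 0.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(data)
for (x1, x2), _ in data:
    print((x1, x2), step(w[0] * x1 + w[1] * x2 + b))
```

Whether tuning weights in software counts as the same kind of learning an organic brain does, rather than a simulation of it, is exactly the ambiguity being argued about here.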

What if, before you looked at your phone, you realised that it was another person’s phone ringing, not yours? But then later you looked at your phone and realised that it had rung at the same time you had the thought that your own phone was ringing. If I asked you afterwards whether you knew at the time that your phone was ringing, would you then say yes?

I know some philosophers think I shouldn’t. But fuck it, I’m flipping that bitch. Saving 4 lives while they go about their business feeling righteous.

I solved it by sending myself to the bar.

Well, in that case, my belief that my phone was ringing would have been falsified; the case in the hypothetical, though, is slightly different in that it’s the belief that I’m being called that is verified in the end, or at least that’s how it seems to me.

Your belief that your phone was ringing is falsified both in the original example and in mr. jp’s follow up question. Your phone doesn’t ring in either scenario.

I think mr. jp meant to ask “If asked later whether you had known someone was calling you at the time, would you say yes?”

More about my own reason for answering no to the original question: I think that a necessary (but not sufficient) condition on knowing that X is believing that X because X. So in other words, in order to know it’s raining, I must believe it’s raining, and I must have this belief because it’s raining. In cases like that given in the scenario (and in Gettier’s original paper inspiring these kinds of scenarios) the belief in question is not had because its object is true. In the scenario in the quiz, you don’t have the belief “Someone is calling me” because someone is calling you. So to my mind, your belief is not knowledge.

That doesn’t solve much regarding the puzzle being gotten at by examples like this, though. For you can construct cases where for some X you form a belief that X, and you have the belief because X is true, but there’s still something suspiciously non-knowledgelike about your belief. So for example, (off the cuff example, certainly unnecessarily complicated) maybe you’re in a room with a single window, except it’s not a window but instead a very high definition computer monitor. The monitor makes it look for all the world like it’s raining outside, so you have the belief that it’s raining outside. And in fact the monitor is accurately portraying the weather, so you have the belief because it’s raining. But the evil lab monitor in charge of the evil experiment being done on you was supposed to have flipped a switch that makes the monitor misrepresent the weather, and usually he does flip this switch. This morning, he simply forgot. So you only have the true belief that it’s raining (many would argue) “by accident” as it were.

-Kris

Was that a big 5 personality test in the middle?

It’d be cool if the end of the test had your score on that.

Hell, I’ll join you because all these fine points and splitting hairs etc is making my head hurt.

I answered “slightly agree” to the question, “do you lie?” or whatever, thus putting ALL my answers in doubt.
Come to think of it, I can’t remember now if that question was yes/no/prefer not to answer. Anyhoo, I said yes, I lie.
Whatcha drinking? Shall we sing the Monty Python philosophy drinking song?

I didn’t think any of the philosophical questions were difficult. If you don’t cloud your thinking with any metaphysical bullshit, the answers are obvious.

1. Suppose scientists figure out the exact state of the universe during the big bang, and figure out all the laws of physics as well. They put this information into a computer, and the computer perfectly predicts everything that has ever happened. In other words, they prove that everything that happens, has to happen exactly that way because of the laws of physics and everything that’s come before. In this case, is a person free to choose whether or not to murder someone?

No. If everything is deterministic, then all perceived “decisions” are just illusions. Your brain chemistry is making the decisions. You have nothing to do with it.

Of course, the concept of libertarian free will is still logical nonsense even if everything is NOT deterministic. If it’s random, it’s still not autonomous. Trying to identify a mechanism for autonomous will always leads to an infinite regress.

2. Suppose you drive to the local baseball stadium with some friends, and try to buy tickets at the door. There are 7 of you, but there are only 6 tickets left. You can either drive everyone to a nearby bar, which will be a lot less fun than being at the game, or 6 of you can go in, and 1 of you can take the bus home and miss the game entirely. Is it most fair for everyone to go to the bar?

Don’t fuck your bro, man. Go to the bar. You’ll still have a good time, I promise.

3. Suppose a mad scientist takes out your brain, and puts it in your best friend’s head. During the same operation, the scientist takes out your friend’s brain, and puts it in your head. Now your body has your friend’s brain, and your friend’s body has your brain. Your heroic mother storms into the room to save you, but not your friend, who she believes got you into this mess. Is the person with your body still you, her son?

This is a bit of a semantic question, but I said no. The person is the personality. The personality is the brain.
4. Suppose neurologists are able to identify every part and every connection in the human brain. Working with a team of computer scientists, they then build a robot that has a complete electronic replica of the human brain. Could this robot experience love?

Assuming all the chemistry can be duplicated exactly, the answer is yes. There’s nothing non-material about emotional states.

5. Suppose that all you know about Einstein is that he developed the Theory of Relativity. But suppose it turns out that Einstein actually stole the idea from some guy named Moynahan, who nobody has ever heard of. In this case, when you use the name “Einstein,” are you actually referring to Moynahan?

No. It just means you don’t know something you thought you knew about Einstein.

6. Suppose you hear the sound of your cell phone, so you reach in your pocket and answer the call. Your landlord is on the line, but you realize later that your ringer was off, and the sound you heard was actually someone else’s phone. When you heard that other person’s phone ring and mistook it as your own, did you actually know someone was calling you?
No. I think this one’s pretty self-evident.

7. Suppose you meet a man from the future who knows everything there is to know about science. He tells you that he doesn’t like apples, and says that though he has never eaten one, he has figured out what apples taste like just by studying the relevant science. Could he know what apples taste like without ever having eaten one?

Yes, assuming a perfectly detailed understanding of taste sensations. This is the only question I wasn’t instantly sure of, though.

8. Suppose scientists are able to use stem cells to grow lungs that breathe without being connected to a body. They then grow a heart that pumps without being connected to a body. If they can do all this, can they create a brain that thinks without being connected to a body?
I answered yes to this one, but it isn’t really constructed that well. There is no way to necessarily infer a conclusion one way or the other from the premises, but I chose to interpret the question as asking whether there is anything innately more impossible about growing a brain than growing any other organ. I said there was not.

9. Suppose a runaway train is coming down a track, and is certain to kill five workmen who can’t get out of the way. You’re standing next to the controls and can switch the train to the other track, but if you flip the switch, one man working on that track is sure to die. Should you flip the switch?

Given no other knowledge, you do what will save the most lives. This hypothetical can get more complicated, though, if you start applying specific identities to the potential victims. What if the one is a baby? What if it’s YOUR baby? What if the five guys are Hitler, Stalin, Pol Pot, Saddam Hussein and Osama bin Laden?

These kinds of hypotheticals are always annoying. You try to do the least harm you can with the data you have available.

I looked at it like this. Even though the computer knew your choice, you were still free to make the choice. If someone was really, really, agonizingly thirsty and you offered them a glass of water, would they take it? Would they be free not to if they so chose? Any meaningful practical definition of free will, I think, pretty much amounts to “the ability to recognize and choose the perceived better of available options.” Meaning you figure out what your options are and pick what appears to be the best of them.

I don’t see how foreknowledge of what you’d consider the better of available options removes your ability to process and decide.

Possibly philosophy has a different definition of free will though. Under a different definition my answer prolly wouldn’t do so well.

Foreknowledge has nothing to do with it. It’s about what causes the “will.” The will itself has to be either determined (meaning inexorably caused by external influences, not “planned” or known), or random. In neither of those cases can it be autonomous. The will can’t determine itself.

Was it supposed to be in reverse order from what the intro said it’d be?

  1. I said “no.” If we accept the premises of the question – that the computer can predict all events that have taken place up until now, including those done by humans – then nobody can choose to do anything. I think the premise is silly, but that’s okay.

  2. I thought it was silly that these were the only choices. In most of my peer groups, someone would say, “Aw, the hell with it. Have a good time.” In the end, I picked the bar as being the fairest option, although I don’t think it’s the best outcome, or even the best probable outcome.

  3. If you have to answer this as a yes or no question, it’s closer to ‘no’ for me.

  4. I had huge trouble framing this one. I said ‘no.’

  5. The question is contradictory, because if you know something, it means it’s true. Otherwise we speak of “thinking we know” something. Besides, it’s silly to speak of “all we know” about Einstein being that he developed the theory of relativity. If you can even use the pronoun, you know he’s an individual, male human being, for example; that he is named Einstein; that his name has eight letters and begins with a vowel; and so forth. I’m not trying to be funny; the question’s ill-constructed. Point being, you’re not just using “Einstein” to mean “whichever person developed the theory of relativity”; you’re using it to mean “a specific human being, who (as far as I know) developed the theory of relativity.” That referent doesn’t change if that subordinate clause is invalidated.

  6. I can’t even figure out how this is controversial.

  7. Unless “knowing everything there is to know about science” includes the subjective experiences of everybody, no. Christ, different varieties of apples don’t even taste the same.

  8. No, because in order to think you need to interact with the outside world.

  9. Flip it, as discussed by others.

I’m not sure this one is so clear cut. It depends on what “know what apples taste like” means. He may have an idea as to what an apple tastes like in comparison to other things that he has tasted, and may know and understand the description of the taste. However, I do not think knowledge by proxy is the same as knowledge by direct experience - I still think the analogy to colors that Frylock brought up is reasonable. If someone has seen every shade of color except green, would they know what the color green is by interpolation/inference? EDIT: unless “studying the relevant science” includes tasting something that tastes exactly like an apple, but isn’t an apple, but that would be cheating :wink:

Also for #2, does the magnitude of the difference in situations change the “correct” answer? Suppose it were worded that either 6 people live and 1 person dies, or all 7 people die. Is it still most fair if all 7 people die? What if the difference is 6 people sleeping inside in warm beds and one person sleeping outside in the cold vs. all 7 sleeping outside?

Yeah, this. I have a bachelor’s in philosophy, and no good option for that.

I can’t imagine anyone caring about any of these questions or answers (other than the one where we get to save four workers from being hit by a train). I simply can’t understand the interest in philosophy. I have tried. I took two college philosophy courses, and was completely uninterested in the whole concept. What does that say about my personality? Can a robot love? Who cares? Let’s just wait until we invent one and ask it.

Regarding the runaway train question, it doesn’t seem right to flip the switch and kill the one worker who actually attended that morning’s safety briefing.