what does a sperm feel?

But feelings empirically map to certain processes in the brain. We can show that when a person feels pain, an area of their brain is more active than when they are not, and so on with other stimuli and feelings. So subjective feelings clearly have an empirical component to them.

I would also like to point out a central problem with solipsism: Once you assume it, the entire exercise you’re participating in degenerates to masturbation. Which, of course, makes your sperm feel sad.

That’s what I was getting at - consciousness/awareness gives a survival advantage to the creature. I think it pretty much follows that a creature that behaves exactly the same given the same set of inputs, would be conscious/aware also.

Huh? Why? People here at the SDMB who don’t believe free will exists (the controversial free will), I think would generally agree with your statement about consciousness and a survival advantage.

Define “feel”.

What are you talking about when you say “subjective witness”? Because by the definition of the words, anything that does a comparison of external stimuli must do it subjectively, and must witness what it’s comparing to do it.

If you want to say “consciousness”, just do so. Don’t screw around with meaningless terms of art.

Why would it be happier being warm on one side? What a peculiar example.

Here’s a better one: you have a robot that has multiple objectives, and seeks to satisfy them all at once to the best of its ability. Say that it wants to be warm, it wants to be standing on a white surface, and it wants to have a bright light shining on it.

Now suppose there are five locations:
1 cold, black, dark (cave)
2 hot, black, dark (dark room)
3 warm, black, bright (parking lot)
4 cold, white, bright (freezer)
5 hot, white, bright (lit room)

Now, clearly, given the choice, the robot will head straight for the lit room and stay there, because that place satisfies all of its objectives. But, what does it do if it can’t get into the lit room? We know it will avoid the cave, because that satisfies none of its objectives. But what about the dark room, the freezer and the parking lot? None of these satisfy all the objectives, and the dark room only satisfies one of them! So, which will the robot gravitate to?

At this point, it comes down to the robot’s ability to compare things that are not inherently comparable. Which is to say, it’s not comparing “get everything I want” with “get nothing I want”, which would be a strictly quantitative comparison; it has to decide which partial outcome it wants more (a qualitative comparison). To do this it has to reduce the inputs into some abstractly comparable measure. The robot would then compare, or in this case probably add up, all the normalized measures in order to ascertain which location is “best”.

At this point it becomes difficult to speak of the situation without anthropomorphising the robot - because the robot is going to weight the metrics by which ones it values more; which ones it “likes best”. So, a robot which cared equally about all the factors would be equally happy with the parking lot and the freezer, and might even choose between them randomly, whereas one with a slight preference for white or heat (a slightly higher scale factor on the weight given to the normalized value for that input’s state) will choose one or the other definitively. And it might even be possible for the robot to weight heat so highly, and brightness so low, that it chooses the dark room despite it catering to only one of the three objectives.
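To make that concrete, here’s a minimal sketch of the kind of weighted comparison I have in mind - the 0-to-1 satisfaction scores, the particular weights, and the “lit room is blocked off” assumption are all invented for illustration:

```python
LOCATIONS = {
    "cave":        {"temp": "cold", "surface": "black", "light": "dark"},
    "dark room":   {"temp": "hot",  "surface": "black", "light": "dark"},
    "parking lot": {"temp": "warm", "surface": "black", "light": "bright"},
    "freezer":     {"temp": "cold", "surface": "white", "light": "bright"},
    "lit room":    {"temp": "hot",  "surface": "white", "light": "bright"},
}

# Reduce each raw input to a normalized 0..1 "how satisfied am I" measure.
SATISFACTION = {
    "temp":    {"hot": 1.0, "warm": 1.0, "cold": 0.0},
    "surface": {"white": 1.0, "black": 0.0},
    "light":   {"bright": 1.0, "dark": 0.0},
}

def happiness(location, weights):
    """Weighted sum of the normalized measures for one location."""
    return sum(weights[obj] * SATISFACTION[obj][val]
               for obj, val in LOCATIONS[location].items())

# One robot that cares equally, and two with slight preferences.
robots = {
    "cares equally": {"temp": 1.0, "surface": 1.0, "light": 1.0},
    "prefers white": {"temp": 1.0, "surface": 1.2, "light": 1.0},
    "prefers heat":  {"temp": 1.2, "surface": 1.0, "light": 1.0},
}

# Suppose the lit room is off limits; rank the remaining spots.
fallbacks = [loc for loc in LOCATIONS if loc != "lit room"]
for name, weights in robots.items():
    ranked = sorted(fallbacks, key=lambda loc: happiness(loc, weights), reverse=True)
    print(name, "->", [(loc, round(happiness(loc, weights), 2)) for loc in ranked])
```

With equal weights the parking lot and the freezer tie exactly; nudge either weight and the tie breaks definitively, which is all I mean by the robot “liking” one place better.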

For even more fun, you can have the robot experiencing diminishing utility or increasing need - having the robot’s “preferences” shift over time such that it “gets tired of” the area it’s in, to the point that it “likes” another area better. Such a robot would move from one area to the other, as it grew tired of the area it was in, which would cause its level of “satisfaction” (the summed weights of its satisfied preferences) to shift until it decides it would be “happier” in one of the other locations.
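Here’s a standalone toy of that “gets tired of it” behaviour - the decay and recovery rates, the 0.25 “not worth moving yet” margin, and the two-spot world are all made-up numbers, purely to show the mechanism:

```python
SPOTS = {
    "parking lot": {"warmth": 1.0, "white floor": 0.0},  # warm but black
    "freezer":     {"warmth": 0.0, "white floor": 1.0},  # white but cold
}

def contentment(spot, weights):
    """Summed satisfied preferences, weighted by how much each is currently wanted."""
    return sum(weights[need] * SPOTS[spot][need] for need in weights)

weights = {"warmth": 1.0, "white floor": 0.0}
where = "parking lot"

for tick in range(20):
    for need in weights:
        if SPOTS[where][need] > 0:
            weights[need] = max(0.0, weights[need] - 0.1)  # getting tired of what it has
        else:
            weights[need] = min(1.0, weights[need] + 0.1)  # missing what it lacks
    other = "freezer" if where == "parking lot" else "parking lot"
    # Only bother moving if the other spot now looks noticeably "happier".
    if contentment(other, weights) > contentment(where, weights) + 0.25:
        print(f"tick {tick}: tired of the {where}, wandering over to the {other}")
        where = other
```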

This is the point when you protest, “But, but, the robot isn’t actually happy! These weighted measures can all be done mechanically without the robot being consciously aware of the process! It doesn’t have to have a subjectivized witnessificator to do this!”

And to this I would reply, you’re precisely right; and in fact I strongly suspect that many insects such as houseflies and cockroaches are little more than organic wind-up toys, reacting mindlessly to the changing stimuli from the environment around them. But, I would counter, the only reason that this can work is because the robot’s stimuli and reactions are, basically, hardcoded. Its likes and dislikes and actions and reactions are all pre-programmed…which means that it would be unable to adapt to a changing environment. If it suddenly became the case that all robots entering the lit room would be crunched by mallets, robots would stream endlessly to their deaths. They would never learn to avoid the room, because they would not have the ability to develop new aversions to things.

But, you protest, this is what evolution is for. Random mutation in successive generations of robots would possibly create some that were averse to rooms that were hot, white, and bright - and in fact, that’s exactly how evolution and survival of the fittest are supposed to work.

To that I would say yes, but - evolution only changes things between generations. In a single generation, in a single animal, a hardcoded robot could never alter its behavior. For things with very short lifespans and large populations that’s okay; they can adapt to threats in successive generations. But suppose you want your robot to change its mind now? To learn things?

To do this, it is necessary for the robot to be able to assess and alter its own preferences, its own calculation methods and weightings. And to do this, it needs to be able to make reasoned decisions about them - it has to know when it is “happy”, and be able to correlate that with changes in its environment. Only by doing that can it detect when conflicts between different preferences require an alteration in the weighting of various behaviors.
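In code terms, the smallest version of that update I can think of looks something like this - the mallet, the -100 “ouch” value, and the 0.5 learning rate are all made up, but the shape of the step (notice how happy you actually were, and shift your expectations toward it) is the point:

```python
expected = {"lit room": 3.0, "parking lot": 2.0, "freezer": 2.0}     # what the robot predicts
actual   = {"lit room": -100.0, "parking lot": 2.0, "freezer": 2.0}  # surprise: mallets

LEARNING_RATE = 0.5

for visit in range(4):
    choice = max(expected, key=expected.get)   # head where it expects to be happiest...
    felt = actual[choice]                      # ...and notice how that actually felt
    # Shift the expectation toward the experience - the step a hardcoded robot can't do.
    expected[choice] += LEARNING_RATE * (felt - expected[choice])
    print(f"visit {visit}: {choice}, felt {felt:.0f}, now expects {expected[choice]:.1f}")
```

A hardcoded robot just skips the update line and marches back under the mallet forever.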

So, from this we learn that any entity which can learn, which can adapt to completely unexpected phenomena and changes in its environment, must both have and be aware of its own “emotions” at an abstract level. This isn’t just something that could happen, it must happen, if you want your critter to learn and adapt to new situations in the span of its lifetime. And I think this also qualifies as having a “subjective witness”…whatever that is.

And I think we can all agree that there is selection pressure in favor of being able to learn and react differently to new situations, to new threats? Not sufficiently strong a pressure to kill off the cockroaches, mind you (sometimes the mindless decider can stumble upon a mindless set of behavioral rules that never becomes obsolete), but a pretty strong evolutionary pressure nonetheless.

ETA: oh yeah - note that none of the above requires (libertarian) free will. I don’t believe in the stuff, myself.

ETA2: Oh, and: I think that sperm are at the level of wind-up-toys too. So they don’t “feel” anything, for certain specific values of “feel”. (They obviously react to their environment, but that may/may not be the kind of “feel” we’re talking about.)

Why have a subjective consciousness that can pick from various paths and options if there is no free will? What advantages does that provide? I have never understood that.

Unless subjective consciousness just expands the number of options we are pre-destined to pick from anyway. In that case it would provide an advantage.

What is free will supposed to be free of? I have never understood that.

My will is entirely, 100% driven by my preferences, knowledge, awareness of my environment, mood, cognitive processes - that sort of thing. If you knew all of that, you could predict me perfectly; I’m not going to suddenly up and decide to like jumping off cliffs without working my way up to it via a process of determinable thought first. Of course, at the moment nobody does know all this stuff about my mind and mental state (putting aside theorised sky-gods), so to most people I’m pretty difficult to predict. But this doesn’t mean I’m unpredictable…or particularly have any desire to be.

And to think… all this wonderful debate came from such a tiny sperm…

The Miracle of Life indeed. :slight_smile:

If the physics of the universe means everything can be pre-determined with enough information, then the fate and outcome of the universe was sealed a long time ago.

The argument for subjective consciousness is that it allows you to pick from various paths so you can find the one that will best help you achieve your goals (which are largely lifted from evolution). How to obtain nutrition, healthy interpersonal and intrapersonal relationships, social status, etc.

So if there is no free will, what benefit is there of having a subjective experiencer who can pick from 10 different paths to find the one that will best help him/her achieve their goals? If everything is predestined, what advantage comes from the illusion of choice?

i agree with you that it is advantageous, and probably occurred through evolution, that a creature has knowledge of how happy it is. but why couldn’t it store its happiness level as an integer, like in a computer, with no subjective experience of it?

consider the question “how happy are you right now, on a scale from -10000 to 10000?” if you think about your subjective experience of happiness, it really is as simple as this question. at any time, you can reply with a number to describe your subjective experience of happiness. being tortured? -7894. eating toast? 17. having sex? 3562. all that is required is whether the current collection of impulses is helpful for survival or harmful for survival. why couldn’t a creature that is very similar to a human (can learn and adapt, etc.), instead of feeling happy or unhappy, just store how-happy-it-would-feel-if-it-was-able-to-subjectively-witness in an integer? there’s no reason that a creature actually has to care. it can just store how much it would have cared in an integer

there’s no reason for you to believe that a human body needs to feel pleasure or pain, rather than just storing that value in an integer and reacting to that integer appropriately, just like a computer-robot, without having any pleasure, pain, or care of what it is doing

“i feel bad, what are the best responses for this?” achieves the same thing as “the collective-impulses-integer is reporting -7894, what are the best responses for this?”
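as a toy illustration (using my own made-up numbers from above), the integer-bookkeeping creature is trivial to write down, and from the outside it behaves the same either way:

```python
HAPPINESS = {"being tortured": -7894, "eating toast": 17, "having sex": 3562}

def respond(stimulus):
    score = HAPPINESS.get(stimulus, 0)   # store how much it *would* have cared
    if score < -1000:
        return "struggle, flee, develop an aversion"
    if score > 1000:
        return "seek this out again"
    return "carry on"

for stimulus in HAPPINESS:
    print(stimulus, "->", respond(stimulus))
```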

I can? Really? What’s funny about this question is, from a purely subjective point of view, I cannot. I don’t even think I could quantify my happiness right now from one to ten. Even if you gave me very specific examples of how others have reported to feel at certain numbers, I could not. I absolutely lack a means to quantify my happiness in that fashion. That’s one of the key differences between emotions and computer algorithms—and being both a primate and a computer scientist, I have plenty of experience in both realms. There simply isn’t any mapping between complex emotions and pure numbers.

Confession:

This is only tangentially related to the discussion, but I have a few things I think you might be interested in. First off, the concept of a philosophical zombie, which is I think something you’ve been hitting on without maybe necessarily realizing it.

Also, the book Mindscan by Robert J. Sawyer, which is science fiction, but explores what it means to be human, and what it means to be conscious.

I’m not putting forward either of these as my own arguments, I just thought they might be interesting ideas for people to explore.

I will, however, point out that our conscious mind apparently plays much less of a role in our decision making than we think. A study was done which determined that our subconscious makes a decision a split second before we’re aware of it, and then our conscious mind comes into play and takes the credit for it. [Cite]

yes

What about innovation? If humans were just responding to stimuli based on some sort of instinctive “programming”, could we have ever invented tools? Without those, it seems that humanity would be kind of screwed, or more likely, we never would’ve evolved in our present form in the first place.

i’ve never heard of a philosophical zombie, and that link is very interesting. the philosophical zombie mentioned on that wiki page is exactly what i am describing. thanks for providing the link!

i’m going to look more into the arguments that have been made against the p-zombie arguments to see if any of them make sense to me. i am also interested in reading what chalmers has written about p-zombies challenging physicalism. and those who have disagreed with the arguments i have written would probably benefit from reading what chalmers has written on this, as i’m sure he has investigated these arguments far more than i have

I think you’re thinking about this from the wrong direction. Whose is the advantage and why would there need to be one? If everything is predestined, why talk about the advantage of the illusion of choice, as though there were any alternative to what we see right now?

Evolution isn’t a conscious process that deliberately seeks out the most informationally-dense way to proceed for any given organism in any given situation. It is dictated by the laws of physics: patterns that are able to replicate themselves do so and survive to future generations. We happen to occupy an evolutionary niche for creatures who apply a whole lot of extremely complicated computations to their sensory data before acting on it; the precise nature of those computations (which are our subjective experiences, feelings, desires, frustration, happiness, and every other human emotion) has been reinforced by natural selection over the years but is effectively arbitrary.

The illusion of choice happens to be part of our default mental framework, probably because it’s more useful than the alternative. :stuck_out_tongue:

I think it depends on what you mean by “instinctive programming.” AFAIK, “instinct” generally refers to specifically unlearned behaviors. It is our capacity for learning that enables us to develop tools (not to mention these nifty opposable thumbs). But then, our capacity for learning is just part of our programming as well, so yes, the ability to innovate is part of the machinery. Where else would it come from?

I’m not sure I understand the question. Your consciousness, that illusion of choice, is a pretty good evaluator of those ten different options, so it was selected for.

Talking about free will is always difficult, but I really don’t understand why you’d say this. Our “programming” is such that we can innovate.

i have no questions concerning free will. i am well aware that there is no free will.

back to the topic…

i have no reason to believe that a sperm does not have some sort of subjective experience… that a sperm does not actually experience the feeling of a goal… that a sperm does not experience some sort of happiness/unhappiness… or desire… pain/pleasure. it seems to make more sense to me that a sperm feels and has a subjective experience than that a sperm has no subjective experience at all. the complex impulses/stimuli/responses that make up me could have happened without any subjective experience, but i feel subjective experiences despite that. therefore, a sperm probably feels subjective experiences too, despite being made up of nothing but impulses/stimuli/responses

perhaps an impulse has a subjective experience quality to it, so all impulses are felt by something. the sperm feels all of its impulses as a subjective experience. does the impulse have to be electrical? does it need to be light energy? or does a subjective experience also apply to gravity or anything that causes motion? for example, when a rock falls, does every particle of the rock want to move? do atoms have a property of subjective experience, or is subjective experience a property of energy?

I’m not sure why you’d think that. Much of this thread has been spent explaining that subjective experience is a property of a complex interconnected set of brain cells. There is zero reason to believe such a thing would exist where a complex set of neural connections (or at least a structure sufficiently complex to compute the same functions) does not.

In other words, those complex responses are your subjective experiences. Why would you think they are anything else?

I agree that our capacity to learn and invent comes from the way that our brain is set up. I guess what I was trying to get at was that subjective experience and learning go hand in hand. If we were just responding to stimuli without consciousness, we’d be like insects, relying on pretty much just our bodies to survive. (Unless we were pre-programmed with instructions to make certain tools, but that’s probably a little extreme. :p) Given our current physical form, we probably wouldn’t last long - that is, if we even evolved at all.

As far as simpler things having subjective experience, I would think that it’s pretty obvious that they don’t. Everything that we know of that has subjective experience also has some sort of complex information processing center, e.g. a brain. Not just any brain either, but a highly developed one. In humans, if that processing center is damaged in the right way, they are brain dead, and their subjective experience (as far as we can tell) is over. The next closest things we have are computers, but for all their complexity, they still don’t come close to how complex the brain is, and as a result they are not conscious. Sperm cells and rocks have no such system that has been discovered.

I guess that’s the appeal of philosophy though. If you’re any good at it, you can argue all sorts of things. Heck, Descartes had to write quite a bit just to prove that he really existed, and not just in some kind of dream world either.

At this point, I think it comes down to how you define the phrase “subjective experience.” You want to define it as any stimulus/response system, I (and most others here) think of it as the emergent property of a complex nervous system, and don’t count a transistor, or oxygen atom, or bacteria, or sperm, as having them.

It’s the same benefit that you get from a robot that can avoid rooms with mallets in them - your robots last longer and avoid damage and trauma along the way. This is precisely the sort of benefit that evolution selects for - regardless of whether it’s occurring in deterministic robots or deterministic people.

What makes you think there’s a difference? You feel pain, and you react. The robot gets zapped with a -7894, and it reacts. What’s the difference?

“subjective experience” you say, but what the heck is that? Necessarily and by definition, the robot’s reaction to -7894 is subjective, and it has to experience the -7894 in some way or it would be completely unaware of it and couldn’t react. So by definition, your toaster has subjective experiences because it notices when you push the toasting lever down.

If you’re talking about “why do we feel pain directly” - the robot is reacting directly to the -7894. The number is taken straight into what passes for its cognitive processes and the robot reacts to it, just like we do. (Well, unless it bypasses the robot’s cognition entirely, like the knee-jerk reflex does in us.) And let’s note that the robot doesn’t just react to the number; “-7894” as a number is meaningless without a context. So, it’s actually -7894 units of pleasure, which the robot reacts to by avoiding it like the plague - just like we avoid massively unpleasant things.

Keep in mind also that the generically learning robot needs an objective way to measure and respond to stimuli, and it needs the ability to “consciously” review and react to its own mental state, so as to be able to modify its behaviors. This requires that the stimuli be something that the robot can “consciously” recognize that it has - because if it didn’t, it couldn’t react to it or learn from it. And it still needs to be an aversion that tends to affect the robot’s consciousness whether it deliberately thinks about it or not, because pain that you don’t notice unless you concentrate on it is not much of a deterrent. So, the aversion must force itself into the cognition’s notice, and force the consciousness to strongly consider being deterred by it. This is the pain you can’t ignore, folks!
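One way to picture that “can’t ignore it” requirement: below some threshold a stimulus just sits there waiting to be reviewed, but past it the signal preempts whatever the robot was deliberately doing. The -1000 threshold and the pebble-sorting task are invented for illustration:

```python
PAIN_INTERRUPT_THRESHOLD = -1000

def tick(current_task, stimulus_value):
    if stimulus_value <= PAIN_INTERRUPT_THRESHOLD:
        # Forces itself into the cognition's notice, regardless of the current plan.
        return f"drop everything and deal with the {stimulus_value}"
    return current_task   # mild signals wait until the robot gets around to reviewing them

print(tick("sort the white pebbles", -17))     # keeps sorting
print(tick("sort the white pebbles", -7894))   # this one can't be ignored
```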

“But the robot’s not experiencing the -7894 - not like we do!” My response to this? Canadians don’t experience pain either. Sure, they act like they feel pain, and will report that they do if asked, but they don’t really feel pain. 'Cause I said so!