To what extent is conscience an illusion?

Huh?!??
Not following you at all here.

Conscience is that state of understanding whereby you realize that your own selfish best interests, at least in the long run, converge with those of other people in such a way that you would do yourself ill to do harm to others, or even through inaction to allow harm to come to them that you could prevent. We tend to internalize it as felt compassion or empathy, but at bottom it is what we really want: that the outcome of our actions benefit others, because we like the results.

What do you mean by “gives transgressors the impression that they could have chosen to act differently”? How does it benefit this person, “the transgressor”, to understand what his or her options were, and how does that in any way, shape, or form differ from anyone else’s understanding of their options? What is the purpose of labeling someone “transgressor”? Does it mean anything other than “someone who chose other than I believe I would have chosen in that same situation”? If not, what the fuck is a “transgressor”?

What do you mean by “makes everybody believe they are able to opt for right and abjure wrong”? Do you mean to imply that people are NOT able to opt for whatever options they choose to opt for, which presumably would be the ones they deem right? Are you denying the presence of choice? Are you psychoanalyzing people in the aggregate and specifying the structural function of ‘conscience’ in terms of what it collectively causes us to believe about ourselves or something?

Where does consciousness go when you die? It’s an illusion.

I.

What part of a foot-long plank of plywood is found at the 15" mark?

Consciousness doesn’t “go somewhere” when you die, it simply does not exist at the coordinates that lie beyond the timeframe of life.
II.

Consciousness can’t be an illusion. In order for anything to be an illusion, it has to be an illusion to something. My computer does not have the illusion that it is creating this post, or that it is the creator of a folder full of Excel documents; indeed, it entertains no illusions at all, insofar as it lacks a consciousness. (Future computers may be conscious, but I’m pretty confident that this one isn’t). Your apparent consciousness could conceivably be an illusion to me, just as my computer’s “consciousness” might be an illusion to me — I can’t prove you’re really conscious. But my consciousness can’t be an illusion to me, insofar as the “me” to whom it would have to be an illusion would have to be conscious in order to perceive the illusion.

What do you think the word “illusion” means?

Not at all. Consciousness is the observable manifestation of the survival instinct. We want to remain conscious, which means we want to continue to live. When we die, there is no longer a mechanism by which consciousness can exist, nor a reason for it to. Death ends it (at least as we understand it); persistence thereafter is just wishful thinking.

Which kind of vaguely relates to this thread. The extension of consciousness beyond physical existence is nonsensical, but the myth allows for serious distortions of conscience. My actions are based on my survival and comfort, in concert with my social group, in terms that make sense here and now. Adding in the notion that I should be concerned about events beyond my own or someone else’s survival can severely muddle the social algorithm. If my physical survival loses significance because I “know” that I will continue to exist after death, the natural limits on my behavior can be loosened to intolerable extremes.

It sounds non-falsifiable to me - if consciousness weren’t an illusion, what would it look like?

It is a category error. If consciousness is an illusion, then we cannot be sure it is real. Nor can we determine if anything is real, since we experience everything through consciousness - including illusion. There is therefore no distinction between reality and illusion, and saying “consciousness is an illusion” is a meaningless statement.

Regards,
Shodan

It looks like the observable stimulus/response pattern exhibited by creatures. Behavior itself is the evidence of consciousness. But this thread is not about consciousness; it is about conscience: the complex calculation related to the social environment, how individuals make decisions based on perceived effects WRT the local community of beings, and how an individual responds to the greater effects of their decisions.

In my mind (heh), the best argument that consciousness is real is exactly how the OP phrased it - is it an illusion? No. Because of what *illusion* means.

An illusion requires a perceiver. Unlike the ‘tree falls in a forest and nobody’s there’ trope, an illusion *definitely* only exists if there’s somebody there to get it wrong. That picture of an old woman/pretty young lady? That’s not an illusion until somebody’s looking at it. Lines that are the same length but look different aren’t illusions unless and until somebody thinks the lines look like different lengths.

So the very question of whether consciousness is an illusion implies there’s somebody there to fool. You can’t trick a computer into thinking it’s conscious. It would never ask the question, and never get it wrong. Whenever a parsimonious scientist claims consciousness is illusory, s/he also is implying there’s something there to fool. What that thing is must be some scintilla of consciousness.
OTOH, conscience is a construct. It’s whatever society agrees upon as ‘good behavior,’ together with an individual’s desire to behave that way because it’s the right thing to do. It exists in most people, inasmuch as people are willing to sacrifice a bit for the good of the larger group. Some people don’t have it - sociopaths.

Conscience is thus a product of evolution, just like religiosity (cf.).

Green is the way humans perceive light with a predominant wavelength of roughly 495–570 nm.

Moral discriminations vary in time and space.

May I have a source for this definition of conscience? I would like to see it in a more rigorous formulation so that I can really understand it.

It is a personal shortcoming of mine that I should find it hard to extract clear ideas from such dithyrambs. I would rather answer questions addressed in a terse, exact form.

:dubious:
Aah, here we go…

Interesting. Sir, please elaborate on the concept of “transgressor”, preferably without reference to external authority, if you would? That is to say, how a “transgressor” differs from any other person who selects from options, or (alternatively) from any other person who (for some as-yet-unestablished reason) cannot do so.

For an evolutionary right answer, an animal might save a lot of computing time by guessing. Those that guess right survive. Those that don’t, perish.

This is the way insects have evolved, and it’s a pretty good system.

It doesn’t apply to mammals, who have evolved a pain-and-pleasure complex to punish and reward behavior.

So…who is being punished and rewarded? The entire mind. The “self.” The decision-making system is being trained – just as we train dogs and educate children.

There would be no conceivable point to training someone to make better decisions if there weren’t actual decisions being made.

And people really do make decisions, every day of their lives. The idea that it’s all an illusion is contrary to real-life experience.

The idea that it is partly an illusion is supported by real-life psychological experiments, which show that we have an unconscious mind, and that we are unaware of a large part of our own decision-making processes. But the unconscious mind is still part of the person’s self. It isn’t a “god in the machine” or in any way “external” to the self. It’s just a part we can’t consciously see.

If free will is an illusion, what’s the point of railing against the idea of a “conscience”? I mean, other people have no choice about what they do, right? So why are you trying to change their behavior? Or are you not trying to change their behavior, but simply acting in the way you must – which is trying to change the way people behave?

I’d argue that human beings really are machines. That is, our bodies and brains are made up of ordinary matter arranged in particular ways, and if we were able to take carbon and hydrogen and oxygen and so on and arrange those atoms in a particular way we’d end up with a human being. And this is what the process of human reproduction and development actually does – our bodies take in atoms arranged in particular ways, and we rearrange those atoms, and a human being pops out at the end, and that human being develops over time, given various inputs, into an adult human.

So arguing that humans (and animals) are “machines” doesn’t mean we’re like 1950s science fictional robots. Calling us machines doesn’t dehumanize us, any more than calling us animals dehumanizes us. We’re a particular type of machine that we’re completely unable to create technologically; only by the use of our evolved reproductive organs are we able to create humanlike intelligences.

An animal is a machine. Given an input, we get an output. Does that mean that when my puppy pees on the floor I should have no reaction? Of course the dog didn’t exactly make the choice to pee on the floor. But if I don’t want the dog to continue to pee on the floor, I can change the inputs to the dog and expect different outputs. I can train the dog to pee outside. Obviously this is only possible because the dog already has some inherent instincts that tell the dog that it’s better to pee outside instead of inside its den. It doesn’t do this because it understands the concept of a house or because it understands germ theory. It just feels uncomfortable when it pees in the den, and satisfied when it pees outside the den. And this is because its wild ancestors lived in burrows, and the burrows would fill up with poop and pee if the animal hadn’t evolved an instinct to pee outside the burrow.

If I never give the dog the chance to go outside it’s going to pee inside, no matter that it has an instinct to pee outside. Same thing with human behavior. We have instincts to cooperate with our social groups, but those instincts can be modified by modifying the human social environment. Raise humans in places where they have to fight to get enough food and shelter, and they’ll grow up aggressive towards other humans. Change the environment so that cooperation is rewarded, and they cooperate. But note the funny thing: humans are usually aggressive toward other humans…by acting in groups. When humans fight they don’t just start attacking other humans at random; they cooperate with their buddies to act violently against others who aren’t their buddies.

To really get high levels of human violence toward other humans requires an astonishing level of cooperation between humans. You can’t invade Russia and massacre the Jews without getting millions of humans to agree to work on that project with you.

The point is, if you’re just a mechanism who has no control over what you do or say, why are you here posting meaningless words to try to get us to change what we think? What’s the point? If you really believed there was no such thing as free will, why would you bother arguing against free will? You’re trying to change our beliefs, why do you expect to be able to do that? We’re just robots, just like you’re just a robot.

The answer, if we’re “just” “robots”, is that we’re not like the kind of simple mechanisms we can manufacture, even complicated ones like Siri or Watson or Wolfram Alpha. We don’t understand exactly how the human brain works, and maybe we never will. As the saying goes, if the human brain were simple enough that we could easily understand it, our brains would be so simple that we couldn’t.

So when we tell our kids not to punch each other, we’re doing it because we want to live in a house where our kids aren’t punching each other all the time. We do that because we’re trying to change the inputs in order to change the outputs. Social norms are just a larger attempt at the same sort of behavior modifications. It’s not an illusion, any more than a memory or an emotion is an illusion. Yes, we aren’t fully in control of our behaviors; people frequently do things even though they know they’re being irrational. I just ate three chocolates sitting at my desk, even though I’m already overweight. If I were fully in control of my behavior I wouldn’t have done that. I have a collection of instincts and desires and reflexes that I’m only loosely in control of. I see a pretty girl, and I can’t help wanting to have sex with her. But I can stop myself from grabbing her and ripping off her clothes, because I know that bad things would happen to me if I did that. Her social allies would use violence against me to stop me, and my social allies would distrust me in the future.

And that’s all we mean by “conscience”. I have a set of partially learned and partially instinctual rules of thumb for keeping myself out of trouble, which sometimes work and sometimes don’t. The feeling I get when I consider grabbing a random woman, the “that would be a bad idea” feeling, is just how it feels from the inside to have that sort of learned/instinctive behavioral control. Same with “feeling hungry” when I need food, or “feeling thirsty” when I need water, and on and on.

Is hunger an illusion? I mean, it’s just nerve signals sent from my digestive tract to my brain. A neurologist could attach electrodes to my brain and cause me to feel the sensation of hunger when they flipped a switch one way, and not feel it when they flipped the switch the other way. If I felt hunger even though I was actually full because an electrode was stimulating my brain in a certain way, then I suppose we could call that feeling of hunger “illusory”. Except I’m really feeling the sensation; the sensation isn’t illusory. Same thing with the “you’ll feel bad if you do this thing that hurts this person” feeling. Sometimes I feel that sensation even though rationally I know I shouldn’t, and sometimes I don’t even though rationally I know I should. Sometimes I’m a jerk to people even though I know rationally I’d be better off not being a jerk, and vice versa. Other times I’m nice to people even though I don’t feel like being nice to them, because rationally I know being nice to them would make me better off. So much for the illusion of free will.

You can’t possibly “save a lot of computing time by guessing” because there’s literally no other type of computation for an agent exploring a world they inhabit. Guessing is the only game in town.

If agents didn’t “guess”… then what would they be doing? Would they magically have perfect knowledge again? Impossible within a deterministic system.

Guessing is literally all we do when we compute whatever it is that our brains compute to navigate our surroundings. We don’t have perfect knowledge, so we are destined to guess. An agent is always “guessing” based on its imperfect model of the world it inhabits. Everything it does is a guess based on imperfect information, since its brain is smaller than the universe it inhabits. It never knows anything for absolute certain, so it is forced to guess.

These guesses are computationally expensive.

Humans are constantly making these kinds of guesses. Essentially everything we do to navigate through the world is a guess at differing levels of confidence. We guess what people are thinking. We guess whether people truly like us or not. Historically, we had to guess whether this dude was going to stab us in the back when we turned our backs. We don’t have perfect information on whether that asshole will pull a knife when we’re walking away. We guess. That is what “imperfect information” is all about. Avoiding a knife in the back is very important for the survival of our genes, so we want to get the answer right. But we don’t know perfectly. We guess.

And we need a fucking big brain to guess right in social situations. Guessing well is computationally expensive.
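One way to make “guessing at differing levels of confidence” concrete is as a Bayesian belief update over imperfect evidence. Here is a minimal sketch of my own, with made-up numbers; it isn’t anything from this thread or from Game 2.0:

```python
# Hypothetical sketch: a "guess" as a belief revised by an imperfect cue.
# All numbers are invented for illustration.

def update_belief(prior, p_cue_if_hostile, p_cue_if_friendly):
    """Bayes' rule: revise P(hostile) after observing one cue."""
    numerator = p_cue_if_hostile * prior
    denominator = numerator + p_cue_if_friendly * (1.0 - prior)
    return numerator / denominator

belief = 0.10  # weak prior hunch that this dude is hostile
# A cue (hand drifting toward a knife) is more likely from a hostile
# stranger (0.7) than from a friendly one (0.2).
belief = update_belief(belief, 0.7, 0.2)
print(f"P(hostile) after one cue: {belief:.2f}")  # 0.28 -- still a guess
```

The answer never reaches certainty; the agent just acts on the best confidence it can afford to compute.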

This is not even slightly the way insects have evolved, based on the normal hyu-mon definition of “guess”.

Insects don’t estimate. They don’t suppose. They don’t conjecture. They don’t have a model of the world inside their head that updates over time.

They work on automatic triggers. A fly doesn’t “guess” that, oh well, since there are a lot of cows below, there will probably be shit. It doesn’t make computationally expensive inferences of that sort. It’s a machine that lands on things that smell like shit without any deep consideration of the matter. Sometimes it lands in a Venus flytrap, but it didn’t “guess” wrong. It will literally do the exact same thing given the exact same stimulus. Even after a close call, it won’t learn anything. It has no model of the world inside what passes for its brain. The agents in Game 2.0 are quite literally smarter than those flies, in the sense that they can learn more about their environment based on past experience and past mistakes.

A fly doesn’t “guess” that it’s about to be swatted. It’s a machine that moves quickly whenever a shadow crosses by. The shadow can come from an incoming hand, or from an eagle a hundred feet away. The fly isn’t guessing whether it’s one or the other. The fly doesn’t care what the shadow is. It moves regardless. It always moves when there’s a shadow, without guessing at the cause. The fly evolved a heuristic that works well without any need for deep computation of any actual problems.
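To sketch the contrast (a deliberately cartoonish model of my own, not anyone’s actual fly neurology): the reflex is a fixed mapping from stimulus to response, with no internal state to revise.

```python
# Illustrative only: a hard-wired reflex has no memory and no world-model.
# Same stimulus in, same response out, every single time.

def fly_reflex(shadow_detected: bool) -> str:
    # No inference about WHAT cast the shadow -- hand, eagle, or cloud.
    return "dart away" if shadow_detected else "keep feeding"

# A close call teaches it nothing; the mapping never changes.
assert fly_reflex(True) == "dart away"
assert fly_reflex(True) == fly_reflex(True)
```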

The fly doesn’t need to “guess”. Guessing is a much more sophisticated process. The fly works by spamming the environment with as many copies of itself as it can make. It’s a completely different evolutionary strategy.

This is a peculiar system. It works well enough for insects. But the insect system is terrible for any creature that doesn’t reproduce the way insects do. Insect-like decision-making would mean almost instant extinction for any energy-intensive creature with few offspring.

For larger animals that put more resources into sustaining their lives, there is a real need for guessing: computation to distinguish threat from food. For larger animals that take care of their own young, there is even more need for guessing and computation, to judge whether this creature actually is its offspring. Parents that leave and come back run a giant risk of parenting the wrong creature. This actually does happen to birds quite a lot, with their smaller brains that are more easily fooled. It doesn’t happen so often with mammals.

Big mammals are better at guessing. They’re better at inference. They have bigger brains to help with all that guessing. With larger size and fewer offspring, they face a different evolutionary challenge. Learning about the world is essential. Guessing better is an absolute requirement.

Real guessing is computationally expensive.

The agents in Game 2.0 make “decisions”. They reproduce and evolve. They learn about their environment. They do not blindly repeat past mistakes. The very simplest form of this agent-based learning is actually not that hard to program.

This is Game 2.0’s version of the pleasure-pain response. They learn as they go along. The ones that learn succeed. But it’s still deterministic. Every time the program is run, the same ones learn the same things, and succeed in the same way. It’s not nearly as sophisticated, naturally, as the pleasure-pain response in mammals. Each simulation is run only a few hours.
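For concreteness, here is a minimal sketch of what “the very simplest form of this agent-based learning” could look like. The action names, reward values, and update rule are my own assumptions for illustration, not Game 2.0’s actual code; the point is only that the learning is fully deterministic.

```python
# Hypothetical minimal learner: nudges action preferences using a
# pleasure/pain signal. Deterministic: every run learns the same things.

REWARDS = {"eat": 1.0, "touch_fire": -1.0}  # assumed pleasure/pain signals

class Agent:
    def __init__(self):
        self.value = {action: 0.0 for action in REWARDS}

    def choose(self):
        # Deterministic policy: highest learned value, ties broken
        # alphabetically.
        return max(sorted(self.value), key=lambda a: self.value[a])

    def learn(self, action, reward, rate=0.5):
        # Move the action's learned value toward the reward it produced.
        self.value[action] += rate * (reward - self.value[action])

agent = Agent()
for action in sorted(REWARDS):   # try everything once, deterministically
    agent.learn(action, REWARDS[action])

print(agent.value)     # {'eat': 0.5, 'touch_fire': -0.5}
print(agent.choose())  # 'eat' -- the painful action is avoided from then on
```

Rerun it and the same agent learns the same things in the same order, which is exactly the sense in which it is still deterministic.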

If we could run Game 3.0 with better physics, for several years of game time, you’d see animals that had evolved a pain-pleasure response – or from your perspective something that “gives every appearance” of a pain-pleasure response. It would be a more advanced adaptation than what they can do now.

I don’t know your “real-life experience” so I can’t judge it firsthand.

I do know that based on the “real-life experience” that people sometimes have, they sometimes think it’s a good idea to blow themselves and others up, or drink poisoned sugar water, or believe that all of biology is wrong because they think that a book from the Bronze Age that they never actually read tells them that science is wrong. So I’m not inclined to trust anyone else’s “real-life experience”, especially if they can’t even define their terms. As often as not, real-life experience is idiotic.

But I can tell you my own real-life experience. I don’t experience free will. There is nothing inside me that sounds anything like what they say. I have no idea what people think they mean by the term.

Every definition I’ve ever seen is either 1) completely subjective and not shared by me, 2) completely incoherent, or 3) completely fucking stupid.

Now, I make “decisions”. I make decisions in the same sense that agents in Game 2.0 make decisions. My mental process is obviously more complicated, but there’s no “free will” lurking about anywhere. You can trumpet the personal experience of yourself and your predominantly religious fellow-travelers, but just like everyone before you, you retreat to the subjective explanation when all the other illogical arguments dissolve away. So while I’m not going to deny that you feel something that you’ve decided to stick a highfalutin philosophical label on, I’m not going to take your experience as any kind of credible evidence. Your experience is real in some sense. It’s the strange philosophical deductions you contrive that are suspect.

This should read: for several billion years of game time.

We can’t run this simulation, but it’s the core of the debate.

The fact that people often make bad decisions is not an argument against the fact that people make decisions.

I have to doubt this. If you’ve never stood in front of a well-stocked buffet and wondered, “Wow, what should I start with,” then you can’t possibly be a real person.

Sure, but I didn’t make that argument.

My argument is that people make decisions in the same sense that agents in Game 2.0 make decisions. Much more complicated, but the same sort of mechanical process.

For me, there is no alternative to this belief. Nothing else is sensible. People can claim “free will” but I don’t know what that means. What’s the physics of free will? How does it work? Nobody ever answers this with anything I can understand. At worst, they can’t even answer with anything that’s not self-contradictory. They can cite their personal experience, but I don’t tend to believe people about their own evaluations of their internal experience.

When people turn the same questions to me, I can answer. What’s the physics of our minds and our decisions? It’s physics as we know it. Just physics, nothing more. (Some people squeak “quantum” right about this point in the conversation, as if it meant something, but it really doesn’t. There are plenty of deterministic interpretations of quantum mechanics, including the MWI.)

My processors are limited, and occasionally they’re overwhelmed by available stimuli.

There is nothing mysterious about this to me.

What does that have to do with free will? Even without free will, our brains make decisions. Computers make decisions too. I’m with Hellestal - I don’t even see what could be mysterious about this.

I’m just at a loss how to communicate with someone who has no sense of self-will.

It’s like talking with someone who is color-blind, who argues that my sense of “red” must be an illusion.

Anyway…this is a pointless diversion, so I’ll drop it.

No need to drop it just yet.

But self-will, or the ability to make decisions, is not controversial. On the other hand, the concept of free will, in its traditional contra-causal form, can’t exist because it’s an incoherent concept.

Some philosophers try to re-define free will so that it means just the ability to make decisions, and then they can say that we have that. OK, but what’s under discussion is the fact that our physical brains are alone responsible for these decisions (using external inputs), and that fact contradicts what most people have traditionally meant by “free will.”