Do you believe in a soul?

Mine is not any better - perhaps worse - but I consider any belief system involving the continuation of an essence after physical death (even if not a personality, per se) as accepting the concept of a soul. I don’t remember reading about any belief in a Christian-style soul, if that helps.

Let me start by saying that I agree with you in general. As for the above, why should we not expect to find consciousness in a purely material universe? And how do you define purely material? My definition includes interactions and processes, things that you can’t weigh directly. If consciousness is a kind of information processing (which makes sense) I don’t see why it is any stranger than addition.

Also, how do you define subjective and objective? If by objective you mean that all observers agree, then you have the unreliable eyewitness issue. Say we can read neurons and build an image of a memory from them. Would that memory be objective or subjective? It might accurately represent what is in a person’s brain, but not the reality that the person observed.

I think we can detect consciousness the same way we detect love. We deduce from a person’s actions that they are conscious, just as we deduce that they are in love. We know both things about ourselves. We might be wrong about another person, but our conclusion is not totally a guess.

That’s very interesting. That is a real way to attack this issue I hadn’t considered. I love that thought experiment.

I tell you what I would expect out of a universe that was purely material but allowed consciousness - I’d expect consciousness to be a property to some degree (from 0 to 100%) of everything in the universe. Or at least, of either matter, or energy, or both together. I would expect to find it predictably. I would expect conscious inhabitants of that universe to be able, under rigorous conditions, to create machines with high degrees of consciousness - after all, it’s a natural property of matter in that universe.

As the quest for artificial intelligence has taught us, engineering consciousness may be an incredibly, perhaps impossibly difficult goal to achieve. But again, it’s a problem of stupendous complexity, not one of the properties of the ions and molecules that make up the brain. There’s absolutely nothing to indicate anything special about the brain’s material constituents or the exchange of photons between them. What is special is the galactically immense number of connections, and the orders-of-magnitude larger number of combinatorial states those vast numbers of synapses can generate in concert. The number of calculations even a simple organism like a tiny roundworm can perform with a few hundred neurons far outstrips the processing power of any model of intelligence humans have ever created. A cockroach probably has more computational capacity than our most powerful supercomputers. They may not be as fast, but take each neuron, a fantastically complex signal processor in its own right, multiply it by all the different connections it can make with some number of neighboring neurons, multiply that by the different ways those connections can transmit signals (synapses don’t behave at all like simple logic gates), and you’ve got a computer of immense power.

If we can’t presently come close to reverse-engineering an organism that has only about 1,000 cells in its entire body, achieving something like consciousness may be a long way off. But again, that’s not an excuse to invoke some exotic explanation for the phenomenon of mind, illusory though it may be. There are other monstrously complex problems that have resisted all attempts to model them, yet few demand supernatural explanations for, say, multi-body gravitational systems; we simply recognize that nature can compute what we cannot.
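
To put rough numbers on that combinatorial argument, here’s a back-of-envelope sketch in Python. The synapse count and the on/off simplification are my own illustrative assumptions - real synapses have far more than two states - so take the exponent as a floor, not an estimate.

```python
# Back-of-envelope arithmetic for the combinatorial point above.
NEURONS = 302      # C. elegans does have 302 neurons
SYNAPSES = 7000    # illustrative order of magnitude for its connectome

# Even if each synapse were a crude on/off switch, the number of
# distinct configurations is 2**SYNAPSES:
configs = 2 ** SYNAPSES
print(f"on/off configurations: about 10^{len(str(configs)) - 1}")
# -> on/off configurations: about 10^2107
```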

Again, it’s the “you can’t eat my lunch” problem. Everything else in the universe can, in principle, be observed (the Heisenberg uncertainty principle, observer effect, and–perhaps–the event horizon of a black hole notwithstanding). There might be practical problems with observing it, but there is in all cases presumed to be a fact out there–a thing, an event, an interaction–that could theoretically be determined. If we knew everything physical about the universe, we would presumably agree about everything going on in the brain (and everywhere else).

But consciousness–the mind–is different because it involves not just information processing, but the way I process information, the way I perceive it. There is no way–even in theory, in principle–for you to know how I experience the color red, or the taste of chocolate. Even if we could somehow make your neurons fire in exactly the same way as mine do when I see the color red (and it may be that brains are too individual for that even to be possible–your neurons may be hooked up differently, and if they were hooked up like mine, you’d have my memories and think you were me) you would only know what you experience when your neurons fire that way. You can’t observe the qualia (the what-it’s-like) of my experience, because it’s my experience. That’s why you can’t eat my lunch. As soon as you eat it, it’s your lunch, and I don’t have a lunch (or I eat a different lunch).

If you believe (as I do) that the world is just bits of matter and energy and the interactions between and among them, then you can infer that the mind takes place in the brain and that if your neurons fire in the same pattern that mine do (and again, this may in fact not precisely be the case) then you must have the same experience I do. And I believe you would infer correctly. But there is no way to test it, no way for you to actually observe what red looks like to me or what chocolate tastes like to me, only–at best–what my seeing red looks like to you and what my tasting chocolate tastes like to you. It is, in fact, possible, that I experience nothing when my neurons fire in response to seeing red or tasting chocolate. There is absolutely no way for you to know.

This impossibility becomes clearer when we look at animals. We assume that you and I behave similarly enough and have similar enough brains that we likely have similar experiences. But look at the threads about animal cruelty–we can know whether a lobster reacts to being boiled, even whether it senses pain, but can we ever know how it experiences pain, whether it suffers? Can we know how a bat experiences sonar-detection the way we know what it is like to see or touch something? Even if we knew every connection of every neuron in a lobster or bat brain, we could not. Not without being a lobster or a bat.

Since nothing else in the universe seems (of course there’s no way to know) to have this fundamental subjectivity about it, this in-principle unknowability, it seems reasonable to be surprised by it. If the universe is purely material, how can we know every fact about every particle in a lobster, and still not know something about it? It seems like a contradiction. But it isn’t; the contradiction is (I think) illusory. We can’t know what the lobster experiences not because its mind isn’t physical (in the brain), or because the lobster has a soul, or because we do and it doesn’t. It’s just that to know what the lobster experiences, we’d have to become that lobster.

I think given time we will. Unfortunately, we’ll never know it for sure.

True, but so what? There are lots of things we’re forced to infer because they’re impossible to observe directly, sometimes for very practical reasons. That just means we’re not gods, not that those things are somehow special or different.

::Sigh::

I agree with you, Loopydude, and I’m trying to explain why you and I seem wrong even though we aren’t. I assume you can see the difficulty!

The thing is…I think we really do seem wrong (even though we aren’t), and you don’t.

Quantum mechanics and higher math may be full of things that we just plain can’t ever know. But I can’t think of anything else, certainly nothing in our everyday experience, that is fundamentally and in principle unknowable the way another person’s consciousness is. All the other unknowable things are just unknowable practically. We might never be able to get around the practical problem, but if we could…we’d know them.

The only exception I can think of–and maybe this is what you have in mind–is the persistence of unobserved objects. By definition we can’t know that they’re there without observing them (at least indirectly). Most people, I suspect, when they first realise this have a moment of surprise, a “Whoa, Dude!” reaction. But the rest of the time, you’re right: we take it for granted that the unknowable is true–that things are there when we turn our backs. Similarly, we can’t ever know that we aren’t in a Matrix-like illusion, but we accept that we aren’t without more than occasional wonder.

My whole point is that the mind-body problem isn’t any more profound than that: we can and should accept the reality that consciousness resides in the physical structure and behavior of the brain, and that other brains possess it as well as our own. But if stopping and thinking about it doesn’t make you stop for a second and go, “Whoa, Dude! You could all be zombies!” then you’re either more insightful than I am, or less.

First, let’s start with the assumption that the universe consists of objective reality. If I read you correctly, you agree.

Now, you are surprised about how subjectivity gets into the picture. I think the missing factor is information processing. Every living thing processes information in some way - even if trivially, by reacting to chemical stimuli. An amoeba processes information simply - and in the same way from amoeba to amoeba (assuming genetic near-identity). Is this subjective?

When we move up to more complex information processors, like our brain, each processes information in a slightly different way. Since we cannot exactly simulate the processing of another complex brain, we simplify and abstract, each in our own way, and we can’t know what it is like to be a lobster - or another human.

Coming back to the amoeba, we can write down a precise description of how it reacts to a stimulus. In that sense we objectively understand it. But each of us will process that exact description in a different way - so our deeper understanding is subjective.

Could a conscious computer understand another conscious computer with the same program? I’d say no, since consciousness implies the self-modification of our “program”. If the two computers had different inputs, and different experiences, their programs would diverge and they would no longer be able to objectively understand each other - which I would define as simulating the experience of another and getting the same output. Even pre-conscious animals have some of this behavior - dogs learn, and could an untrained dog “understand” the reaction of a trained one?
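
For concreteness, here’s a toy sketch of that divergence in Python - a made-up illustration, not a model of consciousness. Two copies of the same trivial “learner” are identical until their inputs differ, after which neither can simulate the other and get the same output:

```python
class Learner:
    def __init__(self):
        self.weights = {}  # the self-modifying part of the "program"

    def experience(self, word, good):
        # Each experience nudges an association up or down.
        self.weights[word] = self.weights.get(word, 0) + (1 if good else -1)

    def opinion(self, word):
        return self.weights.get(word, 0)

a, b = Learner(), Learner()   # same program, twice
a.experience("dog", good=True)    # a's history: a friendly dog
b.experience("dog", good=False)   # b's history: a dog that bit

# Same code, same question, different answers:
print(a.opinion("dog"), b.opinion("dog"))   # -> 1 -1
```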

Once brain programs get complex enough to self-modify, you lose the chance of objectivity. Self-modification is the key - channels and electronics that process information but do not self-modify can reproduce things exactly - so gzip followed by gunzip produces a perfect copy of the input, and every copy of gzip works the same.
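
The gzip point, made concrete with Python’s standard gzip module: a process that doesn’t modify itself round-trips its input exactly, run after run:

```python
import gzip

data = b"the same bytes, every time"

# Compress-then-decompress yields a perfect copy of the input...
assert gzip.decompress(gzip.compress(data)) == data

# ...and it does so on every run, because nothing in gzip's behavior
# depends on its own history of inputs.
for _ in range(3):
    assert gzip.decompress(gzip.compress(data)) == data
print("all gzips agree")
```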

Given this, subjectivity is not all that surprising, and requires nothing beyond the material to explain it. If someone wants to call our distinct programming our soul, so be it, but as of now it does not survive our bodies.

So, consider this: say we could build a computer that could simulate our brains and bodies perfectly. This would require a way to non-destructively scan our neurons. The code would simulate the connections, their strengths, the chemistry, and our sensors - eyes, ears, skin. Would a mapping of our internal structure onto this simulation be a transference of our “soul”? We transfer to an empty receptacle, so the issues about subjectivity I brought up before no longer apply.
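
As a minimal sketch of the kind of data such a scan would have to produce - the names and fields here are illustrative assumptions, and the chemistry and sensors are only waved at:

```python
from dataclasses import dataclass, field

@dataclass
class Synapse:
    target: int        # index of the downstream neuron
    strength: float    # connection strength, standing in for the chemistry

@dataclass
class Neuron:
    synapses: list = field(default_factory=list)  # outgoing connections

# A "scan" is then a list of neurons, plus simulated sensors (eyes,
# ears, skin) feeding signals in; the simulation replays activity
# through the connections.
snapshot = [Neuron(), Neuron()]
snapshot[0].synapses.append(Synapse(target=1, strength=0.8))
```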

This would be true even if there were no such thing as uncertainty. It’s from the very definition of consciousness.

Person X and Person Y have two distinct consciousnesses. Even identical twins undergo different experiences. Person X views his consciousness through the lens of X. Person Y, even if he had access to the consciousness of person X in some way, would view person X’s consciousness through the lens of Y. Thus, he can never truly know X’s consciousness as X experiences it. No Matrix required.

I think you’re overthinking this.

Maybe I’m just overexplaining it. I agree with everything you wrote.

OK, I get it! :smiley:

Me too. We all agree, thread over. :smiley:

Hooray! On to the rest of the Great Debates! At this rate, we should have them all resolved by next Tuesday lunch.

One of the first things software engineers (raises hand) realized in the quest for AI is that there is a qualitative difference between the exterior appearance of consciousness and real consciousness. Nobody has any idea where to begin on the second. It’s impossible to say whether it’s really complicated or not, because present courses would never lead to it.

I gotta call you on that. The thing to indicate anything special is self-awareness. It’s simply a different thing than an incredibly complex computer. What you’re suggesting is analogous to suggesting that if you keep adding force to an object, it’ll eventually go faster than the speed of light. Of course that won’t happen. It’s just a different animal.

I don’t know if you have anything to support that, do you? My guess (pulled right out of my…gut) is that today’s computers have a lot more computational power than a cockroach.

Say - there’s something. Gravity - no direct evidence for it - but galaxies full of circumstantial evidence. Good example. All matter (and probably energy) has gravitational pull. I could buy that everything in the universe has some self-awareness (however immeasurably small), or that nothing does. But saying that things have no gravity until you get enough of it together, and in a complex way, doesn’t make sense to me.

Working on a more quantitative response, based on a couple articles I’ve read already…give me time…

That’s frustrating, because I thought you and I had a common denominator we were discussing. But the paradox you’re addressing is not the paradox I see.

I am willing to accept that you are self-aware. If not, no biggie - I’m even more convinced that I’m self-aware. I don’t care if anything else in the universe is.

I also believe (but can’t prove, for the you-can’t-eat-my-lunch reason) that a rock is not self-aware. Nor a strand of DNA. That’s the paradox.

How could self-awareness happen in a universe of non-self-aware matter and energy? Saying ‘our consciousness is the energy zipping around our brain’ is missing the crux - I understand stuff. If I wrote a comprehensive program to know every word of English and to respond appropriately to every input of English - no matter how good the program was - there’d still be a fundamental difference. I understand English. The program doesn’t.

No matter how much of the nature of things it would explain, and how easily it would fit the model, saying either that the machine understands, or that I don’t (that it’s just an illusion), is intellectual dishonesty. I grok.

There is an ‘I.’ If the ‘I’ were illusory, I wouldn’t question whether I existed. In other words, the ‘fooled by an illusion of consciousness’ argument doesn’t work because it presupposes someone to fool, which defeats the argument. I am somewhere in myself which no non-organic matter can claim.

I don’t care. I’m willing to accept on faith that the universe is real, that I’m not experiencing some giant virtual reality, that the lobster either is or is not conscious (tough call on lobsters), and that there is something in the universe which has no consciousness and something that does.

You addressed the last part of my universal expectation list, but not the first. Incidentally, I agree with it - but it won’t be AI guys who manage it, it’ll be bio-engineers. Today viruses, tomorrow protozoa, in a century conscious, organized, organic animals. But the weird thing is we won’t be able to take credit for the consciousness. It’ll have done that on its own somewhere along the way.

Anyway, could you address my first part?

Very true. When I took AI over 30 years ago, the things they were working on were equation solving (integrals), generating directions, really good chess, speech recognition, and Winograd’s blocks world, with vision. All of that’s been done, but we’ve come no closer to modeling intelligence. They all involve processing outside information and doing something with it, which animals do also. Consciousness is something more.

The architecture of the brain is very different from that of computers, but I don’t think that is the crucial difference. Has anyone developed techniques to analyze and understand programs, and then had that program apply the technique to itself? GAs and other learning techniques tune very specific parameters - this kind of thing would go to the heart of the self-introspection that seems to distinguish us from animals. Our information processing power is so general that we use it to analyze our own thoughts - that is something no animal can do (except of course maybe chimps and dolphins - maybe). Now that would be an interesting dissertation project.
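
As a trivial gesture at the “apply it to itself” idea - and only a gesture, since retrieving source is nowhere near understanding it - Python can at least turn its introspection tools on its own code:

```python
import inspect

def analyze(fn):
    """Report crude facts about a function's source code."""
    src = inspect.getsource(fn)  # works when run from a saved script
    return {"name": fn.__name__, "lines": len(src.splitlines())}

# The analyzer applied to itself:
print(analyze(analyze))  # -> {'name': 'analyze', 'lines': 4}
```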

Fuzzy logic (real fuzzy logic - not what the common perception/hijacking of the word has become) has taken baby steps in that direction, but that’s all I can think of at the moment. Fuzzy logic keeps track of what has been relatively successful and unsuccessful in the past and generates “rule patches” that are constantly getting refined as the program experiences new things. I still don’t see how that would lead to consciousness.
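
Here’s a toy version of that success-tracking idea - not real fuzzy inference; the rules and the update rate are illustrative assumptions. Weights get nudged toward 1 on success and toward 0 on failure, which is the crudest form of a “rule patch”:

```python
rules = {"swerve-left": 0.5, "swerve-right": 0.5}

def reinforce(rule, succeeded, rate=0.1):
    # Nudge the rule's weight toward 1 on success, toward 0 on failure.
    target = 1.0 if succeeded else 0.0
    rules[rule] += rate * (target - rules[rule])

reinforce("swerve-left", succeeded=True)
reinforce("swerve-right", succeeded=False)
print(rules)  # -> roughly {'swerve-left': 0.55, 'swerve-right': 0.45}
```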

What’s GA stand for?

It’s a fair point that computers are usually used to solve ad hoc problems, and nobody’s ever given one the problem of thinking about itself until it understands itself. If it did, though - I still don’t see that it would truly be thinking about itself. The bits would flip on or off with tiny bits of electricity, and would well approximate a cohesive whole, but at the end of the day, it’s just a bunch of bits sitting there unaware of each other. It’s not a truly centralized system, it just acts like one. There’s no ‘there’ there.

The part of you that’s self-aware - do you think of it as one neuron? Or one impulse that travels around the brain keeping tabs on everything? Or just an illusion of centralization, and there really is no ‘you’ except the momentary flashes across synapses? Then where does the persistence come from?

For that matter - and I think this is an interesting thought experiment that could be helpful to those who believe in free will yet in nothing but observable matter and energy - how do you wave your hand? Impulses are sent down the nervous system. What caused that? An impulse from your brain. What caused that? I think a release of chemicals from a neuron - anyway, somebody explain the process. What caused that? You can probably see where this is going. At what point did we cross from your will to causing a physical action? I know DtC’s answer, I believe - your will is that first burst of energy. But if that’s the case, ‘you’ didn’t have any control over it. You didn’t ‘decide’ to make that energy - it was just a reaction to something. So I think either free will has to go, or the idea of ‘plain old’ matter and energy has to.

Or maybe something in quantum physics could bridge the gap to actually controlling something. I honestly don’t know - I’m just trying to point out different issues that I haven’t seen explained to my satisfaction regarding consciousness, free will, and an “it’s really as simple as all that” view of the human mind (and possibly other high-intelligence animals).

This might have been one of those GD threads in which I had my ass handed to me, but I recall thoroughly enjoying this conversation on the very same topic a few years back.

(apologies for this not being cogent to the direction this thread has taken; my Search function wasn’t working earlier)