What Is Consciousness?

We don’t need to invoke rocks; every human brain has around 10^26 atoms in it, more than enough to reproduce every other human mind state in the world, as well as that of everyone who has ever lived. It is nice to know that I’ve got Darwin’s, and yours (and Searle’s) mindstate encoded into the matter in my head, but it has no practical significance. I’m only interested in the one mindstate that is connected to my body’s input/output functions.

When and if we manage to decode the computation process that represents consciousness in our brains (and bodies), we will only be interested in that singular set of computations. The others will still be there, but they will only be significant as a source of possible error.

I’m actually kinda in the same boat (raft?)—if you look through my previous contributions to similar debates (not that I could recommend you do that), you’ll see that I used to be a very outspoken advocate for computationalism myself. In fact, I even used to think that the rock argument could probably be resisted on the basis of Aaronson’s complexity argument, although I now have my doubts. What swayed me, ultimately, and at least for the time being, was really the realization that there is no unique way to pick out from the causal structure of a system any given particular computation, or any set of experiences—those structural facts simply underdetermine the set of facts we have access to. It’s a bit like those paint-by-numbers sheets: without specifying what number corresponds to which color, there is no way to uniquely pick out what picture is being represented; and different specifications—different codes—pick out different pictures.
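The paint-by-numbers point can be made concrete with a toy example (the setup and names here are my own illustration, not from the thread): one and the same physical device, by De Morgan duality, counts as computing AND under one voltage labeling and OR under the opposite labeling. The structural facts alone don’t pick out which.

```python
# One physical device, two "codes": the same voltage behavior counts as
# computing AND under one labeling and OR under another (De Morgan duality).
# Illustrative sketch only -- device and codes are invented for this example.

def device(v1, v2):
    """The device's raw physical behavior: voltages in, voltage out."""
    return 'hi' if (v1, v2) == ('hi', 'hi') else 'lo'

CODE_A = {'hi': 1, 'lo': 0}   # read high voltage as logical 1
CODE_B = {'hi': 0, 'lo': 1}   # read high voltage as logical 0

def interpret(code, a, b):
    """What logical function the device computes under a given code."""
    decode = {bit: volt for volt, bit in code.items()}   # bit -> voltage
    return code[device(decode[a], decode[b])]            # voltage -> bit

# Under CODE_A the device computes AND; under CODE_B, the very same
# physical behavior computes OR.
for a in (0, 1):
    for b in (0, 1):
        assert interpret(CODE_A, a, b) == (a and b)
        assert interpret(CODE_B, a, b) == (a or b)
```

Nothing about the device itself favors one code over the other; the "picture" depends entirely on the key you bring to it.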

Well, I think that’s the first time I’ve seen anybody actually accept the argument, and just decide to bite the bullet, so props for that. Personally, I’d be quite troubled by the extreme panpsychism entailed by it, but I suppose there is nothing that logically precludes it.

However, even more troubling, I think (and of some practical relevance after all) is the implication that basically, all your experiences and beliefs fail to be veridical (with overwhelming likelihood): by a huge margin, you’re far more likely to actually be, say, a rock, or a tree, or a fish, or any other system in the universe just executing the program producing your experience (by virtue of executing every program), than you are what you take yourself to be. Indeed, it might even be that you are something quite different from rocks, fish, and the like, because these things don’t actually exist in the real universe, nor does anything even remotely like them, since the program giving rise to your experience is probably not, just by happenstance, one that produces a fairly accurate reproduction of the universe. This, it seems to me, nets you the worst aspects of extreme Cartesian skepticism, and not merely as a possibility you can’t refute, but one you should consider to be overwhelmingly likely.

Basically, that the way neurons interact is a computational process. Neurons send signals to each other, and there is nothing in that process that any other signal-processing device can’t do.

I have confessed, any number of times, that I’m simply not in your league. I’m grateful that you engage with me at all; you would be well justified to write me off as a parvenu. (Which…I mostly am.)

I do my best. If that isn’t good enough…then what?

Maybe consciousness is a side-effect of the specific shape of curvature in space-time due to electromagnetism in the brain.

Um… All of those things are extremely precisely understood, via Maxwell’s and Einstein’s equations. Also, space-time curvature is extremely coarse in scale, operating on planets and moons, not so much on molecules.

There might be some unknown influence involved, but it’s pretty certain those two aren’t the ones.

(It would, however, be an argument in favor of the possibility of Artificial Consciousness, since space-time curvature would respond in exactly the same way to electromagnetism in electronic circuits. I don’t mention this as a rebuttal to what you said; it’s only a thought which your idea suggested to me.)

I’m suspicious of any “purely physical” model, such as the heat and gravity Half Man Half Wit has alluded to (as comparisons, not as actual causes.) Physical causes tend to be detectable by physical instruments. If there actually were a consciousness fluid or consciousness chemical, I’m almost certain it would have been detected by now.

(And the more “undetectable” it becomes, the more “woo” it starts to resemble. That’s the critical failure, in scientific terms, of the definition of “soul” in the most common theological models: since it is defined as “undetectable,” it can’t be the basis of any valid speculation. If no one can know it’s there…how do we know it’s there?)

At a sufficient level of abstraction, you can conceptualize anything in these terms. But of course, that’s not all neurons do—they produce certain chemicals, change their physical state, and so on. Compare it to a combustion engine: its parts also send signals around, and perform certain state changes, all of which can be nicely captured in algorithmic terms. But of course, no such algorithm produces any driving force!

Please, stop trying to pull the flattery card. If you actually believed any of that, I’d imagine you’d give my arguments a little more credence.

Well, either you believe that your arguments trump mine—then, you should bring those arguments forward in their best form. Or, you don’t believe so—then, you should reexamine the justification of the opinions you hold. You can’t have it both ways, on the one hand deferring to the alleged superiority of my knowledge/arguments/whatever, on the other hand nevertheless disregarding that when it comes to your own views.

It seems to me that the two sides of this question, if I understand them correctly, can be fairly reconciled with explanations from complexity theory. In short, though consciousness may be entirely an informational construct, its complexity gives rise to emergent behavior that cannot be predicted exactly from its inputs. Given sufficient computing and analytic capacity, we could eventually create a convincing simulation of a mind. But not of any mind in particular; in fact, you likely could not simulate the same mind on the same machine twice in a row.

Emergence is a property of complex systems all around us, both living and nonliving. There’s nothing dualistic about it; it simply resists most, if not all, efforts at predictive analysis.

Kind of like the way Samsung can imitate all the functions of an iPhone, but it is still not an iPhone.

Some thoughts to try to get past the rock argument.
We navigate and survive in our environment, so we at least know that, despite our brain mapping to any number of computations, it definitely maps to navigation in a 3D environment and to survival in general.

Those computations can be confirmed because we have output from the computation that causes physical movement within the environment and thus a pretty obvious measure for whether the computation is relevant to this context.

Can we extend that idea to consciousness, basically treating it as another computational output that increases our chances of survival and interaction with the environment?

If we can at least show that it has a net positive effect on our interaction with the environment (e.g. social interaction), then we have at least somewhat of a connection between the consciousness computation and our environment, similar to our 3D navigation through the environment.

In that case, we could argue that the rock argument is unanswered but uninteresting, because we use output/interaction with the environment as our measure of which computation is happening.

RP,

Agreed, consciousness enables us to adapt to environmental changes.

Consciousness manifests itself without the aid of ‘mapping’ by an external source.

Rocks do not adapt to environmental change, they decay.

Crane

Agreement. (This is also my favorite answer to “Free Will” debates.)

If a fully simulated mind didn’t have true “consciousness,” it would have something else that was in all ways indistinguishable from consciousness…even to itself.

Agreed. If the rock is randomly produced, and changes states randomly by nature, then there may be an artificial mapping between its states and a conscious mind – but there is also a mapping between any five-card hand in a game of poker and a royal flush. That doesn’t mean that my lousy pair of sixes is a royal flush in any conceivably meaningful way.

No one else in the universe perceives it as one, and no one else in the universe acknowledges my mapping. It’s like Humpty Dumpty defining words according to his own private dictionary. He can do that…but he won’t be able to communicate.

Also, consciousness adapts to internal changes. A person could (for a short time) be put into total sensory isolation, and would still be able to think, to remember, to create, etc. After a time, the lack of real utility to consciousness would probably cause it to wither (after, first, a period of very uncomfortable insanity.) But, for a time, the mind can use itself as a source of input.

The obvious response, though, is that, via a fortuitous (and completely ad hoc) mapping, the rock’s decay would emulate organized thinking, just as every five-card hand can be mapped to a royal flush. However, the excessively finagled nature of that mapping makes it uninteresting, just as Humpty Dumpty’s dictionary is.
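The "finagled mapping" point can be shown directly (a toy sketch of my own, not the thread’s): take any arbitrary sequence of states – here, random numbers standing in for a rock’s decay states – and build a lookup table, after the fact, that maps them onto the steps of a counter. The mapping succeeds trivially, and all the computational content lives in the table, not the rock.

```python
# A post-hoc mapping can make any state sequence "compute" anything:
# here, random "rock states" are mapped onto the successive states of a
# counter. Illustrative sketch; the setup is invented for this example.
import random

rng = random.Random(0)
rock_states = [rng.getrandbits(32) for _ in range(10)]  # arbitrary decay states
target_trace = list(range(10))                          # a counter's states, 0..9

# Build the "finagled" mapping after observing the rock.
mapping = dict(zip(rock_states, target_trace))

# Under this mapping, the rock's decay "is" counting to ten...
assert [mapping[s] for s in rock_states] == target_trace
# ...but the table only fits this one run: it predicts nothing about the
# rock's next state, so the mapping itself is doing all the work.
```

Like Humpty Dumpty’s private dictionary, the mapping is constructed to fit, and no one else’s dictionary agrees with it.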

And, there’s the basic problem that the rock is not a computational system, although the mapper may be.

Crane

Or, conversely, everything can be a “computational system” – tree-leaves blowing in the wind, ants in the kitchen, nitrogen molecules in a windstorm, the “snow” speckles on an old-style TV set on a non-broadcasting channel – or real snowflakes – and so on.

I respect the reductio ad absurdum, but don’t buy the lesson it is intended to convey. (As noted above, it’s very similar to the Chinese Room in this regard. Some of us hold that the Chinese Room does comprehend Chinese.)

So…if the rock actually does emulate all of the informational properties of consciousness – and if consciousness is informational in nature – then the rock is conscious. If the rock is not conscious (and none of us believe it is) then either consciousness is not informational (which I don’t accept) or else the rock does not actually emulate all of the informational properties of consciousness.

The logic is elementary… The key issue – is consciousness informational? – isn’t!

What do you mean by informational in the context of consciousness?

Everything is informational, but everything is not computational.

Crane

Well, no, not everything is informational, exactly. As Half Man Half Wit points out, heat and the force of gravity are physical entities.

I’m using the word informational in mostly the same sense as computational. I think consciousness is in the signals that neurons pass around to each other. It all comes out of the network of information that is being passed back and forth by neurons and neuron clusters.

Nowhere in our brain is an actual fire, but we are conscious of heat. In our minds, the “picture” of fire is hot. It’s actually only a picture, a depiction, a “mapping.” But we feel it as fiery.

This, in my opinion, establishes that the information is what we “feel.” Others here hold that “feelings” must be material in some way, and can’t be explained by mere pictures, or images, or mappings. A picture of a volcano isn’t hot.

Yet…a pattern of neurons in our brains is not hot either…but we feel heat because of it.

Perhaps consciousness is the being part of a human being.

I don’t really think the Chinese Room does understand Chinese because it seems that understanding Chinese means that you take the input and use it to create/change an internal model of the world, and then produce output based on that model. To me, it seems that the modeling is the understanding part.

Just getting the right answer isn’t enough for understanding because there is a random number generator somewhere that would produce the exact same result.

You may be right… On the other hand, the danger of insisting on modeling is the fallacy of the homunculus. Where, exactly, in the brain does this modeling take place? It appears that the brain (like the room) distributes the process of interpreting widely, so that you can’t point to any one specific place it happens.

Eh? Random processes would give random answers, and not the “right answer.” Instead of answering, “Good morning, Mr. Meing” with “Why, good morning to you, Miss Wei,” it would answer “Wccz aztpgs mgzblrg zbkgkr.” Or something.

(Yes, in theory, a random system could turn out the right answer indefinitely. It’s like the argument that one can roll snake-eyes on a pair of fair dice any number of times in a row. Ten times. A hundred. A billion! But in reality…this simply does not happen. Grab a pair of freckled cubes and give 'em a tumble, and I’ll give you four-to-one odds you don’t roll a two. Faded?)

(There is a role for randomness in the overall process. It’s one way of keeping the system from giving exactly the same response every time to the same input. This was something Edgar Allan Poe did not know of when he argued that machines could never play chess. He said that, given the same board arrangement, a machine must always make exactly the same move. We know better today, because we’re aware of machine randomness.)
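The point about Poe and machine randomness can be sketched in a few lines (toy scores and move names of my own invention, not a real engine): a program can break ties randomly among equally-scored moves, so the same position need not always produce the same move.

```python
# Poe assumed a machine must always make the same move in the same position.
# Modern programs can break ties randomly among equally-scored moves.
# Minimal sketch with invented toy evaluations -- not a real chess engine.
import random

def pick_move(moves, score, rng):
    """Pick randomly among the moves that share the best score."""
    best = max(score(m) for m in moves)
    return rng.choice([m for m in moves if score(m) == best])

moves = ['e4', 'd4', 'Nf3', 'g4']
score = {'e4': 3, 'd4': 3, 'Nf3': 3, 'g4': -1}.get  # toy evaluations

# Same position, same scores -- yet the chosen move varies run to run.
picks = {pick_move(moves, score, random.Random(i)) for i in range(50)}
assert picks <= {'e4', 'd4', 'Nf3'}   # the bad move is never chosen
assert len(picks) > 1                 # but the choice isn't always the same
```

The evaluation stays deterministic; only the tie-break is random, which is enough to defeat Poe’s "same board, same move" assumption.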

I don’t believe that randomness, in the brain, is just a matter of flipping a coin for a yes or no decision. I see it more as the method of accumulation of data that is stored as shapes, like membership functions in Fuzzy Logic.

An example would be close to the cannon, field, pond, Pi problem. The brain is good at patterns and ratios. Assume it characterizes two sensory inputs as a circle and square of equal size. Over a lifetime, data is observed randomly and stored as the amount of data observed (square) and how much of that data falls within the circle. The ratio of the circle to the square is Pi/4. In a Fuzzy Logic calculation the square would be further from the pivot than the circle, so the ratio the brain would feel is Pi. This does not require language; the result is felt. The process is random but the result is deterministic. This is a stochastic process that cannot be mapped.

So, the brain is categorizing and balancing random data, not flipping a coin for decisions.
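The circle-in-a-square ratio described above is essentially a Monte Carlo estimate, which can be sketched directly (an illustration of the statistical idea only, not a model of neurons): random points land in a square, and the fraction falling inside the inscribed circle approaches Pi/4.

```python
# Monte Carlo estimate of Pi: random points in a 2x2 square; the fraction
# inside the unit circle approaches pi/4. The sampling is random, but the
# ratio it converges to is deterministic -- the point made above.
# (Illustrative sketch of the statistics, not a model of the brain.)
import random

def estimate_pi(n, rng):
    inside = 0
    for _ in range(n):
        x, y = rng.uniform(-1, 1), rng.uniform(-1, 1)  # random point in square
        if x * x + y * y <= 1:                         # inside the circle?
            inside += 1
    return 4 * inside / n   # circle/square area ratio is pi/4, so scale by 4

print(estimate_pi(100_000, random.Random(0)))  # close to 3.14159
```

No individual sample carries the answer; the ratio accumulated over many random observations does, which is the sense in which the process is random but the result is not.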

Bats, calculating moth target trajectories, are very good at matching parabolas, but lack formal training in math.

The brain balances proportions and ratios. Filius Bonacci was working with ratios of rates of exchange when he began to push algorithms and place value. Napier used ratios to create his log tables. Seems basic to the mental process.

Crane

First, let’s skip the homunculus argument for a minute. The reason I think modeling supports “understanding” is because of the additional information that can be extracted from the model based on a correct interpretation of the input. If the input is “it’s going to rain in a few minutes,” the person or system that understands that is able to extract additional details from their model, and it can influence future behavior. Maybe the person inside the Chinese Room opens up an umbrella as he continues to process input and output.

But, understanding isn’t consciousness, so we are really talking about a related but somewhat separate issue.

Regarding the homunculus: it’s only a problem if you make (IMO) some very simplistic assumptions about how the system works. There is nothing to prevent having a pyramid of processing, as low-level details get filtered and transformed and passed up to the next level; it’s fine to have a final level of processing. The homunculus is only a problem if we start at the top/final level of logical processing and assume that it is being fed the exact low-level detail that we started with (e.g. vision and picture).
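That pyramid of processing can be sketched in miniature (a toy illustration with invented stages, not a claim about actual neural architecture): each level filters and transforms its input before passing a summary upward, so the top level never sees the raw detail and no inner viewer is needed.

```python
# A toy "pyramid of processing": each level condenses its input before
# passing it up, so the top level receives a summary, not the raw detail.
# (Invented stages for illustration -- not a model of real vision.)

raw_pixels = [0, 0, 9, 9, 9, 0, 0, 8, 8, 0]       # bottom level: "retina"

def detect_edges(pixels):
    """Level 1: mark large jumps between neighboring pixels."""
    return [abs(b - a) > 4 for a, b in zip(pixels, pixels[1:])]

def group_segments(edges):
    """Level 2: each bright patch contributes two edges, so halve the count."""
    return sum(edges) // 2

def decide(n_objects):
    """Top level: a judgment based only on the summary passed up."""
    return 'two objects seen' if n_objects == 2 else 'something else'

summary = decide(group_segments(detect_edges(raw_pixels)))
print(summary)
```

The top-level `decide` never touches `raw_pixels` at all; by the time information reaches it, the "picture" has already been transformed into something far more abstract.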