What Is Consciousness?

You might be right; a lot of people seem to think that. Martin Gardner always struck me as a very clear-thinking chap; here is his review of Douglas Hofstadter’s I Am a Strange Loop, in which he rejects this self-referential idea of consciousness.

Despite my admiration for the late Mr Gardner, I think he is wrong; the secret of consciousness is only mysterious because we can currently only experience it from the inside, and in due course it will be something that we can understand and manipulate with some considerable degree of competence.

Well, part of the joy of this is that we already have created those .05% conscious machines: microprocessors that govern astonishingly complex operations, taking into account conflicting criteria. We can simulate an ant, for instance – if not the whole ants’ nest.

I’m told (but don’t know) that they’ve simulated human neurons, and even small clusters of neurons.

That’s one of the nice bits of reductionism: it provides a recipe for construction.

I don’t know. Did you imagine I thought I did?

How do twelve very intelligent individuals, having a committee meeting, combine into one of the stupidest decision-making entities ever known? (The whole is vastly less than the sum of the parts!)

That’s the second time you’ve accused me of committing the fallacy of the homunculus, and I think you’re being very unfair to me in this. I don’t.

In your example, our perceptions are different because two different members of the committee show up to the meeting drunk. Since they’re the members reporting to the committee – the only members who can inform the committee of the data they have gathered – then the entire committee has the perception they bring.

It doesn’t have to be an entity with lesser consciousness: if your eye gets put out in a tragic potato-chip accident, you lose binocular perception. A mere physical change completely alters your perception of the world. This has nothing to do with any homunculus at all.

But they are vastly less conscious of their own internal states. They aren’t equipped for navel-gazing, whereas we, as fully conscious entities, are.

I disagree. A mere photocell is “conscious” of light and dark. A feedback loop could be built to measure how “certain” it is of light and dark. A dimly lit room might or might not trigger the cell, and the feedback loop measures this. Further loops could assess the confidence level of the previous measurements. This starts to approach a very primitive consciousness, and an awareness of the system’s own awareness.
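To make that concrete, here is a minimal sketch of the photocell-plus-feedback-loops idea. The class names, the threshold, and the particular certainty measure are all illustrative assumptions of mine, not a claim about how such a device would really be built; the point is only structural, with each loop taking the layer below it as its object.

```python
# A minimal sketch (my own illustration) of the photocell-plus-feedback idea.
# Names, thresholds, and the certainty measure are assumptions, not a real design.

class Photocell:
    """Reports 'light' or 'dark' by comparing brightness (0..1) to a threshold."""
    def __init__(self, threshold=0.5):
        self.threshold = threshold

    def read(self, brightness):
        return "light" if brightness >= self.threshold else "dark"


class ConfidenceLoop:
    """First feedback loop: estimates how 'certain' the cell is of its verdict,
    from how far the brightness sits from the trigger threshold."""
    def __init__(self, cell):
        self.cell = cell

    def assess(self, brightness):
        verdict = self.cell.read(brightness)
        # A dimly lit room lands near the threshold, so certainty is low.
        certainty = abs(brightness - self.cell.threshold) / max(
            self.cell.threshold, 1.0 - self.cell.threshold)
        return verdict, certainty


class MetaLoop:
    """Second feedback loop: keeps a running view of how confident the
    first loop's assessments have been overall."""
    def __init__(self, loop):
        self.loop = loop
        self.history = []

    def assess(self, brightness):
        verdict, certainty = self.loop.assess(brightness)
        self.history.append(certainty)
        mean_certainty = sum(self.history) / len(self.history)
        return verdict, certainty, mean_certainty


if __name__ == "__main__":
    meta = MetaLoop(ConfidenceLoop(Photocell()))
    for b in (0.9, 0.52, 0.1, 0.48):   # bright, dim, dark, dim
        print(b, meta.assess(b))
```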

Not on a discussion board. When people start getting snarky, for instance, I re-assess my evaluation of their trustworthiness. (I hasten to add that this doesn’t happen when we merely happen to disagree. Thoughtful disagreement is wonderful. Snarkiness, sarcasm, derision, etc. are all, in my experience, highly correlated with uselessness. If you had said, “That’s a really fucking stupid yardstick to use,” I would probably not bother responding to you at all, and we would lose any possible utility to each other. I’m very happy to disagree with you – it’s the only way I’m gonna learn anything!)

The problem is that this leads in the direction of “magical thinking.” It suggests that consciousness is not a physical or material phenomenon, but something woo-ish. This was the trap that Roger Penrose fell into. It was the inspiration for the original “Chinese Room” notion, for, as I understand it, the guy who came up with that idea was trying to argue that the “Room” did not understand Chinese.

We Strong AI proponents have the last laugh, for we hold that, yes, the Room does understand Chinese, exactly the same way Yufi Meing does. The individual neurons don’t have a clue, but the overall brain knows the language.

(I also hasten to add that I don’t think you actually engage or subscribe to magical thinking. I can hardly take exception to being called a homunculator and then hurl that term at you! We both have a similar problem: we see the other as wrong, because our ideas lead toward fallacies, but not because our ideas are fallacious. At least, that’s how I see it, and I hope you feel the same way.)

I’m with you on this. Gardner was good – very good – but he was, in more than one way, a subscriber to mysticism. In his chapter on Free Will, he also chokes, and subscribes to a non-material explanation. He was a better teacher than he was a researcher. He could make other people’s points clear, but the points he came up with on his own were (in my opinion) flawed.

This is a highly nontrivial claim. In what sense do you propose that a microprocessor is conscious? What is it that makes one conscious?

To me, a microprocessor seems to be just the sort of thing that’s a paragon of not-being-conscious: it’s a simple thing that merely maps inputs to outputs; each and every one of its operations could be implemented by a lookup table that connects voltages at its inputs with voltages at its outputs.
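For what it’s worth, here is a toy illustration of the lookup-table point; the two-bit adder is an arbitrary example of mine, but the structure is the same for any fixed input width: the entire behaviour of the ‘processor’ is just an explicit table from input bit-patterns to output bit-patterns.

```python
# Toy illustration (my example): a 'processor' whose whole behaviour is a table.
from itertools import product

def two_bit_add(a, b):
    """The function the 'hardware' is supposed to compute: 2-bit addition."""
    return (a + b) & 0b11   # wrap around at 4, as 2-bit hardware would

# Exhaustively tabulate every input/output pair; this table *is* the device.
LOOKUP = {(a, b): two_bit_add(a, b) for a, b in product(range(4), repeat=2)}

def processor(a, b):
    """'Runs' the computation by pure lookup: no arithmetic happens here."""
    return LOOKUP[(a, b)]

assert all(processor(a, b) == two_bit_add(a, b)
           for a, b in product(range(4), repeat=2))
print(processor(3, 2))   # prints 1, i.e. (3 + 2) mod 4
```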

Moreover, whatever function a microprocessor implements may be implemented by more or less every other system—it’s just a question of choosing the right implementation function, the right translation from system states to computational states. In this sense, every rock implements every finite-state automaton (including every computer we could ever build) (Putnam’s example), and a wall implements the WordStar program (Searle’s example). If the microprocessor’s functioning is sufficient for consciousness, then every rock is conscious—and in fact, conscious in every possible way.

The real question is—the hard problem is—if that microprocessor is 0.5% conscious, then what makes it so? Without an answer to this question, I think that the whole model is just built on sand—there’s a fundamental ‘somehow’, something I’d just have to believe without reason, in order to buy into it. It explains nothing—it merely says that there are some systems that are conscious, for some reason, but we knew that beforehand.

And an ant is conscious? And moreover, if an ant is conscious, then is its simulation necessarily conscious, as well? (The usual analogy here is that a simulation of a rainstorm isn’t wet, either, but I don’t really think that works. Still, it’d be dangerous to build a model upon the unexamined assumption that properties of the real thing necessarily carry through to the simulation.)

Well, the 0.5% conscious building blocks in your model effectively function as homunculi. I thought you were proposing a model like Dennett’s, who explicitly calls those building blocks that, as well; so I don’t necessarily use the terminology negatively. The problem, of course, remains: you’ve only started to explain consciousness if you can explain how any of your homunculi is conscious; without that explanation, you need consciousness to have consciousness.

No, every member of the committee only has a limited perception—that’s what the two of us in the example represent. There are actual neuron clusters devoted to tasks such as edge detection or color detection; my question just points toward how those single tasks combine. See, I don’t have a perception of edgeness and color, I have a perception of a red square—there are no different voices in a committee, but somehow, at some point, the whole thing coalesces into a single, unified experience.

As William James first noticed, this isn’t solved by a committee (or, as he called it, ‘mind dust’). Consider again those twelve men, each conscious of a single word: this does not combine into a consciousness of the whole sentence, but such consciousness is what we experience. Now, all of those men could tell their part to a 13th, who then might be conscious of the whole sentence; but then, the consciousness of each committee member would not have any connection to the consciousness of the sentence, rather, merely the consciousness of the 13th member would be ‘my’ consciousness. So, the twelve consciousnesses aren’t at all instrumental in bringing about my consciousness.

But that’s what I mean: I am not in any sense less conscious; I simply have fewer things to be conscious of, since the sensory input from my eye is now missing.

Well, so we have more things to be conscious of; this doesn’t mean being more conscious of anything.

OK, so now you’re essentially espousing a panpsychist model. That’s fine, and I must confess that I have had some thoughts in a similar direction, but it’s of course a hugely controversial position.

Anyway, I think your narrative is a little off: the feedback loop you propose does not measure how conscious the diode is, but how accurate its judgement of the lighting conditions is. These are very different things! The diode either triggers, or doesn’t trigger; if there now is an attendant consciousness, then it either is, or isn’t, correctly aware of the lighting conditions. So your feedback loop judges whether the conscious awareness accords with the actual lighting conditions, but does not say anything about the degree of the attendant conscious experience.

What’s the relevant difference between a discussion board and any other form of discussion?

Neither Penrose nor Searle proposes any sort of magical explanation for consciousness—Penrose proposes new physics related to the interpretation of quantum mechanics (a model he isn’t alone in championing, albeit his motivations are different from most), and Searle fully believes that conscious machines are possible—however, he does not believe that computation suffices for consciousness. A relevant quote from an interview with Searle at [Machines Like Us](http://machineslikeus.com/interviews/machines-us-interviews-john-searle-0):

[QUOTE=John Searle]
Could a man-made machine – in the sense in which our ordinary commercial computers are man-made machines – could such a man-made machine, having no biological components, think? And here again I think the answer is there is no obstacle whatever in principle to building a thinking machine, because human beings are thinking machines.
[/QUOTE]

‘Homunculus’ doesn’t mean, or imply, ‘magical thinking’: it’s a problem with the logical structure of a proposed explanation for perception (an essential circularity leading to an infinite regress), mental content, or intentionality, not the allegation that somebody literally believes in the existence of some sort of soul or other mystical agency within the mind.

Gardner may have been, but in general, the position known (thanks to Owen Flanagan) as New Mysterianism shouldn’t be confused with mysticism. Colin McGinn, for example, is quite explicit that he thinks there is a perfectly ordinary mechanism that produces consciousness from nonconsciousness; it’s just that the human brain can’t acquire the relevant concepts to capture this process, and thus the whole thing appears mysterious, but is, in fact, quite mundane—like, presumably, calculus to a dog. (I have a problem with the latter comparison, namely that human beings, unlike dogs, can engage in arbitrary symbol manipulation, meaning that in principle no rule-bound process is beyond our understanding, but that doesn’t really impinge on the point.)

I said .05% conscious. It responds to stimulus, and measures some of its own internal states.

Again, an ant is fractionally conscious. And…the deeper philosophical question about the properties of simulated things is unanswered. We Strong AI proponents say, yes, simulated intelligence is truly intelligent.

We’d have to be using the word differently, then. The fallacy of the homunculus is that there is a “little me” in my brain, watching the world through my eyes.

A reductionist version of this is not necessarily a fallacy.

Right: and I reject that latter proposition, because, if it were true, atoms could not combine to constitute consciousness.

I mostly disagree, because, again, this would mean that matter can never be conscious. But since we know it can, a model that prohibits it must be wrong in one way or another.

I’m proposing the only model I can think of that allows matter to become conscious.

Am I? I’d never heard the term before. I don’t actually like the word, because I don’t agree with its literal implications.

But the feedback loops can become complex enough to measure the photocell’s internal conditions. When the system gets so incredibly complex that it can start to simulate possible alternatives – when it possesses imagination – then we’re all the way to consciousness of some form.

That we’re strangers. There is an element of trust that takes time to learn and earn. The poisonous fact is that no few members of the SDMB are jerks, pure and simple. The medium is the message (or something.) In part, this is why we are both taking such care to distance ourselves from magical thinking.

That new interpretation, by permitting humans to have feelings while denying that machines can have them, crossed the line into magical thinking. It approaches the fallacy of “vitalism.” This is often the problem when “qualia” gets into the issue. Why couldn’t a computer experience qualia? The anti-AI people take this as a given, but have never shown any kind of proof. Penrose smugly quips, “But what does it feel like?” The problem is that he can’t explain what it “feels like” to be Roger Penrose; am I justified, then, in concluding his feelings aren’t real?

(I think proof is not possible, as the term “qualia” is intrinsically nonsensical.)

I can’t accept that. This is too similar to the “God works in mysterious ways” defense. I’m pretty sure that our human minds are capable of comprehending the workings of consciousness. It’s a cop-out to say, “We can’t understand it.”

As you note, we can engage in arbitrary symbol manipulation. And, in my opinion, the problem of consciousness is reducible. The brain consists of several loci, each performing a different task. Brain stimulation experiments continue to localize various functions.

It’s vaguely horrible, but consciousness can be seen to be what severely brain-damaged persons lack, or possess in a lesser degree. (Just as “Free Will” is whatever it is that an addict, or someone with severe mental illness, lacks.)

There are certainly some problems not amenable to reduction…but I’m pretty sure consciousness is one that is.

Sure, but why would that suffice for consciousness (any degree of it)? I think you’re taking too much for granted here. (Basically, you’re taking the entire answer to the question of consciousness for granted!)

I still see no reason to believe in fractional consciousness, and even if I did, you haven’t given me any reason to believe an ant possesses it. You want this to be the answer, because you can’t envision any other answer that satisfies your preconceptions (I don’t agree, by the way), but that alone strikes me as far too flimsy a reason for such a far-reaching belief. In the end, it’s entirely possible that you’re wrong, but since you seem to have convinced yourself that your approach is the only viable route, that means that you’re completely closed to whatever might be the right way to think about consciousness.

Simulated intelligence and simulated consciousness are two very different things. Intelligence can be objectively assessed: if some computer performs well on a general intelligence test, or a suitably strong Turing test, then yes, it will be easily judged intelligent. But this does not mean that it is also conscious, since nobody has as yet been able to demonstrate a link between the two.

No. The fallacy of the homunculus, as I already pointed out, is a problem with the logical structure of the theory, one that particularly often affects supposed reductionist theories, in which one postulates there to be a sort of module that ‘does the perceiving’. The fallacy is that if I do the perceiving by invoking my perception module, then how does the perception module do the perceiving? If it in turn has to invoke its own perception module, then we have an infinite regress; if there is some explanation of how it does the perceiving, then I never needed the perception module, as that explanation just as well may be applied to me.

It’s like positing God as the creator of the universe (or, to keep it naturalistic, some advanced alien civilization in ‘the next universe up’, creating ours either by simulation or by unknown physical means): nothing has been gained due to that explanation, as it fails to explain the origin of God/the next universe up, and if there is an explanation for that, we can just as well apply it to our own universe.

That’s why the homunculus, or any sort of perception module, either must be redundant in the explanation, or else renders it circular. In your account, there are legions of little homunculi—each of your 0.05%-conscious subunits. In relying on their consciousness to give an account of consciousness, the model doesn’t actually do any work regarding the question of how matter can become conscious.

Well, but nevertheless, when I ask you how human consciousness comes about, you point to smaller sub-consciousnesses it’s (by unknown means) composed of. Don’t you see that this road of explanation doesn’t lead anywhere?

That’s a far too hasty, and far too broad, conclusion. It merely means that consciousness can’t be a thing that has proper parts, that you can’t break down conscious experience neatly into sub-experiences it’s composed of. But it could be a threshold phenomenon, for example: after some critical stage is reached, consciousness emerges.

But in doing so, you’re coming dangerously close to ‘it can’t be because it mustn’t be’-reasoning: since you believe that only a model of the kind you propose can allow consciousness to emerge from matter (which I strongly disagree with), and you believe that in order to be a sensible explanation, an explanation for consciousness must originate in material properties, you disregard the problems of your account, and blind yourself to possible alternatives.

This is, again, a very bold allegation that I don’t think many would agree with (I certainly don’t). All of the work of those feedback loops can be carried out entirely ‘in the dark’, so to speak. There doesn’t seem to be any reason—and you certainly haven’t given me one—to believe that just because the whole thing becomes complex enough, consciousness comes about.

But we’ve drifted from the question of being more or less conscious of something. Again, the photocell is only sensitive to absolute brightness, and its output is simply a bivalent alternative—‘light’ versus ‘dark’. Suppose that this enters conscious perception. What would it mean to be more aware of this? To me, it seems that I can either be aware of that distinction—there can be something it is like for me to know that it’s light (or dark). Or not. Where does the gradation come in? Does some hypothetical hyperaware being have a brighter ‘light’ experience? How is one only dimly aware of the difference between 0 and 1?

No, that’s completely wrong, I’m afraid. Penrose proposes a modification of the mathematical formalism of quantum mechanics in order to incorporate a nonlinear term due to gravitational influences, causing ‘superposed’ states to spontaneously collapse. This, he claims, can be utilized to perform computations no ordinary computer can perform, allowing physical hypercomputation. Thus, machines can be built with capacities that exceed those of an ordinary computer, of any Turing machine, in fact. He believes that these capacities are necessary for consciousness. But the account itself is entirely physicalist—it just rests on speculative physics. There’s nothing magical about it (though plenty about it is highly implausible).

On Penrose’s model, because it doesn’t implement the right sort of process, and could not even simulate it.

Well, proof is always hard to come by in philosophy, but there are many strong arguments that require careful consideration, I believe. By simply shrugging them off—declaring that it can’t be that way because it mustn’t be that way—I think it’s you who’s committing the greater fallacy.

Yes, of course, because you have firsthand experience of the fact that there’s something it’s like for you to have certain experiences, and you know that Penrose is a being broadly similar to you; thus, it’s simply the most parsimonious hypothesis that his feelings are just as real. The argument does not hold for, say, a computer, because you do not know that the computer is a being broadly similar to you.

‘Qualia’ just means ‘subjective experiences’ on the broadest reading. It might be that there is nothing uniquely picked out by the term, but I think denying the reality of subjective experience is as close to absurdity as it gets.

Nevertheless, it’s a logically consistent possibility, and for that reason alone one worth considering—it’s not like we have a plethora of logically consistent solutions to the problem at hand from which we can just pick and choose.

I can’t read this without interpreting it as a denial of the localization of visual perception in the visual cortex. You seem to be denying that the brain assigns certain tasks to certain areas. You are demanding an “all or nothing” approach to perception, and rejecting the notion of various parts of the brain performing various analyses of sensory data.

If you deny reductionism, then you are stuck with mysticism and magical soul thinking. What third option do you have?

It provides a model that can be falsified. It opens up avenues for real scientific exploration. We can – and we do! – look for localized pre-processing of sensory data in the brain. And…we’ve found it! My model is supported by the very existence of the visual cortex. Your model would deny that the visual cortex exists at all!

This is why I wouldn’t use the word “homunculus” for the visual cortex. It isn’t a “person.” It doesn’t “perceive reality” in the holistic way a person does. It takes in sensory data, pre-processes it, recognizes shapes, and “sends a report to the committee.” It provides some of the overall functions of consciousness, and, in my analogy, it is thus “partially conscious.” It is a component of the overall process.

Again, without this, you have only an “all or nothing” model. But we know that’s wrong, given how much of the brain can be lost to strokes or injuries while the person still functions. The whole can still function with large chunks missing.

Nope; I think it’s perfectly valid reductionism. When learning about ecology, we can start with a reduced model – a small grove of oak trees. We don’t have to explain the entire continental ecology: we can study a much smaller part of it, and figure out how that works. To demand, “You can’t understand ecology until you understand the entire earth’s biosphere” is not completely correct. We can understand that one oak grove, and then work outward.

Since I disagree with what you claim is my conclusion, I’m a bit stymied for how to respond. I believe that consciousness can be (and is!) a thing that has proper parts.

I agree that the other model is possible. There might be such a threshold phenomenon. But if so, we should fairly easily be able to identify it by gradual paring away of a brain, until there is a very sudden cut-off of function. (Ick. Vivisection. But that’s what strokes and injuries do for us, so we can morally capitalize upon the opportunities they offer us.)

The fact that brain injuries leave one person blind, another deaf, another with the loss of the conception of “left and right,” and another unable to perceive the passage of time, all supports (well, I think so, anyway) my view of brain function as the sum of a lot of component parts.

I disagree with your characterization of my stance. This paragraph is nothing but insulting. I may very well be wrong. I have put forward a model I think is right. I have never “blinded myself to possible alternatives.”

I may very well be wrong. You will never, EVER, in anything I’ve said here, find me saying otherwise.

That’s a much better way of phrasing it. Thank you.

I think modularity (parallel processing and whatnot) is indeed a superior method for building a conscious mind and it’s probably the method best used to develop superior AI. However, I don’t believe organic minds evolved in a modular fashion—natural selection isn’t that good or efficient. Biological evolution usually results in making things just good enough to get the job done, not the best they can be. I’m doubtful that evolution could select for a modular mind; I’m more doubtful that even if it could, it would—lowest rung and all that.

Like me, I’m sure you can think of plenty of ways to improve upon human form and function, and with genetic engineering now at hand, some of these things could one day become reality. Modularity of mind may be one improvement we could engineer genetically. I’d also like to see functional helicopter propellers genetically engineered to grow atop our heads (it can be done; it just takes a lot of bone, cartilage, synovial fluid, muscle, ATP and Gummi Bear fuel).

It’s not that modular design is unheard of in biology. Cells are certainly modular, and I suppose sponges, but I believe the more complex a system becomes, the less likely evolution is to select for modularity. I can’t imagine a sponge making the transcendent leap to consciousness…no matter how much knowledge it “absorbs.” :smiley:

If you accept (correctly :)) the premise that higher order consciousness (awareness of awareness) is an emergent property of, and is supervenient upon lower order thought, I believe it’s easy to see it isn’t modular. Lower order thought? A stretch, but maybe. Higher order thought? No way.

But…isn’t the visual cortex just that kind of module?

Carl Sagan treats this in “The Dragons of Eden.” Some of the evolutionary modularity of the mammalian brain is in “overlays.” You have the hindbrain, dealing with raw emotions and needs, then the midbrain, dealing with some more complex behavioral patterns, and finally the forebrain (I think I’m using these terms more colloquially than literally; I apologize for informality) where abstract reasoning, imagination, puzzle-solving, and other nifty things happen.

So, as I see it, there are two forms of modularity that definitely have evolved. One is localization of specialized functions, and the other is the building up of the multi-part brain by accretion of new physical structures with new assigned purposes.

(Also using the word “purpose” loosely, knowing that, in evolution, there’s no such thing. Stephen Jay Gould has an essay defending the informal use of such language.)

Well, I admit that it’s a grey area and largely dependent on what frame of reference you’re coming from (biology, microbiology, physics, neurology, neurophysiology, neuro-anatomy, psychology…butcher, baker, candlestick maker), but no, I don’t believe the visual cortex should be considered modular in this discussion, in much the same way that creationists can’t claim that eyeballs could not have evolved without God.

Eyesight evolved in similar fashion to all other biological systems—from something less specialized, to something more specialized. Do you consider the nervous (CNS/peripheral motor/sensory/sympathetic…), vascular (heart, lungs, arteries, arterioles, capillaries, veins…), alimentary (mouth to poop chute), endocrine (lots of slimy glands), dermatological (epidermis, dermis, subQ…), reproductive (all those naughty bits you’ve leered at in men’s magazines since you were 5yo) systems to be modular? From some perspectives they can very well be thought of as modular. Each evolved in quite specialized fashion to perform very specialized functions. They each certainly appear modular on the surface. But are they modular from the “philosophy of the mind” perspective?

In our current discussion of modularity with regard to parallel processing of consciousness, I don’t consider those biological systems to be modular by comparison. To me, modularity, in the parallel processing sense, implies plug & play independence. Add a module or two to the system and it becomes more…robust. If a module fails, just replace it with another and no diminishment is felt. You can’t do that with the major biological systems in your body. Try to maintain the alimentary canal independent of the vascular system and you will end up with a fetid, slimy tub of goo; I speak from personal experience (did I ever mention the boyhood basement experiments I did with my dearly departed nana?).

On re-read, Trinopus, I see that you mention modular overlap of mind (might you limit this to overlap of lower order consciousness?)…so perhaps we’re on the same page after all. I hope so, because (don’t blush), I like your style and wish to be on your side of the argument (ya big lug). In fact, if you are currently using your brain modularly, I will henceforth refer to you as “Team Trinopus.”

Well, now you’ve got me; I don’t really have a good way to answer this. Out of my depth, I guess. (“Butcher” probably comes closest to my approach to mind/brain issues, given the number of times I’ve referenced passive vivisection… People simply need to have more severe strokes…) (I hasten to add, in case it isn’t clear, that’s just a horrible joke, not a sincere opinion; people in my family have had a lot of strokes. Statistically, I’m likely to have one myself…)

Er…some yes, some no? Just as one coarse example, the ribs and vertebrae are clearly “modular” in structure. But how about the skull? I’m not sure… Or how about the chambers of the heart? One job has been broken up into four parts…

Ah! I like that! We have two eyes, two lungs, two kidneys, etc., and so our systems are modular, in that way.

I confess, this isn’t how I was imagining the word. My connotation was of “dispersed” functioning by parts. Human digestion is “modular” in my sense, because the stomach does some of it, then passes it on to the intestines, while the liver and gall bladder have their effects, etc. The job is broken up into sub-jobs.

That’s what I think human consciousness and human intelligence are doing. Pieces of the overall task are sub-let out to sub-contractors, each of whom contributes something to the final product.

(Lordy, what a wonderful language we have, where words have such delightfully fine gradations of connotation!)

Should I ask? Dare I ask?

(I’ve never dissected anything…but I do have a lovely collection of skulls which nature has cleaned for me, including a hoss I used to ride, and a doggie I used to play fetch with. And a cow I likely ate steaks off’n.)

You’re very kind. You’re a gentleman (oops, unless you’re a lady) and you’ve got a cool login name too! And I want to emphasize how much I admire Half Man Half Wit. He’s so very much better educated than I am, and I know I’m out of my depth arguing with him. I want anything other than to be the dumb kid who refutes Relativity with a “race car on a train” argument, or the dolt who says, “…Then why are there still monkeys?” My views may very well be wrong, and, worse, they might be naïve, shallow, callow, inchoate, and full of processed carbs.

We just gotta do the best we can!

One of the benefits of our viewpoint is that it allows us to use the “Royal we.” We are amused! :smiley:

Then you’re misunderstanding the issue. I deny that there is a representation of the outside world in the brain, which is in any sense of the word internally perceived, because assuming that that’s how it works leads to logical inconsistency.

Numerous options can be found in the literature. First of all, holism doesn’t necessarily imply mysticism: quantum mechanics is, on most straightforward interpretations, a holistic theory—if you have a system composed of two (or more) subsystems, knowing the state of both subsystems does not in general tell you the state of the complete system. Attempting to reduce the whole to its parts will lead to a theory observationally different from quantum mechanics (and one that has been shown to be wrong). There’s no logical obstacle to consciousness being like that, such that if we attempt an explanation in terms of ‘proper parts’, we miss what makes consciousness consciousness, even though it’s perfectly naturalistic.

Other options that have been suggested are panpsychism (as already mentioned), dual-aspect theories, neutral monism, Russellian monism… I won’t go into descriptions of each, but all are compatible with a rationalistic worldview (at least in some variants), without consciousness being necessarily reduced to matter in any real sense. There’s really a wealth of options.

In what sense does it even provide a model? What question is being answered? There is no explanation for consciousness if you depend on conscious subunits to create consciousness!

Not in the least (and I don’t see what made you conclude this). But, for instance, we already know that the neurons in the visual cortex—specifically in V1, or Brodmann’s area 17—do not actually play a role in conscious perception: even though they fire, indicating the presence of a stimulus that they react to (say, a straight line in their visual field), that stimulus is not necessarily present in the conscious mind. Christof Koch, a leading figure in the search for the ‘neural correlates of consciousness’, argues very persuasively in his book The Quest for Consciousness that we can exclude those neurons from consideration as being directly responsible for conscious experience. So the fact that there is processing of visual data in those neurons doesn’t imply that they have anything to do with consciousness.

First of all, ‘homunculus’ does not imply any sort of personal attributes, holistic perception, or anything like that. It merely implies the functioning of a particular module in the mind as an internal ‘perceiver’, something that ‘looks at’ the representation of the world in the mind—and then maybe abstracts some report for a committee, or what have you. If you wish to solve the problem of consciousness by introducing some such entity that has some degree of consciousness, then you simply have not done any work towards an explanation of consciousness—because think about it, how does the module achieve its fractional consciousness? Does it have its own internal committee? If not, consciousness without committee is possible; if yes, then you have an explanation that includes a vicious regress.

But if you manipulate the claustrum, consciousness apparently spontaneously vanishes.

Well, of course, but only in order to explain the ecology in that small grove—but you’re not explaining it, merely positing it as a building block of the larger ecology. This is why I keep asking you to explain how those 0.05% consciousnesses—in microprocessors, in homunculi, anywhere—come about!

The conclusion I spoke of was yours: that matter could never be conscious, if my arguments were right. But that’s not the case. You assert that consciousness must have proper parts, because you believe that otherwise, no rationalistic explanation of consciousness would be possible. But I think that’s wrong.

As in the above discovery of the claustrum as seemingly being the orchestrator of consciousness.

I’m not arguing that brain function is not compartmentalized. But this does not imply that consciousness is likewise.

Well, your argument seems to be that you can’t imagine a possibility for a rationalistic explanation along any other lines, and hence, what I’m saying must be wrong. Take the argument against mind dust due to William James: you don’t seem to have any counterargument, but believe that the conclusion must be wrong, anyway, because you can’t see how anything else might work. I, on the other hand, think that these arguments can tell us a lot about possible solutions to the problems of consciousness, if they’re taken seriously, something which you refuse to do, even though you have no rational ground to do so. Pointing this out is not an insult (and certainly not intended as one), it’s merely highlighting a flaw I see in your argumentation.

Nevertheless, your confidence suffices to flat-out assert several quite striking things, such as that microprocessors are conscious, as are ants, that consciousness must somehow arise out of less conscious parts, and so on. As far as I can see, you have not provided an argument for how any of these things might work, beyond the fact that you can’t imagine anything else that satisfies your preconceptions. For me, this is simply not a good enough reason to believe any of these things, and indeed, there are good arguments for not believing them, which you summarily dismiss.

If you remove the spark plugs, the engine won’t run.

This is wrong. I do not “summarily dismiss” anything, and the claim that “I can’t imagine anything else” is offensive.

I admire you greatly, but you’re resorting to some nasty behavior here.

I have admitted from the first that I can’t explain how the building blocks of consciousness add up to the whole. I have not claimed that microprocessors are conscious.

You’re engaging in distortion, and I don’t understand why. You’re better than this.

OK, I seem to have hit some nerve, and for that, I apologize. However, I won’t have the claim that I’m engaging in distortion stand; everything I said comes from what you posted. Here’s where I get what I wrote from:

Yet whenever I bring up some argument such as the William James one, you do not see fit to engage it in any way, rather, you simply disregard its conclusion, based on your belief that only a model of the kind you propose can ‘allow matter to be conscious’:

It’s a claim you have made repeatedly, e.g.:

This certainly implies that you can’t imagine any other model.

Then how should I understand this?

For some reason, you’ve chosen to be personally offended by what I wrote. Again, it was not my intention, and I’m sorry if I was too harsh; but nothing I have said, I said in bad faith, but rather, as an honest evaluation of the arguments you’ve put forward. If you find fault with that, then by all means, demonstrate otherwise; but don’t claim I ‘distort’ things when that is manifestly not the case.

A microprocessor is a deterministic system that is programmed by an external source. Its architecture is fixed to accommodate a set of software definitions (or vice versa). The microprocessor is strictly logical.

Neural nets are a special case of microprocessors and computer software. Neural nets are modeled after the corresponding components of the brain. Neural nets are not logical; they must be trained. Training consists of presenting the system with a set of sensory inputs along with the desired output. After a period of training the computer will respond with the desired output for any given set of inputs. The advantage is that the computer will also generalize outputs for input conditions it has never ‘seen’. Some automotive systems work this way: you simply teach the computer to drive.
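For concreteness, here is a minimal sketch of that train-by-example idea: a single artificial neuron is shown inputs together with desired outputs, adjusts its weights, and then generalizes to inputs it has never ‘seen’. The toy task (is x1 + x2 greater than 1?), the learning rate, and all the numbers are my own illustrative assumptions, not a description of any particular real system.

```python
# A minimal sketch of training by example: one artificial neuron, toy task.
import math, random

random.seed(0)
w = [random.uniform(-1, 1) for _ in range(3)]   # two input weights plus a bias

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def predict(x1, x2):
    return sigmoid(w[0] * x1 + w[1] * x2 + w[2])

# Training set: (x1, x2, desired output). Hidden rule: output 1 if x1 + x2 > 1.
training = [(0.2, 0.1, 0), (0.9, 0.8, 1), (0.4, 0.3, 0),
            (0.7, 0.6, 1), (0.1, 0.5, 0), (0.8, 0.9, 1)]

for _ in range(5000):                      # repeated presentation = 'training'
    x1, x2, target = random.choice(training)
    out = predict(x1, x2)
    grad = (target - out) * out * (1 - out)   # gradient of squared error
    w[0] += 0.5 * grad * x1
    w[1] += 0.5 * grad * x2
    w[2] += 0.5 * grad

# Generalization: inputs the net has never been shown.
print(round(predict(0.95, 0.7), 2))   # should come out close to 1
print(round(predict(0.05, 0.2), 2))   # should come out close to 0
```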

The brain is a mechanically connected, electro-chemical system that is self-programmed. Well, at least in the sense that many of the mechanical connections grow in response to its environment. The rest of the program is produced by training. Because the brain is a hybrid of analog and digital components, it is very good at generalizing outputs for unfamiliar inputs. The brain is not logical. It is a device that mimics its environment.

Our computer is very sensitive to chemical input. The body can control it with hormones. The anesthesiologist can turn it off harmlessly. Various drugs can alter its function.

Consciousness is mystical because we experience it without being able to explain how it works. However, we know the process. We are what we have learned plus some ability to generalize. The consciousness that we experience is simply the output device for our internal computer.

Crane

‘Output’ again implies that there’s someone/something there to read it; but of course, that picture just leads to circularity.

And there’s a consideration that we don’t usually take into account, but which becomes important for this kind of model: what input and output of some computation are interpreted as is essentially arbitrary. We don’t notice that, because our computers are set up such that their output is immediately appreciable to us, but that just means that the output has some specific meaning relatively to the particular way that human brains are set up—thus, what is being output, or more accurately, what that output is interpreted as, depends on the human user.

But outputs could, in principle, be interpreted in very different ways. Think simply of a display in a different language—to you, the computer produces gibberish; to a speaker of that language, it implements a recognizable computation. If you were to learn their language, then suddenly, the computer’s output becomes meaningful to you, as well; you’ve learned what is sometimes called the implementation function that connects physical states of some system with functional states of some computation.

However, this implementation function is arbitrary, meaning that every system whose states can be mapped to some computation in any way can be seen to perform any given computation, just under a change of implementation function. Putnam has summarized this as ‘a rock implements every finite state automaton’, while Searle has used the example of his wall running the WordStar program.
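To illustrate what I mean by an implementation function, here is a toy sketch; the ‘physical’ states and the two readings of them are arbitrary choices of mine. The very same sequence of states can be read, under one mapping, as a binary counter, and under another, as a traffic-light cycle; nothing in the sequence itself picks out which computation is ‘really’ being performed.

```python
# Toy illustration: one sequence of 'physical' states, two implementation functions.
# The states and mappings are arbitrary examples, not a claim about real hardware.

# Pretend these are successive physical states of some system (a rock, a wall,
# whatever): just opaque labels.
physical_states = ["s0", "s1", "s2", "s3"]

# Implementation function A: read the states as a two-bit binary counter.
interpretation_A = {"s0": "00", "s1": "01", "s2": "10", "s3": "11"}

# Implementation function B: read the *same* states as a traffic-light cycle.
interpretation_B = {"s0": "red", "s1": "red+amber", "s2": "green", "s3": "amber"}

for s in physical_states:
    print(s, "->", interpretation_A[s], "/", interpretation_B[s])

# Nothing about the physical sequence selects A over B; which 'computation is
# being performed' only appears once an interpreter fixes the mapping.
```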

But then, if indeed the consciousness we experience is simply the output of our internal computer, who decides on the implementation function? Or, asked the other way around, if consciousness simply is a computation, and I can, with the right implementation function, view any rock as implementing this computation, then why is a rock not conscious?

Well, let’s drop this, as it isn’t productive. I don’t want this discussion to be about us, but, instead, about ideas. I apologize for mistaking what you said and how you meant it.

One quick note: I don’t answer the entirety of many of your posts…because if that happened, we’d have exponential growth of overall post length!

Many times, I refrain from commenting for purposes of brevity alone! In many other cases, I don’t know how to respond. I don’t have anything useful to say. I didn’t see any use in cluttering up threads with repetitious “I don’t know” comments.

I honestly don’t know! William James is beyond my competence. Neuroscience is beyond my competence.

I’m not a scientist or even a scholar, save at the most amateur level. I have some ideas, and I don’t think they’re stupid or foolish. They may well be wrong. That’s cool! Wrongness isn’t a moral failure in these areas.

Now: I think that consciousness is a “distributed” process, for, among other reasons, the three layers of the mammalian brain, as I mentioned in the post to Tibby Or Not Tibby: the idea I first encountered in Carl Sagan’s “The Dragons of Eden.” The mammalian brain is built up of overlaid layers, which I loosely called hindbrain, midbrain, and forebrain. This is why, among other things, we, as mammals, sometimes do things we don’t really want to do, like lose our tempers or make foolish mistakes when we’re really really horny.

That our minds are made up of these constituent elements or components is part of why I think consciousness can be reduced to lesser parts, which communicate and cooperate, working together to make up our minds.

If I could prove it, I’d be in line for a Nobel Prize! Since this is just an informal discussion board, where we talk about stuff like Israel/Gaza and ketchup on hot dogs, I get myself stuck in. Opinions are like armpits; I have a couple of 'em!

I wouldn’t have used the word “mystical” because it has connotations that I don’t agree with. But I do agree with the rest of this paragraph.

Really, the only thing I’ve been saying, and which Half Man Half Wit disagrees (ETA: I think he disagrees; I should not speak for him here) is that I think the “computer” that is our mind might be made up of modules, or elements, or semi-independent agents – something like large subroutines in conventional programming. These might correspond to physical regions of the brain – the visual cortex, e.g. – or they might be functional divisions that aren’t necessarily physically isolated.

The fact that people suffer very specific losses of capability from strokes and brain injuries is, in my opinion, support for this idea of localization/modularity/compartmentalization/substructure. A guy has a stroke and can no longer transfer memories from short-term memory to long-term memory. Another guy has a stroke and loses the abstract concept of “left and right.” He can’t eat the peas on the left side of his plate; he can’t even conceive of them.

Our brains have a kind of discrete nature, with a chunk here, a chunk there, here a chunk, there a chunk, everywhere a chunk chunk.

My thought is that since it’s obviously true of our brains, it might very well be true of our consciousness.

This idea may very well be wrong, but it has at least a little support from the evidence.

What we call consciousness is one set of our learned responses to sensory input. It does not require an outside interpreter. It is simply how the internal computer works. The external output is in speech, expression, text etc… It is not an absolute. In different times and places and individuals it is very different.

The response of a rock to its molecular history is its physical structure at the time and temperature it is observed. It responds to stimulus only with its physical characteristics. You may be clever enough to create an electrochemical neuronal system whose output is the physical structure of rocks. If so, I admire your effort, but it has no significance in this discussion.

The autonomic nervous system handles sensory input that is minimally observed by the consciousness.

Crane

I would say, instead, that it is “hard-wired” and instinctive, not “learned.”

(It is “formative,” to be sure, and does require environmental stimuli. A baby that is raised in total sensory deprivation – a hellish idea! – would quite possibly never become conscious.)

(I apologize for the fact that so many of my thought-experiments are so very grisly and morbid.)