How did the universe and consciousness create themselves from nothing?

Agreed that it appears to be valuable.

Probably just a terminology thing. When I think of a side-effect, I think of something that doesn’t influence behavior. Meaning that if it is used for survival, then it is no longer just a side-effect; it is an integral part of the functionality.

It could initially arise as a side-effect/happy accident, but the minute it’s a valuable part of the functioning of the system, then I would probably classify it differently.

The mechanism is exactly what makes it a difficult problem.

We understand how to build things with cause and effect, even complex things. But in all of those cases, the connection between the cause and the effect is simply physics, it’s really obvious and easy at the lowest level, even when the system is complex.

We understand how transistors work and how we can build up enormous levels of complexity based on those simple elements.

We also are gaining understanding about how the brain’s cells and electro-chemical processes work. We can see the filtering and pattern matching that support things like the visual or auditory systems. We can see the configurations of neurons and how they help with spatial navigation. We can build those exact things.

But we don’t have the slightest clue about where the function of consciousness is calculated and what mechanism allows that result to be incorporated into the next state or to influence actions. Saying it’s an emergent property isn’t really an answer unless you can show mathematically how you get that emergent property.

I agree that we seem to be clueless about “how it works”, but I think that’s at least partly due to the fact that we can’t even agree on a good functional definition of what “consciousness” is – it’s a very ill-defined problem. However, I disagree with dismissing the “emergent property” answer as not being meaningful. Half Man Half Wit made exactly the same argument here:

Again, while expressing my respect for Half Man Half Wit from whose patient and informative posts I’ve learned a lot about quantum mechanics and other things, here I completely disagree. Exactly the same objection can be applied to intelligence. I could say, don’t tell me that it’s an “emergent property”; that’s meaningless; I want to know exactly how it works, and where “intelligence” is calculated in an intelligent system. Well, you can’t. You can’t precisely because it’s an emergent property, the synergy of its components.

One can take the example of a simple electronic calculator. It’s clearly not intelligent. But take a very large quantity of the exact same components and build a stored-program computer of sufficient power, load it with sufficiently capable software – say, IBM’s DeepQA, the basis of the Watson QA system – and suddenly you have something that is clearly demonstrating intelligence, an emergent property.

To be sure, unlike the problem of consciousness, Watson can be described in terms of its components and data flows – natural language parsing, extracting the query semantics, generating hypotheses, search and evidence retrieval, confidence scoring, and so on. All of these are separate components, running on multiple separate processors with the system overall employing absolutely massive physical parallelism. So where is that property of intelligence “located”? Where is it “calculated”?

To paraphrase a familiar expression, it is calculated nowhere, and at the same time, in some sense it is calculated everywhere. The question isn’t really meaningful.
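To illustrate (and only illustrate) the point about components and data flows, here is a deliberately trivial Python sketch of a pipeline with that overall shape. Every name and rule in it is my own made-up stand-in, not IBM’s actual DeepQA code. Each stage is mundane on its own, and there is no single stage you can point to and say “this is where the intelligence lives”:

```python
# Toy question-answering pipeline: parse -> hypothesize -> score -> answer.
# All function names and logic are illustrative stand-ins, not DeepQA itself.

def parse_question(text):
    # "Natural language parsing": here, just crude keyword extraction.
    return [w.strip("?,.").lower() for w in text.split() if len(w) > 3]

def generate_hypotheses(keywords, knowledge):
    # Propose every stored answer as a candidate hypothesis.
    return list(knowledge.keys())

def score_confidence(hypothesis, keywords, knowledge):
    # "Evidence retrieval and confidence scoring": count keyword overlap.
    evidence = knowledge[hypothesis]
    return sum(k in evidence for k in keywords)

def answer(text, knowledge):
    keywords = parse_question(text)
    hypotheses = generate_hypotheses(keywords, knowledge)
    return max(hypotheses, key=lambda h: score_confidence(h, keywords, knowledge))

knowledge = {
    "Mount Everest": "highest mountain above sea level earth nepal",
    "Lake Baikal": "deepest freshwater lake siberia russia",
}
print(answer("What is the highest mountain on Earth?", knowledge))  # Mount Everest
```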

Of course some will argue that this isn’t “true” intelligence because … well, as far as I can tell, the argument is that if we understand more or less how it works, then it’s not “true” intelligence but merely mimics it in some way that is dismissed as merely “mechanistic”. This is of course nonsense. Our brain is no less “mechanistic”, only mechanistically different. It’s not intrinsic to intelligence that it must be a mystery. It has to be defined in functional, behavioral terms or it’s meaningless.

I think it’s instructive to consider the above argument while substituting “consciousness” for “intelligence”. Indeed, just as intelligence is an emergent property of an information processing system once some threshold of complexity is exceeded, consciousness is just an emergent property of a sufficient level of intelligence, and exists on a continuum across the spectrum of intelligence in the animal kingdom, though not yet in artificial form. It’s a purely subjective perception of self-awareness. I fail to see the dilemma here. To ask “why” we have it in the context of evolution is a moot question, in my view. We possess consciousness simply because we have a sufficient level of intelligence to have it, because we possess, amongst our cognitive processes, the ability to think and engage in higher-level reasoning. It’s hard to imagine how one could have arbitrarily powerful reasoning abilities and not become self-aware. Even my dog demonstrates self-awareness in many interesting ways.

There’s a limit to human “intelligence” - maybe our brains just aren’t able to understand the “why”. Maybe there is no “why,” and maybe the Universe never gave a shit about why; maybe it is just so.

It’s an advanced state, a response to stimuli in the environment. Creatures have been responding to environmental stimuli for almost 4 billion years, not that long after our planet was born.

And, respectfully, I disagree with you and agree with Half Man Half Wit :smiley:

The most important part of science IMO is that we test our models by making predictions / inferences.
I wish it were emphasized more: when experts talk about the scientific method they often jump straight to things like peer review, when a lot of ignorance would be dispelled if more people simply tested their own ideas and realized the limits of what they know.

Now, I am not accusing you, wolfpup, of not understanding this principle.

But we can apply it to phenomena like intelligence and consciousness.

What predictions and inferences can we make from our understanding of intelligence? Well, lots. We have a good formal understanding of everything from game strategy to locomotion to linguistics and we can use this understanding to make machines / write programs that can perform useful actions.
In some cases AI systems show emergent behaviour that we cannot fully explain – and it’s a legitimate area of science to try to understand how such behaviour emerged.

Now let’s think about consciousness.
I’m a software engineer with a neuroscience background. But if you ask me to make a system that would have subjective experience, I don’t know where to start. I don’t know what that would be, or how to test whether I was successful.
And it’s not like I can make a system that’s a little conscious, and we just need to scale it up to make a human mind… I have no reason to believe I can engineer something with *any* degree of subjective experience.

Now, in terms of studying humans of course there are plenty of aspects of consciousness that we understand very well. As a trivial example, I can draw an optical illusion based on our understanding of how the brain parses visual data into our subjective experience of vision.
But there are plenty of things that the human brain does that we have no model for at all. And that’s fine; more research is needed. And it should be OK to also point out that status.

And yet for all other examples of emergent properties that I am aware of, we can understand and simulate the emergent properties by following the lower level details and rules that combine to create the property at the next level up.

Examples I see are things like flocking, ant colonies, water. All of these can be duplicated/simulated by following the same low-level rules that give rise to the higher-level properties. We can compare the results of our simulations against physical reality to see if they match up. We can model all of it with math to confirm our results.
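For instance, the flocking case can be reproduced with the three classic local rules (separation, alignment, cohesion). The sketch below is a minimal Python/NumPy version of that idea, with parameters I picked arbitrarily, not anyone’s reference implementation. The flock-level behaviour shows up in the trajectories even though no rule mentions a flock:

```python
import numpy as np

def boids_step(pos, vel, dt=0.1, view=2.0, sep_dist=0.5,
               w_sep=1.5, w_ali=1.0, w_coh=1.0, max_speed=1.0):
    """One update of the classic three local flocking rules (Reynolds-style):
    separation, alignment, cohesion. Nothing here encodes 'flock' explicitly."""
    n = len(pos)
    new_vel = vel.copy()
    for i in range(n):
        offsets = pos - pos[i]
        dists = np.linalg.norm(offsets, axis=1)
        neighbours = (dists < view) & (dists > 0)
        if not neighbours.any():
            continue
        # Separation: steer away from very close neighbours.
        close = neighbours & (dists < sep_dist)
        sep = -offsets[close].sum(axis=0) if close.any() else 0.0
        # Alignment: match the average heading of neighbours.
        ali = vel[neighbours].mean(axis=0) - vel[i]
        # Cohesion: steer toward the local centre of mass.
        coh = pos[neighbours].mean(axis=0) - pos[i]
        new_vel[i] += dt * (w_sep * sep + w_ali * ali + w_coh * coh)
        speed = np.linalg.norm(new_vel[i])
        if speed > max_speed:
            new_vel[i] *= max_speed / speed
    return pos + dt * new_vel, new_vel

# Start from random positions/velocities and iterate; coherent groups form.
rng = np.random.default_rng(0)
pos = rng.uniform(0, 10, size=(50, 2))
vel = rng.uniform(-1, 1, size=(50, 2))
for _ in range(500):
    pos, vel = boids_step(pos, vel)
```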

If we can explain consciousness as an emergent property, then we should be able to duplicate/simulate it using those low level rules that give rise to it. But, on the surface, the low level rules surrounding neurons, neurotransmitters, electrical signals and fields, etc. don’t seem to contain any special sauce that will get us to consciousness.

Not only that, but we don’t even have a way of describing consciousness mathematically like we can with pretty much every other thing we try to model. We can tell if the water simulation is behaving correctly with all of its emergent properties because we can describe those properties. We can’t do the same with consciousness.
You might counter with something like the following:
“Consciousness is still an emergent property, but it only arises from very specific arrangements of the chemical+electrical gradients over time, and we don’t know what the rules are to duplicate those very specific arrangements.”

And that argument might be completely correct, but it doesn’t move the needle at all from consciousness being the “hard problem” to being “a difficult problem but we can at least see a path.”

My personal opinion is that the “very specific arrangements of chemical+electrical gradients” argument seems like it could be correct, primarily because changes to our system (e.g. drugs, damage, sleep) impact the chemical+electrical landscape and at the same time impact consciousness. If that is true, then the next question is “what is special about these specific states of energy and matter that results in this interesting and, we assume, valuable property?”

I appreciate your feedback, and I agree with you that we have a sufficient understanding of certain specific skills and behaviors associated with intelligence that we can build artificial computational versions of them, but that’s not at all the same as understanding intelligence generically.

Let’s go back to that DeepQA/Watson example again, because I think you’re avoiding my challenge. Do we know how to “build intelligence”? No, we do not. We know how to build systems with certain specific capabilities which might, if properly architected, have the potential – in the aggregate – of intelligent behavior in some particular domain when all the components work together in synergy. Moreover, sometimes the only way to do this is to build the components with the ability to learn so that the primitive capabilities they were engineered with can be enhanced with extensive training.

So another pertinent aspect of Watson here, in addition to the ones I already mentioned, is that very extensive training was done to build its expertise in this one specific problem domain (playing Jeopardy). The product spin-offs of Watson receive training in different problem domains and exhibit a completely different skill set and range of competencies using the same base technologies.

Your statement that “we can use this understanding [of how humans do useful things] to make machines / write programs that can perform useful actions” is not wrong, but it understates the central role of the emergent-property phenomenon. If we “know how” to build an intelligent system, then one of the most salient questions is: how come the performance of such a system is completely unpredictable until the fully trained and assembled system is tried out in its entirety? And as I asked before, where is this intelligence located? In its algorithms? Its heuristics? The tables holding its training experiences? In which of the logical components of DeepQA, and in which of its thousands of parallel processors, is this intelligence being manifested?

The point I’m trying to make is that the fact that intelligence is an emergent property of computation is not merely an interesting effect in some random system, but the fundamental way that it’s achieved in all high-performance AI. My contention – which is necessarily “unscientific” and purely speculative, since we can’t even all agree on how to define consciousness – is that consciousness is not just an emergent property of intelligence, but inevitable once some threshold is reached.

If you’d asked Charles Babbage or Ada Lovelace to build an intelligent machine, they wouldn’t have known where to start, either. As a matter of fact Lovelace dismissed the possibility, saying that “The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform. It can follow analysis; but it has no power of anticipating any analytical relations or truths.” IOW, the original version of “computers can only do what they’re programmed to do”.

This misconception still exists today, though maybe not as strongly as it used to. It was Alan Turing who first turned this simplistic nonsense around. And in that connection, when you say that you don’t know how you would empirically test whether your AI possesses consciousness, I think Turing already gave us the answer: the Turing test. Successfully passing a full-fledged thorough Turing test establishes the presence of a general-purpose human-like intelligence, which in my view necessarily implies consciousness, else we would not have the behaviors that we do.

That has nothing to do with our understanding of consciousness but is just a reflection of our understanding of the role of the visual cortex. Specifically, our mental image of that same drawing does not exhibit the same illusion. We can easily remember all the details of a simple drawing like the Muller-Lyer illusion, but until we transcribe that memory to paper the illusion is not present. That fact has been used in support of the syntactic-representational argument for the computational theory of mind, which itself argues for a computational basis for at least some of the major cognitive phenomena, including (I believe) consciousness.

But you haven’t really provided any arguments to try and dismiss the way in which I argued that yes, consciousness is different from those other problems—notably, you have neither provided an example of knowledge that can’t be shared and yet, isn’t experiential, nor have you given any indication about how experiential knowledge might be shared (other than by creating experiences) after all. Yet, you’re still confident that either this can be done, or it doesn’t really matter.

Again, this is something I’ve asked about before: if it’s so easy to see what kind of behavior is specific to conscious entities, then it ought to be possible to give some example, at least.

If experience can be divorced from behavior, and evolution selects only based on behavior, evolution can’t select for fitting conscious experiences. There would be no selective difference between me stubbing my toe and being in pain, versus me stubbing my toe and being in bliss, as long as my behavior remains the same (retracting my foot, screaming ‘ow’ and the like). Hence, evolution can’t select appropriate experiences to go with our circumstances.

You’re saying two things here: 1. there’s nothing a conscious entity can do that a non-conscious one can’t, 2. here is something a conscious entity can do that a non-conscious one can’t. The latter, of course, is just the familiar assertion that consciousness will come along once you bundle up enough complexity… somehow.

Glad you cleared that up! I would point out that there are many arguments against that, but then, that sort of thing just doesn’t really seem to be something that bothers you all that much.

Again, not sure what you mean. Thinking, too, can happen entirely unconsciously, or at least that seems to be a widespread assumption.

In the same way I never have pain while I’m asleep, yes. That’s the whole reason for anesthesia! The itch is my awareness of a certain stimulus.

Furthermore, the idea that there are sensations, and separately observations of sensations, leads to the homunculus fallacy: if for me to perceive something, something within me needs to perceive that perceiving, then how does that thing do its perceiving? It’s either redundant, or ends up in an infinite regress.

I become conscious of my need to eat; that’s what we call ‘hunger’.

I had hoped that it was obvious I was speaking metaphorically. Evolution needs to be able to discern between conscious and non-conscious entities, and moreover, needs to be able to select for behaviors with an appropriate experiential component. For this, a behavioral difference between conscious and non-conscious agents is necessary. If such a difference exists, then it should not be the case that differentiating between conscious and non-conscious is such a ‘known difficult problem’, as you claim it is.

This is it. This is the problem: acting as if would be enough; whether we’re just acting as if, or actually are conscious, evolution doesn’t care (again, I know, evolution doesn’t ‘care’ about anything, but it’s a common enough metaphorical device in biology that I trust you won’t be bothered too much).

In what sense? When is a system ‘closer’ to being conscious?

Take the whistle of a steam train: it’s a side effect of its operation, but does not impact its performance (its behavior). Philosophers call something like that an epiphenomenon: something that’s just along for the ride, but does not itself have any causal impact. Epiphenomenalism is a relatively widely considered view of subjective experience; roughly, it’s the view that our behavior is fully determined by the physico-chemical reactions occurring at the fundamental level, with conscious experience not having any causal powers in itself. Hence, if I stub my toe and scream, I do so because of the electrochemical signals traveling down the pathways of my brain, not because I feel pain; I might equally well feel bliss.

Now, even if you’re saying that somehow all this signal processing just is consciousness, even though there are many examples of signal processing that we don’t typically take to be conscious, you’d still have to explain how it is that they feel appropriately. If it’s not the fact that pain feels bad that makes me scream, then evolution can’t select for how bad pain feels.

There are two important differences here. One, intelligence can pretty obviously be explicated in functional terms: intelligent is what acts intelligently. Two, the components of an intelligent system have non-trivial partial aspects of the function of intelligent action.

Thus, we have components that show behavior that, on their own, mimic certain aspects of the behavior of intelligent systems—a calculator calculates, just as an intelligent being calculates. Hence, there is absolutely no mystery in how we can bundle up such ‘partially intelligent’ systems to achieve ever greater fractions of intelligent behavior, and since intelligent behavior is all there is to intelligence, we’re done.

Contrast this with the issue of consciousness. Most people agree that the parts we’re trying to build a conscious system from don’t show some fraction of conscious experience (panpsychists here being the obvious exception). Furthermore, most people agree that consciousness is not exhausted in behavior: even though a simple robot might behave like I do in stubbing my toe, it does not thereby follow that it has an experience of pain.

Consequently, there is merit to the assertion that we can bundle up partially-intelligent bits to achieve an intelligent whole; but the assertion that we can bundle up non-conscious elements and have consciousness just somehow ‘spark up’ eventually is, without further justification, just an article of faith.

Yes, we do. We can make intelligent systems. I can make you a system that will learn the best strategy for tic-tac-toe, and I can understand how that system operates at all levels.
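To make that concrete, here’s a minimal sketch (my own illustration, with arbitrary parameter choices) of such a system: a tabular learner that plays tic-tac-toe against itself and backs up the final result into a table of state values. Every number it learns can be printed and inspected, which is what I mean by understanding it at all levels:

```python
import random
from collections import defaultdict

EMPTY = 0
LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != EMPTY and board[a] == board[b] == board[c]:
            return board[a]
    return None

def legal_moves(board):
    return [i for i, v in enumerate(board) if v == EMPTY]

def train(episodes=50_000, alpha=0.3, epsilon=0.1):
    """Self-play with epsilon-greedy moves; each visited state's value is
    nudged toward the final outcome (+1 if player 1 wins, -1 if player 2)."""
    value = defaultdict(float)   # board (as a tuple) -> estimated value
    for _ in range(episodes):
        board, player, visited = [EMPTY] * 9, 1, []
        while winner(board) is None and legal_moves(board):
            moves = legal_moves(board)
            def value_after(m):
                b = board[:]
                b[m] = player
                return value[tuple(b)]
            if random.random() < epsilon:
                move = random.choice(moves)            # explore
            elif player == 1:
                move = max(moves, key=value_after)     # player 1 maximizes
            else:
                move = min(moves, key=value_after)     # player 2 minimizes
            board[move] = player
            visited.append(tuple(board))
            player = 3 - player
        outcome = {1: 1.0, 2: -1.0, None: 0.0}[winner(board)]
        for state in visited:
            value[state] += alpha * (outcome - value[state])   # back up result
    return value

values = train()
print(f"learned values for {len(values)} positions")
```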

I think possibly you’re confusing intelligence with *general* intelligence, and it’s important we are very clear on the distinction here.

We understand a lot about intelligence and that’s why we can make systems that make intelligent decisions, sometimes even decisions a human could not have made (e.g. the best chess move).
But we have not made a generalized intelligence yet.

Actually these are valid questions (well, perhaps if phrased a different way).
We want to understand intelligence at all levels: the building blocks and how deep learning generates novel strategies. There’s no “handwave because complexity”.

Then for consciousness; for some facets we understand neither the building blocks nor the larger-scale.
Yes we can say we understand neurons pretty well, but if that level of reduction is OK then why not just say we understand atoms therefore we understand consciousness? Our level of understanding should always be measured by what predictions and inferences we can make.

And those statements are right. She didn’t say no computer will ever be intelligent.
I am not saying no machine will ever be conscious. Nor that we will never understand consciousness (I thoroughly expect us to). But I see no reason at this time to assume we’ve already made a *little* consciousness.

There are many issues with the Turing test, many of which Turing himself realized.
But passing it, yes, will be a significant milestone. I would not agree with you that it necessarily implies consciousness.

Disagree. Consider, say, the optical illusion of horizontal lines not appearing level (example).
We understand a lot about how the visual cortex processes color and detects edges, and so essentially how the brain “generates” the illusion. But ultimately the illusion itself, the perception of a non-level line, is something happening in my “mind’s eye”; it’s something I am perceiving directly. Indeed it’s *only* happening in my consciousness.
So in fact optical illusions are often cited as an example of one aspect of consciousness for which we do have understanding.

The intelligence of these systems is held by the system as a whole, and its performance is easily measured by its error rate (the mapping of input to output compared to the desired output). While the system may be complex, the measurement of its performance is deterministic and describable, and certain levels of performance can be considered “intelligent” (depending on the definition we agree on).
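In code, that measure is almost embarrassingly simple; a sketch, with a made-up toy example:

```python
def error_rate(predictions, targets):
    """Fraction of inputs mapped to the wrong output."""
    wrong = sum(p != t for p, t in zip(predictions, targets))
    return wrong / len(targets)

# Toy example: a classifier that gets 3 of 4 test cases right has error rate 0.25.
print(error_rate(["cat", "dog", "cat", "dog"],
                 ["cat", "dog", "dog", "dog"]))
```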

There is no such measure for consciousness. We don’t know how to describe it in concrete terms.

You are assuming that consciousness influences behavior and using that to support your argument, but we don’t really know that it influences behavior. Research seems to be pointing to more of a passenger being told a story about the surroundings as opposed to a driver taking action (e.g. subjects’ choices were identified from brain activity prior to the subjects being consciously aware of their own choices).

Subjective consciousness is hard to define or understand, but we do see evidence for it. Just the paragraph I quoted implies, I think, that RaftPeople is using his subjective consciousness!

If a machine is advanced enough to sincerely utter “I think therefore I am,” then perhaps that machine has passed the consciousness test!

@ OP — Did you click the link? I think Tegmark has the answer; I was disappointed nobody in the thread agreed.

Sure, if it’s 7 PM and my boss says I can’t leave until I say whether it’s conscious, then fine, it’s conscious.
But it sure would be nice if we actually had an understanding; a detailed model with testable predictions. And I think it’s logical to not hold any particular position until that time.

Once again: predictions. What testable predictions does this hypothesis make?

Tegmark hasn’t gone far enough. Math is limited by its consistency–we ignore math when it contradicts itself. But there’s no reason the universe should be as provincial as that.

Our universe has a particular set of mathematical rules, but it’s not all that obvious that it’s the only possibility. We also can only observe that it’s mostly consistent, but we’ve only checked a finite number of things and who knows what hazards lurk ahead.

I think that consistency is mostly a survival trait for universes; that they pretty much only hold together if things are pretty consistent, but that it’s by no means guaranteed. I vaguely worry that an alien civilization will find a set of distinct prime numbers where AB=CD and destroy our universe in the process. But maybe things are more robust than that, and that it’s no more a problem than an air bubble under plastic film. Maybe you can push it around, but not get rid of it, and that’s actually ok.

I say that the true universe (perhaps a “level V” in Tegmark’s ranking) contains, as a single fairly stable point, the rules of our mathematics, and within that the set of all variations on physics that it leads to, and somewhere within that our own universe. But that’s not the only point on the landscape, may not be the most interesting one, and may not even be wholly consistent. Perhaps perfect consistency is undesirable and leads to boring, empty universes: too crystalline and symmetrical to produce the mess that leads to life in our own.

Tegmark has a wide variety of interesting musings I’ve not had time to read. I cite him because his ideas seem similar to my own views developed independently long ago.

Neither Tegmark nor I assert that our universe is “the only possibility.” To the contrary, every mathematical system is real! Given any set of axioms and initial conditions complicated enough to produce self-conscious creatures, those conscious creatures really do exist!

Does a mathematical structure need to be consistent in order to exist? Tegmark says Yes IIRC. I’m agnostic.

Too many responses for my limited time and energy to deal with fully this late, perhaps more later. But just the things I thought were most pertinent and important …

No, we cannot, and there’s more in my response below to HMHW. It’s like saying that “we can understand and simulate the emergent properties by following the lower level details” of the integrated logic gate switching circuits in an individual POWER7 processor to understand and predict that Watson would win – or at least do well – at Jeopardy. The two things are so far removed, disconnected by so many functional layers of abstraction, that this is just silly.

If “intelligent is what acts intelligently”, then surely “conscious is what acts self-aware”. There are debates about how to define either, such as the ridiculous dismissals of anything whose underlying functionality you can understand as therefore being “obviously” not real intelligence, but just a mimicry of it. As Marvin Minsky once said, “when you explain, you explain away”. It doesn’t make it any less real.

But I especially want to address your second point, “the components of an intelligent system have non-trivial partial aspects of the function of intelligent action”. They do? Surely not! No rational person would consider an electronic calculator to be “intelligent”, and hence a potential component of an intelligent automaton! That argument obfuscates the fundamental difference between a calculating device and a stored-program computer. But more subtly and importantly, a very simple stored-program computer with limited processing power and limited memory can’t do anything exceptionally interesting, while one with orders of magnitude greater capacity can start to do fundamentally new things. Obviously it can just do the same old things faster. But at some performance point completely new things become possible that were never possible before. The principle here is that a sufficient quantitative change in capacity can result in a fundamental qualitative change in capability. This is not only a fundamental principle of computing, but the basis of who we are as humans on this planet.

I would counter that you may be confusing what would generally be regarded as real intelligence in the contemporary AI context with trivial gameplay. You can win tic-tac-toe using trivial algorithms. Real problems that are addressed by contemporary AI systems are not at that level, and cannot be solved by an algorithm. Indeed, if they could, they would not be considered AI.

No, the idea that “a computer can only do what it’s programmed to do” is true only in the most superficial and useless sense. This is the whole point of the “emergent property” proposition. Complexity creates its own paradigms. If that were not true, we humans with our mechanistic biological brains would be completely predictable.

Perhaps not, though personally I disagree. But I was thinking of HMHW’s question earlier about what evolutionary advantage consciousness endows. In my view this is a malformed question. Consciousness in and of itself is just a side effect of an intelligence sufficiently developed to have self-awareness. The real question – and a fascinating philosophical one – is not about consciousness, but about whether the levels of intelligence that give rise to it are advantageous from a survival standpoint. Dogs, for instance, are obviously sentient and I would argue have some level of what we would call consciousness, but not to the extent of having existential angst and worrying about their future and all the bad things that might happen. They live for the moment, and are all the happier for it.

No, this is obfuscating a couple of different issues. The nature of mental imagery has been debated for decades, and whether there is even such a thing as a “mind’s eye” is contentious, to say the least – many cognitive scientists would say it’s just nonsense. See this discussion. As I said, there is substantial evidence that mental image processing is computational and not “quasi-pictorial” and that the visual cortex has no role in it, such as the absence of the optical illusion in mental recall, and the “cognitive impenetrability” of such illusions, meaning that even when you know that the lines you’re looking at are the same length, or are actually parallel, or whatever the case may be, the illusion still persists. This is actually the exact opposite of what we typically mean by consciousness – it implies relatively low-level sensory processing rather than higher-level cognitive processes, let alone anything even approaching consciousness.

I just want to make sure I’m following this: even if there’s “no selective difference” between ‘stubbing toes and experiencing pain and reacting as if experiencing pain’ and ‘stubbing toes and reacting as if experiencing pain’, can the ‘experiencing pain’ aspect get passed on because — well, okay, it’s not getting selected for; but it’s also not getting selected against, right?

You say that consciousness-plus-a-given-behavior provides no benefits to survival or reproduction as compared to indistinguishably engaging in that same behavior. But does it provide any significant drawbacks?

That opens up a lot of questions as to what ‘sincerely utter’ might mean, though. I can program a computer to make that utterance. How can you tell whether it’s sincere?

Tegmark essentially proposes a variant of ontic structural realism (the view that the world fundamentally consists of structure, e. g. relations) with a Pythagorean bent, plus a dose of modal realism (the view that everything that’s possible is also actual).

The latter suffices to get rid of the existential question: the world exists, because it’s possible, and everything possible exists. I basically think that’s a conceptual conflation, but that’d make for an entirely new thread.

The other part, OSR, is well known to be quite problematic, most notably thanks to Newman’s objection, which was raised almost immediately to Russell’s original formulation of structural realism (albeit in the epistemic sense, i. e. as claiming that all we can know about the world is its structure): basically, structure radically underdetermines the world—so much so that if all we know about a given area is its structure, then all we really know is its cardinality, i. e. the number of entities it contains. So if structure truly is all there is, then the set of facts about the world is exhausted by the statement ‘there are n things’. If one thinks that’s a bit paltry for a whole world, then OSR is basically a non-starter.

(Sure, there are attempts to get around this problem; but so far, I don’t think any are really convincing.)

Tegmark, to the best of my knowledge, unfortunately hasn’t really discussed these issues in any detail.

Worse than that, to me, is that the whole motivation of the proposal is misguided: it’s basically like noting that maps never really give us the whole territory, and supposing that the territory is really just made of maps in response. Not only does this invert the proper relationship between the two, I also don’t think it can even be really made sense of.

Not really. Intelligence just is intelligent behavior (that was, after all, Turing’s insight), just like playing chess just is moving pieces across the board in accordance with certain rules: there’s nothing else to it. But it’s easy to point to examples of things behaving like a conscious entity, without there being any conscious awareness associated with it (must I drag out my toe-stubbing robot again?).

That’s a different statement from the one I made, though. I said that the components have non-trivial partial aspects of intelligence, not that they are themselves generally intelligent. Calculation is something an intelligent being does, and in as much as something is able to calculate, it is identical in behavior to the intelligent being in that particular way; consequently, there is no great leap of the imagination necessary in imagining that you can take a calculator, something that produces text, something that categorizes images, and so on, and use them to produce a combined entity that has all their abilities, and thereby, comes to mimic the behavior of an intelligent being to an ever closer degree (eventually, for instance, being able to pass the Turing test).

This leap might still be one too far: there may be a gap we can never close, something that a truly intelligent being does that cannot be replicated by adding more of those partial abilities. But this is not the reasonable hypothesis at the moment—we have no reason to suppose there exists such an insurmountable hurdle.

It’s exactly the other way around with conscious experience: we don’t know of any system that possesses a part of conscious experience in the same way that a calculator exhibits a part of the behavior of an intelligent being, and even if we had, we have no reason to suppose we could combine them into a unified consciousness (as William James pointed out, if twelve people think of one word each, there is no conscious experience of the whole sentence anywhere), and we have no reason to suppose that if we tied enough stuff together into a complex ball of information processing, consciousness would just magically pop up.

So with intelligence, there is a reasonable generalization from our ability to replicate it in parts that we can replicate it in full; with consciousness, the same reasonable generalization from the absence of consciousness in simple mechanisms is its absence in more complex ones.

No. Every universal computer (which includes exceedingly simple devices) can do exactly the same things: that’s what’s known as Turing completeness. Literally the only thing you add by increasing speed and power is that they can do so faster.

Since self-awareness is an aspect of conscious experience, that’s circular. But again, if consciousness is just a side effect: how come it is always appropriate to circumstances? Evolution can’t select for the content of conscious experience that is merely a side effect, so one would have to propose new laws of nature connecting bad circumstances with bad experiences—but I don’t think anybody really would want to do so.

Yes, experiences could arise accidentally, and could be appropriate by accident. But this would require quite a stunning array of accidents: after all, we take it as given that every single time we have an experience, that experience is appropriate to our circumstances. So that accident must have happened over and over, and without fail.

Actually I suspect experiential knowledge *could* be shared. I see a technical barrier to it at the moment, but nothing in the laws of physics or biology seems to preclude it. As it happens we do also have a means of approaching this in a less-than-perfect but “good enough” way, which is what would be expected from evolution. Language allows us to describe to another human the experiences we have. That is probably good enough in enough cases to be useful in an evolutionary sense.

By understanding in more detail how the brain works, we could create a synthetic construction that mimics the precise experiential concept of “red”. Or, scan the brain to see what is going on when person A experiences “red” and recreate that brain state in person B (who perhaps is blind). The state captured by the process and transferred would indeed be experiential knowledge, and could be described. We haven’t done it yet, and we may never be able to because the challenge is too hard, but so far I see no hard physical limit to it (unlike, say, faster-than-light travel). We don’t really disagree on the above, do we?

There are plenty of academic papers on the cognitive abilities of various animals, consciousness and self-awareness included.
Here’s one; I have no clue what it says or what its conclusions are, but a quick skim of the headings suggests that it addresses your points above.

Again you use qualitative terms here and it doesn’t help. Some people would indeed consider “pain” to be “bliss”. You do not even make a conscious decision to retract your foot or scream; those are all taken care of by subconscious systems (of course: what is the evolutionary benefit of waiting for a sober assessment of a painful situation before doing something about it?). So you’ve done something, you’ve subconsciously reacted, and now your conscious brain can process it, experience it, remember it, and construct future actions and strategies to avoid it in the future (if harmful) or to seek it out (if pleasurable). A creature that makes such a calculation better than the others may have an edge in survival.

I said previously that an atom for atom reconstruction of a human brain could do everything and would be as conscious as we are. Cruder versions would have lower degrees of conscious thought.

Arguments against it certainly don’t bother me. One can make an argument *against* anything but I’ve certainly not seen any convincing argument that there is another way of approaching the problems of our physical world other than science.

Take an even more extreme example of a side-effect: the sound of the train itself. How can it affect performance? Well, if this train is in India and the sound of the approaching train clears the track of elephants, then certainly performance is affected. Were a train a genetic creature, such a side-effect could be selected for, to all intents and purposes a causal impact.

And after that event, after our subconscious has kept us from immediate harm and danger, when considering strategies for the future to best avoid toe-stubbage, what are we using to decide on that strategy other than a conscious mind? Of course we run into a bit of a problem here with some of your wording, because you talk about “physico-chemical reactions” as being distinct from consciousness. I would suggest that consciousness *is* a physico-chemical reaction.

Evolution *can* select for the intensity of the stimulus received as the subsequent actions of the animal may change according to that intensity. Calling it “pain” or “bliss” really doesn’t matter.

But you know, I think I’ve said all this already.

Forgive me, but I get the feeling that you are almost wanting to veer into the metaphysical and mysterious on this, and I just see no need for it.
If that’s not the case and you agree with me that consciousness is “just” a product of a physical, chemical, biological construct then we aren’t actually that far apart other than I am more comfortable with the fact that we don’t know enough yet.

It is not always appropriate to circumstances and it is not consistent. People under altered states of consciousness can walk off buildings or drink acid as if it were a milkshake.
Secondly, evolution can select for behaviours that arise as a *result* of conscious experience and will not care a jot for the specific qualities of the experience itself.

It doesn’t have to be perfect, it just has to be good enough.

The world is littered with examples of cognitive and experiential failures, sometimes literally so, because the people involved have found themselves on the wrong end of a tiger that they mistook for a hamburger, or wondered what happens when they clang two pieces of plutonium together. As long as the benefits of a subjective, conscious mind outweigh the downsides of the unavoidable errors and misfirings, natural selection follows.

Wish the OP would hurry up and return. I’m dying to hear the answer.