How did the universe and consciousness create themselves from nothing?

Well you’re kind of putting me between a rock and a hard place.
In terms of the OP, who possibly thinks of consciousness as something needing a separate explanation from other characteristics of life…(s)he’s wrong; there’s copious evidence it evolved.

But yes if we’re having a more nuanced, scientific conversation about consciousness, there’s plenty we don’t know about the exact evolutionary path taken.

This goes along with what you said earlier, that behaviors could be selected for without consciousness, so every outcome would be the result of something like a chain of dominoes falling.

This reminds me of how, when I was in high school, pocket calculators were a new thing. I recall trying to imagine how they work, and the only thing I could come up with was that the engineers had set up this chain-of-dominoes for every possible combination of inputs, to give the correct output. Somebody had to sit down and tell it that 57635 plus 87569 is 145204.

But later I learned about programming and algorithms, and I realized how absurd my earlier thoughts had been. It was MUCH easier and more straightforward to give it methods to find a response through generic algorithms.

That seems the same for your chain of dominoes that evolution could have used for our survival. The simpler explanation is that a generic brain algorithm to experience survival and strive for it is many orders of magnitude more straightforward to achieve.
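The contrast between the two pictures can be made concrete with a toy sketch: an adder that must precompute every answer, domino-style, versus a generic digit-by-digit algorithm. All function names here are invented for illustration.

```python
# Toy contrast: precomputed "chain of dominoes" versus a generic algorithm.

def build_lookup_adder(max_n):
    """Precompute every (a, b) -> a + b pair; grows quadratically in max_n."""
    return {(a, b): a + b for a in range(max_n) for b in range(max_n)}

def algorithmic_add(a, b):
    """Grade-school addition: one small rule, applied digit by digit."""
    result, carry, shift = 0, 0, 1
    while a or b or carry:
        s = a % 10 + b % 10 + carry
        result += (s % 10) * shift
        carry = s // 10
        a //= 10
        b //= 10
        shift *= 10
    return result

table = build_lookup_adder(100)   # already 10,000 entries just for 2-digit sums
assert table[(57, 35)] == algorithmic_add(57, 35) == 92
assert algorithmic_add(57635, 87569) == 145204   # the example from the post
```

The table's size explodes with the size of the inputs, while the algorithm handles numbers of any length with the same few lines; that's the "orders of magnitude more straightforward" point.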

I don’t see how you’d conclude that. Certainly, there’s nothing in physics that forces you to; and if it’s true that “nothing” is impossible, then we wouldn’t really need any more arguments.

Again, I don’t see how. What these arguments show is essentially that there’s a state of zero particles that can evolve to a state that contains particles, and a little more. This isn’t really surprising, but it also simply doesn’t have anything to do with the question of how the universe came about; merely with how an early stage of the universe may have evolved into the later stage we see now. This is hugely fascinating, but it’s being sold as something it manifestly isn’t.

David Albert put it well: essentially, these arguments show how a state of my hand that contains no fist may evolve into one that does contain a fist; but they don’t do anything towards addressing the question of why there’s a hand in the first place.

So as an answer to the question ‘why is there something rather than nothing’, this sort of thing simply never gets off the ground.

The analogy doesn’t really work, though: you had a perfectly viable model of how calculators could do the job they do; it’s inefficient, but given the necessary resources, one could, in fact, create a ‘calculator’ that simply pulls the result of every calculation from memory (which was in fact done with logarithm tables, and is still done whenever one hears the much more fancy term ‘precomputation’).

With consciousness, we don’t even have that rough initial guess. Worse, there are arguments that no such guess is even possible; certainly, no other problem has that sort of status. There are plenty of open problems we can point to, and in every case, it’s easy to at least outline how a solution might look (as you did with the calculator). I don’t know what the exact theory of quantum gravity might look like, but I’ve got a good idea what kind of theory it’s going to be, and I can envision candidate solutions.

Something like a calculation is the performance of a certain function: everything that performs that function, calculates. But the body seems, in principle, to be able to perform its functions perfectly well without there being any conscious experience associated with that. Stub my toe, recoil, emit some choice profanity: all perfectly doable without anything such as a pain experience anywhere.

Any attempt to explain consciousness so far, thus, has stopped right at what’s usually called the ‘explanatory gap’: the difference between the third person knowledge we can gather about the molecular structure of ammonia, and the way it smells. There’s lots of things we can deduce from that molecular structure: its solubility in water, whether it’s gaseous, liquid or solid at room temperature, its color, and so on (never mind that most of these things would be computationally infeasible to predict; we can take the perspective of an ideal reasoner here). But what it smells like? Whether it smells the same for me, as it does for you? I can’t for the life of me imagine the sort of story we’d have to tell to even begin answering such questions.

If you turn away from a stimulus, the processing that detects and evaluates that signal is pain. If a creature were unable to process that signal as well, it would not be able to avoid that harmful stimulus, and natural selection does the rest.

Your brain doesn’t bother attaching names to colours other than by linguistic convention. The reaction to different wavelengths is all that matters. If a poisonous berry is wavelength A and its non-poisonous cousin is wavelength B, then it does not matter one jot whether your subjective experience of A is the same as another person’s. All that matters is that you eat the right berry, and if you can’t then you’ll be dead and your crappy genes are dust.

So yes actually seeing “red” (where “red” is the right berry) is adaptive.

I’m not sure what you mean by “experiences” here; do you mean what we feel?

you are using qualitative terms there and evolution really doesn’t care.

“just” causal closure? What you describe here is exactly what evolution cares about and acts upon. Is it possible that there are non-physical drivers of consciousness? I don’t know, but there is certainly no necessity for them so it makes no sense to invoke them until required. It may be an interesting philosophical conversation for some but I don’t think it leads anywhere useful.

How complicated is this automaton? Something simple like a prion or a virus or a bacterium? We can get them to react to stimuli in simple ways, and we wouldn’t think them conscious, but as we step up the complexity we will reach more complex behaviour and will reach that point of consciousness eventually. If ultimately you created an atom-for-atom reconstruction of a human, they would be capable of everything that we are, including conscious thought; they would be human.

Albert is responding to Lawrence Krauss, who adamantly makes the case I am citing.

Albert quotes him as saying:

This appears to be exactly what you are doing now. You state unequivocally that “It has properties, such as a nonzero energy expectation value; nothing doesn’t have that. Indeed, it’s literally meaningless to say it does.” Whether that state led to our current state is an interesting and disputed hypothesis. Albert doesn’t refute it in any way. Possibly there has always been something and the rearrangement of energy is a matter of a closed fist and a hand showing fingers, as Albert puts it. Moreover, he states the argument in terms exactly equivalent to my own.

I think Albert is dead wrong in dismissing this argument as trivial, as he does in his last paragraph. I think that dismissing “nothing” as a possible argument has vast implications. It undoes 2000 years of philosophy at a stroke, so I don’t wonder that a professor of philosophy would rail against it.

I agree with you,( and my original post just used your quote as a starting point, it wasn’t a criticism of what you said) I am perfectly comfortable with “we don’t know yet” as an answer. However, we can put forward lots of plausible pathways as to how consciousness might have been beneficial in evolutionary terms and so develop to the human level. We also have empirical evidence of brain complexity and increased capability and increasing levels of consciousness.
I don’t see any value in exploring anything beyond the purely physical, I see no reason why the mind should be anything other than the purely physical.

So, in my simple robot, does it feel pain? It processes the signal, and reacts; to other signals, it wouldn’t have reacted.

And how does a subjective dimension arise? Why, for instance, does pain feel the particular way it does, and not some other way? Could an alien with different ‘processing’ feel something subjectively different, yet react in the same way?

But that reaction doesn’t suffice to fix what a color looks like. As an example: a newborn has, thanks to whatever genetic quirk, just the inverse wiring from us. When they experience light of wavelength A, they have what we would call a green-experience, rather than the red-experience we have. However, growing up, they’ll of course attach the name ‘red’ to that experience, and identify the color red correctly when prompted, react to red in the same way we do.

Now, my wiring is changed, to resemble theirs. Do I notice a difference?

No; reacting to light of wavelength A in a particular way is adaptive. Whether that’s accompanied by ‘seeing red’, ‘seeing green’, or not seeing anything—having no phenomenal experience at all—is completely immaterial.

What’s typically called ‘phenomenal experience’, or ‘qualia’: the way red looks to us, for instance.

Yet, you claim that evolution is responsible for these qualitative experiences.

Exactly; that’s the problem. Evolution cares about how I react to wavelength A; it doesn’t care about how seeing red looks to me. The latter is what we’re concerned with.

I don’t think so: after all, the physical world is known to us only via its causal properties, i.e. the effects it has on us—so the idea of something non-physical impacting on the physical in some way seems incoherent from the outset.

Why? How? This is always the sort of impasse these discussions reach: if you just bundle enough of these unconscious reactions together, I’m sure consciousness somehow just sparks up.

But the difference we’re talking about is not a quantitative one; it’s qualitative.

You’d think so, but examine your own premise: an interaction of this here atom with that one there doesn’t contain a spark of consciousness. So why would it somehow arise if we put this interaction in the same room with a lot of others?

Yes, I know: water is liquid, while single water molecules aren’t. But the fluidity of water is in fact readily apparent from the properties of a single water molecule: it’s conceptually simple to derive the details of its bonding with other, identical molecules, given its configuration. Fluidity doesn’t just happen; it’s very clear how it emerges from the properties of single molecules.

With consciousness, however, the story is always: obviously simple processes aren’t conscious, but then, you put enough of 'em together, something something something, ta-daa, consciousness. Saying ‘consciousness emerges’ isn’t an answer: it’s a re-statement of the question.

The problem is, as long as the systems are simple enough so that we can easily hold them in mind in their totality, it’s completely clear that there’s not any conscious experience necessary for them to perform their function. But then, once you get enough of that together, we can’t easily do that anymore, so who knows, right? Maybe consciousness just sorta sparks up?

But in every case where something ‘sparks up’, we can easily see how and why it does. With consciousness, for some reason, people never even really attempt to address that.

Sure, if we dismiss the problem, it goes away (that’s literally why this strategy is trivial). You can do that with every problem. But the question is, what justification do we have to dismiss it? And once we ask that question, the philosophers all come crawling back out of the woodwork.

On a related note, this is one thing I’ve never quite got about this board. Usually, people on here are quite big on at least fairly considering the opinion of the experts. On vaccines, people defer to doctors; on climate change, to climate scientists. But on philosophy? Everybody seems eager to listen to anybody but philosophers. For some reason, that lot falls most often on us poor physicists; but in reality, somebody trained in physics doesn’t really have much more of an informed opinion on matters of philosophy than they do on matters of dentistry (something Krauss in particular is always eager to demonstrate).

I’m not sure I can make any more clarifying points for the first part of your response as I’d just be simply re-stating what I’d already said.

For the parts that I’ve quoted here, though, I’ll say that I have no problem imagining consciousness emerging as sensory complexity increases. I don’t see any need for it to suddenly “spark” into being. Like the 3D sonar pictures of bats: they wouldn’t have just appeared fully-formed, but would have gradually evolved from a baseline normal hearing system, with each “improvement” conferring a benefit.
So it can be with consciousness. Go back through the proto-humans and early primates. Were we able to run tests to measure consciousness on them, I’d expect to see a gradual change over the millions of years involved. I wouldn’t expect to see a single generational leap from “non-conscious” to “conscious”, with Lucy-Junior being perplexed and frustrated as to why neither of his parents is able to grasp even the simplest of card games.

I think trying to draw an equivalence with water isn’t helpful. A single neuron or even the simplest sensory structure of a prokaryote is orders of magnitude more complicated than a water molecule. Once life has bridged that gap to a sense organ, I think the evolution of a brain-like structure and some form of consciousness is pretty much a nailed-on certainty.

If you think that there’s a split between physicists like Albert and Krauss you’ve never looked at philosophy. Nobody agrees with anybody about anything. Add that to the humongous handicap that nothing any philosopher says can be proven or shown to have empirical backing. Perhaps that’s why few outsiders take the word of a philosopher on any issue.

Dude, this is not a hard problem. Thus, I refute it. You keep talking about how we could imagine a really sophisticated robot that could act exactly like a human being, but not have consciousness. It would just have a really complicated stimulus-response system: it screams when it stubs its toe but doesn’t feel pain, it complains about Mondays but doesn’t feel boredom, it goes out for a pizza but doesn’t feel hunger.

Except no it doesn’t. That is what consciousness IS. If you really could construct a robot that could act “as if” it were a person, then what that robot does IS consciousness. Consciousness is just a partial awareness of our internal state. I feel anger, but consciousness is when I’m aware that I’m angry. As for the internal qualia of whether red seems like red, here’s how I refute it: there’s no such thing as qualia. It’s a nonsense word.

When I look at a red apple, something forms in my mind and I experience the color “red”. Except I know for a fact that’s not exactly the same thing that forms in your mind when you look at the same apple, because your mind isn’t physically connected to my mind. If it’s impossible to determine that we have the same qualia when looking at a red apple, then the only logical answer is that qualia don’t exist, and are a bullshit way of thinking about the problem. They are invisible intangible fire-breathing dragons in my garage that disappear when you look for them. What’s the difference between an invisible intangible undetectable fire-breathing dragon that exists in my garage and nothing? If there’s no difference, then it’s incoherent to say that the dragon exists.

And also, I know that I experience colors differently than other people. I’ll look at a shirt and call it brown, but my wife will roll her eyes and say it’s green. Because my cones are slightly different, I have partial color blindness. Except I see red things, I see green things, I can tell you a green apple is green. But I can’t have the same internal qualia as my wife, because I literally see differently than she does.

What makes you think I have consciousness? Because I react kinda like you? And you have consciousness? What makes you think you have consciousness? Because you experience internal states and are aware of those states? That’s not a hard problem. Why do we understand our own internal states? Because we’re social animals who live in a complex social system and we have to keep track of the internal states of the rest of the hairless primates around us. And understanding that Thag is angry gives us an advantage in dealing with Thag. There we go. It’s not mysterious. And the “subjective feeling” we get is just how it works. Maybe Thag is like the Pyro, and when I believe he’s angry, he’s really experiencing lollypops and rainbows. But if he’s really experiencing a completely different reality than I am, why is it that I can predict how Thag will react when he’s angry?

There’s nothing magic about the state of being angry. It’s just a name we give to a particular internal state, and the reason we believe that others experience the same state is that they react the same way over and over again. There could be lots of human internal states that don’t have names, because those states are idiosyncratic, and whenever Thag tries to explain how he feels to other people, he can’t, because as far as Thag can tell nobody else feels like he does. Or maybe they do, and he just can’t figure it out.

In any case, it’s not super-mysterious, unless we redefine “consciousness” to mean something that nobody except a few philosophers agrees it means. There’s glory for you. How do I know Thag is angry? Because he acts as if he’s angry. There’s no qualia there. How do I know that I’m angry? Hey, sometimes I’m angry and I don’t even realize I’m angry. Where’s the qualia then? How can I be angry if I don’t have a subjective experience of being angry? Well, the human mind is complicated, and so is the chimpanzee mind, and so is the monkey mind, and so is the tree shrew mind, and so is the lizard mind, and so is the fish mind.

When we get down to the wormy-thing mind, maybe it’s not so complicated, and we can map exactly the exact neurons that fire to each exact stimuli, and the exact physical response. And then we can say that the worm is “just” a meat robot, without consciousness. But consciousness is just a word we use to mean a creature that reacts like a human being, so whatever it is that causes humans to act like humans that’s what we mean by consciousness. And so a meat robot that can act as if it were a human being is conscious, because that’s what it means to be conscious. And of course, humans are those meat robots. But we’re not “just” meat robots. You can’t smuggle that “just” into there.

Well then, explain it to me! Tell me how you tell the blind man what it’s like to see red.

Again, I’m not saying that consciousness can’t be gradual, that it’s some kind of all-or-nothing deal. I want to know how consciousness—any little bit of it—reduces to its material substrate. Imagine the simplest conscious being there could be. Tell me what it’s conscious of, and how that consciousness comes about. Tell me how a subjective viewpoint arises; tell me what physical facts entail which phenomenal facts. Or at least give me a hint—after all, you seem sure that story can be told, so you’ll have to have some idea of how it might go, right?

So again: once things just get complex enough, consciousness.

Agreements emerge among philosophers all the time. Mostly, that’s when people start calling them scientists, though.

That’s slightly facetious, of course. But if things were as you say, then we wouldn’t have any science, because what science is, is a philosophical question—and yes, that definition evolves, from positivism to falsificationism to methodological anarchism to Kuhnian paradigm shifts and whatnot. Not everything has an immediate right answer that just has to be discovered; not every discussion just boils down to a game of twenty questions. The object of discovery may evolve as it is subject to discussion, but that doesn’t mean there aren’t better and worse answers. That everything should have empirical backing is philosophy; it just happens to be rather bad philosophy.

Actually, I’ve talked about how we can take really simple systems that replicate really simple behaviors of conscious agents while clearly not being conscious themselves; this puts the onus of proof on those that claim by bundling such simple things together, consciousness ought to arise. Nobody has even tried to rise up to that, and nobody will.

Those are contradictory statements. Either acting ‘as if’ something is conscious is consciousness, or partial awareness of our internal state is consciousness.

And of course, the latter is question-begging: since awareness is a feature of consciousness, explaining consciousness by means of awareness is circular. The question is exactly how we can be aware of our internal state!

But that’s all that qualia are: what forms in your mind when you experience the color red.

There are, of course, more sophisticated ways to attempt to formulate an eliminativist theory of mind, though; but ultimately, all of them must eventually rise to the challenge of explaining how the illusion of subjective experience comes about. Eliminativists typically believe that this will be easier than explaining how ‘real’ subjective experience comes about, but so far, none have managed to cash in on that intuition. Personally, I doubt this can be of help: there is no material difference between having subjective internal states, and being under the illusion that one does. Whether I have a migraine, or merely believe I do, the fact is, my head hurts.

Why does something need to be physically connected to be the same? If a computer on Mars performs a computation, and a computer on Earth does, they can be exactly the same without both ever having interacted.

That’s not even close to the only logical answer; indeed, it’s not actually a logical answer at all! It’s perfectly logically possible that incomparable properties exist.

Besides, qualia are not a way of thinking about the problem; they are the problem. That we have subjective experience is just data; eliminativism is the failure of coming up with a theory that fits the data, and consequently, seeking the fault with the data rather than with one’s theories. That such desperate moves are even considered speaks to the hardness of the problem. In no other discipline does one just try to throw out all the data we have, because we can’t seem to make sense of them.

No, quite to the contrary: they are the most basic, most immediate realities we ever come into contact with. Everything else, all that we know about the external world, all that we conclude about physics, other people, the sun and the stars comes first and foremost mediated via subjective experience; they are the things we ought to be certain of most of all, whereas we may doubt anything else.

Before I can begin to know what the Moon is, I know how it feels to me to see its light.

So now you’re busy comparing those things that don’t exist, and even if they existed, couldn’t possibly be comparable!

I agree, none of that is mysterious. It also has nothing to do with subjective experience, and the problems it poses: anger in others is displayed via certain behaviors, and one’s own behavior can be adapted in response. This is all on the level of functions; what, if any, subjective experience accompanies this is entirely immaterial.

That’s of course always a good answer. Very Aristotelian. Falling down is just how stones work, it’s their nature to want to move downwards! No need for a Newton with a theory of how this works, much less an Einstein. It’s just how it works!

I keep getting confused by what it is you’re trying to argue. Do qualia exist, or don’t they? If they don’t, then how come there’s a separate experience of being angry in addition to simply being angry?

Picture a robot whose internal dynamics is solely given by a humongous lookup table. For any input, there’s the appropriate output. All it ever does is match inputs to outputs.

With a huge enough lookup table, it can act like a conscious being for any given length of time. Does that establish that it’s conscious in the way we are?
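The robot in this thought experiment can be sketched in a few lines, just to make it concrete; every stimulus and canned response below is invented for illustration.

```python
# A minimal sketch of the lookup-table robot: its entire "inner life" is a
# dict mapping each input to a precomposed output. No processing, no state,
# no generalization: pure table lookup.
lookup_robot = {
    "stub toe": "ouch! @#$%!",
    "see red berry": "eat berry",
    "see green berry": "avoid berry",
    "how are you?": "fine, thanks",
}

def react(stimulus):
    # Anything not anticipated by the table's designer gets no reaction at all.
    return lookup_robot.get(stimulus, "blank stare")

assert react("stub toe") == "ouch! @#$%!"
assert react("novel stimulus") == "blank stare"   # the table has no answer
```

For any finite stretch of behavior, a large enough table of this shape reproduces the outputs of a conscious agent; the question is whether matching outputs is all there is to being one.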

I don’t see how those observations necessitate a beginning or kick any cans down the road. The observations show the universe used to be much more dense and look very different structurally. Then it exploded. We often equate the big bang with the “beginning” of the universe, but that seems unwarranted. It’s just as likely it was one of many changes our universe has seen in its infinite history.

I don’t think you can, but nor do I think that the ability to do so tells you anything meaningful about the concept of consciousness.

good, because it is empirically the case.

yes, it would be nice to know the exact mechanism wouldn’t it. We don’t…yet.

no, I don’t claim that story can be told, nor do I think that being able to tell that “story” is at all relevant. The brain is a physical entity and incredibly complex. Its workings are mysterious and we have barely scratched the surface but there is nothing to suggest that the mind is anything other than a product of that purely physical entity and so should, ultimately, be amenable to natural scientific analysis.

Well, that seems to be the case, yes. Conscious thought and self-awareness seem to correlate pretty well with brain complexity.

Let me ask you a straight question.

Do you think that consciousness is a product arising purely from the physical matter of the brain and nervous system?
If not, what leads you to think otherwise?

Two things: first, why would it undo 2000 years of philosophy?

Second: what if they don’t dismiss it as a possible argument, but instead go for something like unto the I-Have-No-Need-Of-That-Hypothesis line to say we can’t technically rule out the possibility; but, as far as we can tell, it ain’t so?

“A huge enough lookup table”?

Dude, that’s not how the brain works. Thing is, your stimulus-response lookup table has to have some sort of memory, because how else can the robot respond sensibly to human conversation? It literally can’t be a giant lookup table, because it wouldn’t be physically possible to map every possible human sentence and construct multiple plausible responses to each sentence.

The reason the Chinese Room won’t work is that it can’t be actually implemented, because you’d need a room as big as the solar system, and how can you give your snappy comebacks to “Working hard or hardly working?” when the answer is 12 light-seconds away?
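The size worry can be made concrete with a back-of-envelope count; the vocabulary size and sentence length below are illustrative assumptions, not measurements.

```python
# Why a conversational lookup table can't be built: with a modest 10,000-word
# vocabulary, the number of distinct 20-word strings alone already matches the
# roughly 10**80 atoms in the observable universe -- and a real table would
# have to index entire conversation histories, not single sentences.
vocabulary = 10_000
sentence_length = 20
possible_sentences = vocabulary ** sentence_length   # (10**4)**20 = 10**80
atoms_in_universe = 10 ** 80                         # rough standard estimate

assert possible_sentences >= atoms_in_universe
```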

But to bite the bullet, despite my protestations that a Chinese Room isn’t physically possible, if you really could show me a Chinese Room that really could pass a Turing Test, then yes, I’d agree that the entire system is conscious. Not the book, but the system of memory and communication and all the dudes running back and forth fetching papyri. If you really could physically implement it, and it really did work, then yeah, it would be conscious. In the same way that your brain is conscious, despite the fact that your neurons and glial cells and what have you are not conscious.

This is like asking if a computer chess program can “really” play chess. It’s not playing chess, it’s just solving various math problems. Right? Except what’s the difference between really playing chess and only acting as if you’re playing chess? I hereby assert that there’s no difference. A drunk badger wandering on a chess board isn’t playing chess even if he’s moving around the pieces. He’s not playing really bad chess, he’s not playing chess. But if he’s sitting there obeying the rules of chess and moving the pieces around with his little paws or nose? Then yeah, he’s playing chess.

Or if you assert that a computer program can’t play chess, then how do you know a human being can play chess? How do you tell the difference between a human being who can play chess, and a human being who’s just knocking over pieces like a drunk badger? If you can tell the difference between a human playing chess and a human being not playing chess, why can’t you use the same heuristic to determine if some unknown entity is playing chess or not?
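The "same heuristic" idea can be sketched as a purely behavioral test: check whether the moves obey the rules, regardless of what kind of entity produced them. This toy only knows how a knight moves, and all names are invented for illustration.

```python
# Classify an entity as "playing chess" if its moves are legal,
# "drunk badger" otherwise. Squares are (file, rank) pairs on an 8x8 board.

def knight_moves(square):
    """All legal knight destinations from a given square."""
    f, r = square
    deltas = [(1, 2), (2, 1), (2, -1), (1, -2),
              (-1, -2), (-2, -1), (-2, 1), (-1, 2)]
    return {(f + df, r + dr) for df, dr in deltas
            if 0 <= f + df < 8 and 0 <= r + dr < 8}

def is_playing(move_sequence):
    """True iff every step is a legal knight move from the previous square."""
    return all(dst in knight_moves(src)
               for src, dst in zip(move_sequence, move_sequence[1:]))

player = [(1, 0), (2, 2), (4, 3)]   # b1 -> c3 -> e4: legal knight hops
badger = [(1, 0), (5, 5)]           # a piece shoved across the board
assert is_playing(player) is True
assert is_playing(badger) is False
```

The test never inspects what the mover is made of or what it experiences; it looks only at conformance to the rules, which is exactly the point being made about humans and programs alike.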

The question asked by the OP is the basis for much of philosophy. It was raised by the Greeks and it became essential to Christian belief, which powered the western world’s philosophy for 2000 years. It certainly has a bearing on being and existence and other of the fundamental arguments in philosophy like ontology. (I say ontology recapitulates phylogeny: our beliefs are based on our history of beliefs.)

Laplace gave the I-Have-No-Need-Of-That-Hypothesis line to refute a Creator, i.e. religion, in favor of scientific explanation. This isn’t even about competing scientific explanations. Even Albert, in his own words, acknowledges there is always something. Whether vacuum energy gave rise to the present universe is not the issue; that may be right or not. That there was never nothing is the issue. If acknowledged it must be incorporated. And note that no philosopher brought this into the discussion: it was forced upon them by science.*

*To my knowledge. I haven’t encountered a philosopher who negated nothing out of hand. If there is a branch of philosophy that did so I definitely would like to hear about it.

I’m no expert, but: while I don’t recall Nietzsche negating it out of hand, I also don’t recall him relying on it. I don’t, off the top of my head, recall Hume ever negating the possibility; but I don’t recall him relying on it, either. What did Spinoza believe? What did Marx believe? What did Wittgenstein believe? What, in their writings, do you figure would’ve fallen apart if they hadn’t granted this?

I assume your subconscious mind solves problems. Mine is very good at anagrams. How does it do it? Do you have visibility into its actions?
Now, I trust that when you write a post you are observing yourself writing it, analyzing what you have written, and sometimes going back to improve it. There is feedback between your conscious mind and your writing, feedback which doesn’t exist for your subconscious mind.
Our subconscious minds work better with practice, and we can program them somehow (like for driving, which is obviously not inborn). But we need our conscious minds to radically alter a problem-solving strategy. Animals do this through natural selection; we can do it in a lifetime.
That should show the evolutionary advantage of consciousness pretty clearly.
I don’t understand the pain thing. Non-conscious animals react to pain pretty much as we do, though we can do better at eliminating the source. Our reflexes don’t even need higher brain function to work. You take your hand away from the stove long before you think about taking your hand away from the stove, after all.