Q. about the Boltzmann Brain paradox

To briefly summarize: the Boltzmann Brain paradox (BBP) is the notion that, thermodynamically, our universe is so improbable that self-awareness in our organized universe should be vastly outnumbered by hypothetical awarenesses produced by random fluctuations, and therefore a given self-awareness should be very unlikely to find itself in a universe like ours. (See Wikipedia for a more detailed explanation.)

My question is this: does the paradox disappear if you take into account how long an awareness can exist? For example, a naked brain that was created by a random fluctuation in a vacuum would have a survival time of essentially nil. If you counted not the total number of awarenesses but the amount of time they experience consciousness, wouldn’t that make experiences like ours in an organized universe much more common?
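To put the proposed measure in symbols (my own notation, just to make the comparison explicit): instead of comparing raw counts, weight each kind of awareness by its conscious lifetime, so that with $N_r$ random fluctuation brains lasting $\tau_r$ each and $N_e$ evolved observers lasting $\tau_e$ each, the relevant comparison becomes

    $N_r \, \tau_r$  versus  $N_e \, \tau_e$

and with $\tau_r$ essentially nil, experience like ours dominates unless the fluctuation brains are numerous enough to make up for their fleeting lifetimes.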

I think I agree. (I’d never heard of the BBP before!)

An awareness that just popped up randomly would have nothing to be aware of. No body, nerves, brain cells, or environment. For it to be meaningful, an entire biosystem would have to pop up randomly to fit that awareness.

Meanwhile, we know that incredible complexity can arise by evolutionary mechanisms, where a biosystem diversifies to the point where awareness pops up, not at all randomly.

So…yeah. A random awareness might pop up via quantum randomness, and last a hundredth of a second, then disperse again. It wouldn’t have a balanced flow of energy through it, to stabilize it.

Here’s a way to reason our way out of it:

  1. I’m having these experiences.
  2. These experiences tell me I am a brain in a world that works a certain way.
  3. The way my experiences tell me the world works entails that there are a vast number of brains out there identical to mine, but which exist for only a moment and are destroyed. Call these BBrains.
  4. So then, I have a good reason to think I am probably a BBrain.
  5. But if I am a BBrain, then my experiences are giving me false information.
  6. And if my experiences are giving me false information, then I have no reason to believe them when they tell me how the world works.
  7. So I have no reason to believe that there are vast numbers of brains out there identical to mine, etc.
  8. So I don’t have a good reason to think I am probably a BBrain.
  9. From the above, we can draw this conclusion: If I am a BBrain, then I don’t have a good reason to think I am probably a BBrain.
  10. Equivalent to that is: If I have a good reason to think I am probably a BBrain, then I am not a BBrain.
  11. Above, at line 4, it was established that I have a good reason to think I am probably a BBrain.
  12. Hence it follows: I am not a BBrain. QED!
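In bare propositional form (my notation: $B$ for “I am a BBrain,” $R$ for “I have a good reason to think I am probably a BBrain”), the skeleton is:

    (i)   $R$               (lines 1-4)
    (ii)  $B \to \neg R$    (lines 5-9)
    (iii) $R \to \neg B$    (line 10, contrapositive of ii)
    (iv)  $\neg B$          (line 12; from i and iii by modus ponens)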

Where my “proof” goes wrong, I think, is that it uses an “internalist” account of reason-having up to line 4, but an “externalist” account after that. In other words, up to line 4 it assumes that how things seem to me in an internal sense can be sufficient to give me good reasons to believe something. But past that line, it assumes that how things seem to me in an internal sense is not sufficient to give me a reason to believe something. Rather, after line 4 the argument assumes some kind of external factor is necessary for reason-having.

A more likely candidate for soundness, though for not quite the right conclusion, and perhaps equally weird:

  1. My experience (and nothing else) tells me that the way the world works entails I am probably a bbrain. (premise)
  2. If my experiences tell me that the way the world works entails I am probably a bbrain, then I have a good reason to think I am probably a bbrain. (premise)
  3. So I have a good reason to think I am probably a bbrain. (From 1 and 2)
  4. But if I think I am probably a bbrain, then I think my experiences are probably giving me false information. (Premise, from the definition of bbrain)
  5. If I think my experiences are probably giving me false information, then I disregard them (premise), and hence nothing tells me that the way the world works entails that I am probably a bbrain. (From 1) (Cheated just a little bit here to avoid having to add a line and renumber everything.)
  6. If nothing tells me that the way the world works entails that I am probably a bbrain, then I have no good reason to think that I am probably a bbrain. (Premise)
  7. So if I think I am probably a bbrain, then I have no good reason to think I am probably a bbrain. (From 4 through 6)
  8. So if I have a good reason to think I am probably a bbrain, then I do not think I am probably a bbrain. (Logically equivalent to 7).
  9. Hence: I do not think I am probably a bbrain. (From 3 and 8.)
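Its propositional skeleton, with $T$ for “I think I am probably a bbrain” and $R$ for “I have a good reason to think I am probably a bbrain” (again my notation):

    (i)   $R$               (lines 1-3)
    (ii)  $T \to \neg R$    (lines 4-7)
    (iii) $R \to \neg T$    (line 8, contrapositive of ii)
    (iv)  $\neg T$          (line 9; from i and iii by modus ponens)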

That’s a stranger conclusion than it may seem at first. The conclusion isn’t “I am probably not a bbrain,” but the different claim “I do not think I am probably a bbrain.” And again, it’s not “I shouldn’t think I am probably a bbrain” but the indicative “I do not think I am probably a bbrain.”

It’s a weird conclusion because, if the argument is sound, it deductively proves, with complete certainty, that I don’t think I am probably a bbrain. Could it really be that such a thing could be deductively proven??

Also, it’s weird because with just one more step the following can also be proven: “I have a good reason to think I am probably a bbrain and I do not think I am probably a bbrain.” That’s a weird sentence, reminiscent of Moore’s Paradox. (Moore’s Paradox is the idea that certain claims can be true about me but I can never affirm them: for example, it might be true that it’s raining and that I don’t believe it’s raining, yet I could never affirm that truth, for to do so would be to say “It’s raining but I don’t believe it’s raining.”

In general, “P but I don’t believe P” is often true for many values of P and I, but can never be affirmed by the relevant I.)

Dang! I really liked your proof!

Does the externalist/internalist issue matter?

Let’s put it another way: the world we perceive is incredibly complex, complex enough for us to be able to draw very detailed rules about how it works. Complex enough for us to question its very existence!

This (to me) suggests a kind of Anthropic Principle. The world has at least enough “external” existence to support incredibly complex rules.

It seems wrong, then, to use those rules to derive highly simplistic ultimate solutions. Both “God did it” and “It’s just random” are, I think, too simple. Neither of those explanations invites highly complex rules of (observed) nature.

If Occam’s Razor is valid, then this world exists, and is complex. If it isn’t valid, then all bets are off, and solipsism is the only possible viewpoint.

We know that at least one condition of the Universe can lead to self-organising complexity and large numbers of ordinary minds, like us; call them Ordinary Observers. But we don’t know whether other conditions can exist where Boltzmann Brains emerge, or how likely that might be.

It may be that BBs never emerge, or that they emerge less frequently than OOs.

Here’s a recent paper that suggests that BBs are unlikely, at least compared to OOs; but since I only understand about one word in five, I can’t say how reliable it is.

Concerning the OP: we only exist in an infinitesimal moment at any one time, suspended between the past and the future; so the fact that a BB only lasts for an instant should not bother us. Perhaps our apparent continuity is only an illusion, and we exist as a series of unconnected instants that emerge at random throughout spacetime.

But something suggests to me that this is not happening; this something may or may not be reliable, as Frylock’s weird paradox indicates.

A key point is that living creatures have memories of cause-effect chains. They know that they respond to the world, and the world in turn responds to their actions. Temporary random organizations will lack that.

But perhaps one in 10^(10^800) might, by sheer coincidence, possess the illusory appearance of a complex net of cause-effect chains.

So, if the universe is large enough for 10^(10^800) to be a small number, then such illusions would be (gads!) commonplace.

This is the sort of nonsense that falls out of “In an infinite universe, everything that is possible will happen.”
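To spell the slogan out (assuming independent trials, which the slogan quietly does): if an event has any fixed probability $p > 0$ per trial, the probability that it never occurs in $N$ trials is

    $(1 - p)^N \to 0$ as $N \to \infty$

so even $p = 1/10^{(10^{800})}$ guarantees occurrences, indeed infinitely many of them, given unboundedly many trials.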

(One comforting fact is, per Roger Penrose, such events will be very distantly spaced. We can rest assured that nothing of this sort will pop up here and now.)

If a random happenstance did mimic a detailed chain of volition and causality similar to that of a human brain, it might be indistinguishable from a “real” living organism.

Julian Barbour, citing work by Boltzmann, approaches related questions from the opposite direction. Perhaps time doesn’t exist, and the memories of causality chains we experience are just “time capsules” … and arise as interesting optimal trajectories in a timeless space. (Although I’ve read one of Barbour’s books, don’t ask me to explain this!)

Going beyond the philosophical to the physics of it: is it true that organizational complexity is thermodynamically improbable? This presupposes that some sort of clumping behaviour isn’t inherent from the start, doesn’t it?

The hot Sun shining on the cooler Earth represents a huge thermodynamic gradient. Life organizes by harnessing a tiny portion of that “order.” Seek the root source of this order in the mystery of the Big Bang. … Perhaps God just said “Let there be Light!”

Yeah, I don’t think the proof quite works, but my logic skillz aren’t sufficient to know why.
If I can paraphrase, the total argument is something like:

  1. My knowledge of the world suggests X is possible.
  2. If X, then my knowledge of the world cannot be trusted.
  3. Therefore not X

But this seems odd to me, because both the X and !X scenarios are internally consistent. The jump to !X doesn’t seem right. In your argument, that jump is spread across lines 5 and 6.

Yes, I think Frylock is mixing up lack of evidence with evidence of a lack.

Following his logic, I think we reach the following three possibilities.

  1. I am a Bbrain, as Boltzmann suggests.
  2. I am not a Bbrain as Boltzmann suggests, but I still have an incorrect view of reality (e.g. I am in the matrix).
  3. I have a more or less correct view of reality, but Boltzmann is wrong (e.g. his estimates of the complexity of intelligence are off).

The fact that Boltzmann implies a lack of reality of our perceptions just means that we can’t distinguish case 1 from case 2.

I am not sure why you think this. Can you elaborate? Do you think a confusion between lack of evidence and evidence of lack is contained in one of the lines labeled “premise”? Or alternatively, do you think one of the inferences in the argument works only on the assumption that lack of evidence is evidence of lack?

As far as I can tell, I don’t see anything in the argument that refers to evidence of a lack of anything. What do you have in mind there?

That would summarize the first version I gave, the one I said turns out not to work because of the internalism/externalism elision.

The second version doesn’t purport to prove I’m not a Boltzmann brain; rather, it only purports to prove that I don’t believe I’m a Boltzmann brain! (Even if I have good reasons to think I am one; indeed, particularly if I have good reasons to think I am one!)

Isn’t that just a variety of “Last Thursdayism?” (The universe was created last Thursday, but with the illusory appearance of having existed before that.)

The only problem with such notions is that they cannot be examined scientifically. There is no conceivable experiment or test that would refute these ideas. They don’t lead us anywhere.

Regarding the OP, the issue of BBrains isn’t solved by their only existing in an instant, because our experience is only ever that of an instant (William James’ ‘specious present’ notwithstanding); and besides, the continued existence of a BBrain in whatever circumstance is only unlikely, not impossible; it only serves to ‘thin out the herd’, but, given a sufficiently large universe, doesn’t snuff it out completely, and may in fact allow it to retain sufficient numbers to remain the more likely alternative.

Personally, I think it’s not much to worry: given, for simplicity, an infinite universe, my experience right now may be that of a BBrain that puffs away in an instant; but it will also be that of a BBrain that lasts a little while longer, and that of an actual person sitting in an actual room exactly the same as the one I take myself as sitting in that has just randomly popped into existence and will vanish again in a microsecond, or persist for longer, and finally, the experience of an actual person in an actual universe with the same actual history as I take myself to have. These experiences are identical: there’s no notion of distance between them, and thus, they coincide. But then, my experience of myself as that person with this history, and so on, is also veridical: there’s an actual person with that actual history, etc.

Granted, there will be BBrains (and similar statistical fluctuations) with an identical experience now, which will radically diverge from the experience that I will (hopefully) continue to have—being torn apart in a torrent of random images and experiences, say. This is part of a larger problem regarding trans-temporal identity that occurs in every setting in which there are multiple possible futures that actually occur, in some sense, such as quantum mechanical ‘many worlds’, e.g. How do I experience the one, but not the other?

I think at least a partial answer can be given by invoking the notion of algorithmic complexity. The BBrain argument rests on the assumption that what has many parts is complex, and that the complex is less likely than the simple. In this sense, a single human brain is less complex, and hence more likely, than an actual human being, much less an entire world containing many, and should thus spontaneously occur more often.

But this is arguably a wrong notion of complexity. It’s an infinite number of monkeys typing on infinitely many typewriters, trying to write your life history: fragments of it occur more often than the complete and faithful transcription. But in reality, it’s more like the monkeys randomly write code, which is then executed by the laws of nature. Thus, we should look at the complexity of the code, of the input, not the output, i.e. our life stories/brains/etc. And there, a single brain may be much more complex than a universe containing many instances of brains in an orderly history.
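One standard way to make “weigh the code, not the output” precise (my gloss, in algorithmic-information terms) is the universal prior, under which the probability of an output $x$ is governed by the length $K(x)$ of the shortest program that produces it:

    $m(x) \approx 2^{-K(x)}$

A typical random $n$-bit pattern has $K(x) \approx n$, so specifying a lone brain bit by bit is exponentially costly, while compact laws acting on simple initial conditions can have a small $K(x)$ and yet output a whole orderly universe full of brains.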

This may seem somewhat counterintuitive, but a collection of very complex things can be a very simple thing: consider that there is a very short program that executes all possible programs, while most of these programs, in their simplest form, will be enormously complex. Or consider Borges’ famous ‘Library of Babel’, which contains every book with, say, 1 million letters. Clearly, there will be enormously complex texts in there, which take, on average, no less than 1 million letters to describe; but I’ve just described the whole library in a couple of words. (In fact, algorithmically, the information content of everything is exactly the same as the information content of nothing, i.e. 0; so, creating something out of nothing is the same as deleting something from everything.)

Thus, in a world of natural laws acting on initial conditions, or analogously, in a computer executing programs, the whole Library of Babel is, in fact, much more likely to spontaneously pop into existence than any single book within it is; similarly, a universe populated with actual observers experiencing a (at least mostly) orderly history then may be actually more likely than BBrains with disjointed, random experiences only accidentally matching ours. But if this likelihood is sufficiently great, we should expect ourselves to have an orderly, veridical experience of the world.

Whew! I was following you, right up to that point!

Let me try to grok this: I can, in fact, write a very simple computer program to generate every possible book in the library of Babel. It’s just a bunch of nested “For…Next” loops, really. (Nested a million deep, but, nevertheless…)
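For concreteness, here’s a minimal sketch in Python rather than For…Next loops (the two-letter alphabet and three-character books are demo placeholders, not Borges’ actual parameters). itertools.product plays the role of the nested loops, and the million “data pointers” are just its internal odometer:

    import itertools

    def library_of_babel(alphabet, book_length):
        """Yield every possible book of book_length characters, in order."""
        # itertools.product acts as book_length nested loops; the whole
        # generator stays this short no matter how large book_length gets.
        for letters in itertools.product(alphabet, repeat=book_length):
            yield "".join(letters)

    # Demo: 3-character "books" over a 2-letter alphabet (8 books in all).
    # With a ~25-symbol alphabet and book_length=1_000_000, the same code
    # would enumerate the entire Library of Babel, given unlimited time.
    for book in library_of_babel("ab", 3):
        print(book)

The description stays a dozen lines however large book_length gets, even though the output grows astronomically.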

But…spontaneously? It seems to me that, while my computer program is simple in design, it’s still extremely complex in function. It requires a million “data pointers” to keep track of the current state of all million loops. Each pointer must be incremented, and then properly reset to the beginning, many many millions of times. There’s a lot of ways that can go wrong.

The same with the spontaneous origin of the book by most other methods. A “genetic” method might very easily fall prey to polyploidy – two instances of one character. DNA certainly flubs its replication process now and then. (How often, actually? Does an amoeba ever actually enjoy a 100% successful DNA reproduction, or are there always some errors at the level where atom-meets-atom?)

Now, information theory offers us lots of ways to improve our processes. A simple checksum goes a long way, and “repeat the instructions” is the classical management tool from 3,000 B.C. to improve the transmission of commands. But these increase the complexity of the process, making a spontaneous appearance that much less likely.
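As a toy illustration of how little machinery the detection side needs (a made-up checksum for the example, not any real protocol):

    def checksum(text: str) -> int:
        """Toy checksum: sum of character codes, modulo 256."""
        return sum(text.encode()) % 256

    original = "to be or not to be"
    corrupted = "to be or nat to be"   # one-character copying error

    # The checksum travels with the message; a mismatch reveals corruption.
    print(checksum(original) == checksum(original))    # True: intact copy
    print(checksum(original) == checksum(corrupted))   # False: error caught

Even this one-liner, though, is extra description that a spontaneous process would have to get right.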

(Certainly the “infinite universe” solves all such problems…but only at the cost of producing trillions upon trillions of “Almost Libraries,” libraries which are complete except for a couple of typos in one of the books. Of course, as far as intelligent life is concerned, an “Almost Mind” is probably indistinguishable from a Mind.)