Have there been any autistic savants of philosophy?

I have not read all of your long discussion with Frylock, but certainly I agree with you here. I have had this perspective for some years now. This is a philosophically attractive notion to me, because it is more unifying and symmetric as a physical theory – the universe doesn’t have to be so arbitrary; it can be completely random, and yet logical narratives can be self-selected anthropically. I have thought about this a lot, but unfortunately I’ve never really been able to find a way to make any good physical predictions from the idea. Boltzmann brain type arguments tend to cause problems. [sorry in advance if this is a non-sequitur to your conversation with Frylock]

If I bet you the next sequence would not be a sequence of ones, I’d almost certainly win that bet.

Yes, of course.

Because the lawfulness doesn’t involve a law about the bits; it involves a law about the bitstrings. This is the law (the law of typical equinumerosity) that lets me win the bet mentioned above.
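
Just to make that typicality concrete, here’s a minimal sketch (Python, purely illustrative; the 1000-bit length is an arbitrary choice of mine): count the ones in a batch of random 1000-bit strings. The counts cluster tightly around 500, while the all-ones string has probability 2[sup]-1000[/sup], which is why I’d almost certainly win that bet.

```python
import random

n, trials = 1000, 10_000
counts = [sum(random.getrandbits(1) for _ in range(n)) for _ in range(trials)]

print(min(counts), max(counts))  # overwhelmingly near 500 ones per string
print(2 ** -n)                   # chance of the all-ones string: ~9.3e-302
```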

I disagree with this as well. I’ve explicitly rejected the bolded position a couple of times.

There do not have to be laws in order for us to be able to make predictions. But in order for us not to be crazy for making predictions, we must be assuming there are laws.

Absolutely.

If you’re interested in that concept seen through all the way, you might enjoy looking into the concept of event symmetry (if you don’t know of it already) – the idea is that physics should be invariant not just under diffeomorphisms or gauge transformations, but under the full permutation group of all spacetime events; what you get is something like the ‘mixed up’ computation I described earlier: from the inside, everything looks like a continuous physical world, even if the outside view, to the extent the concept is consistent, is basically just random dust. Phil Gibbs has written a whole book on the topic that’s available online, but just for kicks, and if you like science fiction at all, I’d suggest having a look at Greg Egan’s novel Permutation City, which I think can take some credit for originating the concept.

As for Boltzmann brains, perhaps one way to cope with them is just to accept them: in some infinite (or large enough) dust, there will be versions of you that are Boltzmann brains, but there will also be versions of you that correspond to more ordinary observers, the latter greatly diminished in multiplicity compared to the former, of course. But from the inside, it doesn’t matter: take the case of a random Boltzmann brain randomly dissipating after having been formed – you won’t notice any of this, since the observational state of the brain has a logical continuation somewhere else in the dust, in a proper observer, perhaps, or just a longer-lived Boltzmann brain; ultimately, from the inside, everything you experience will be consistent with the existence of an ordinary, physical exterior world, even if that is just a part of the truth.

Also, there’s a recent result due to Aguirre, Carroll and Johnson (here’s the paper) that essentially states that in a fluctuational cosmology, violations of the second law unfold as the time-reverse of ordinary entropy-increasing evolution; so an ice cube in warm water isn’t likely to spontaneously form in one massive improbability, but rather to slowly accrete in such a way as to appear to be ‘melting in reverse’. So things don’t spring up fully formed out of nothing, but are more likely to form gradually – though I’m not sure what exactly that means in relation to Boltzmann brains.

Well, but whether you make a profit on it depends on the odds I’m willing to give you!

Why can’t we just assume we’re able to make predictions? It seems like in every case, we must do so anyway. And we’re right in at least two cases (three if you grant me that emergent laws are distinct), of which your scenario only covers one: 1) There are laws (1.a there are no fundamental laws, but lawful behaviour emerges), 2) There are no laws, but we turn out right through luck. If we assume we’re in either of the two (three) situations, we are justified in making predictions, not merely if we assume we’re in situation 1).

But of course my point was that this is yet another illustration of the lawfulness of the bitstrings–the Law of Typical Equinumerosity lets me make predictions that can allow me to consistently win bets (if I can find enough suckers). You seemed to be arguing that if I’m right about patterns and randomness, I should be able to get rich at the casino (meaning of course the casino would be the “sucker” in that case). I was pointing out how you are right in a way, but have misunderstood my claim in another way.

Some assumptions available in the neighborhood of the ones you listed are:

  1. The world is lawful
  2. The world has seemed lawful so far, but only through luck–it’s about to stop being lawful.
  3. The world has seemed lawful so far, but only through luck–but lucky for us, it’s going to keep seeming lawful.

If I assume 1, I can make predictions. If I assume 3, I can make predictions–but then on my account of laws, 3 collapses into 1. (I’d say 3 is the assumption that turns out to be true in your bitstring world–and eo ipso 1 is true as well in that world.)

If I think 2 is an open, live possibility, then I cease to be able to make predictions without being incoherent. If I think 2 is possible, then I now have no reason to think that what I’ve observed so far has any bearing on what’s coming down the pipe.

BTW I think that we have to assume we’re not Boltzmann brains (even if at the same time we might acknowledge that it’s possible) if we intend to coherently make predictions about our environment.

I just balk at calling random bit strings lawful – there simply is no law that describes them. The complete state of this world is obtained through random coin tosses. There’s nothing else, no causal force, no ordering principle that goes into the creation of a state of this world. If that’s lawful, I don’t think I know what it means for anything to be lawful – much less to be lawless. An ordering of an approximate sort emerges, but it does so incidentally – it isn’t put in by hand, up front. I think this is a distinction I want to be able to make: between laws that hold in the analogue of bit string worlds that follow some pattern, that are reducible, like ‘01010101…’, that are ordering principles that have to be obeyed in the construction of states of these worlds, and laws that emerge, that hold despite nobody putting them in or constructing a state of the world with these laws in mind. If you collapse cases 3 and 1, you lose that distinction, but to me, the two worlds are fundamentally different: in the ‘regular’ world, there is an irreducible question about the origin of the regularity, while in the ‘random’ world, the apparent regularity emerges naturally.

Not if you live in a large enough universe, for to every Boltzmann brain’s observer moment in such a universe, there exists an identical observer moment of a ‘real’ observer living in a ‘real’ environment, such that if the Boltzmann brain’s observer moment vanishes, it nevertheless finds a continuation in the real observer. So that if a Boltzmann brain forms, is in some state s[sub]n[/sub], and then dissipates again, there also is a real observer that goes through a sequence of states …s[sub]n-2[/sub]s[sub]n-1[/sub]s[sub]n[/sub]s[sub]n+1[/sub]s[sub]n+2[/sub]…, such that the subjective experience would not end with the Boltzmann brain, but continue uninterrupted, ‘as if’ there were a real world out there.

So even if I’m now a Boltzmann brain, with overwhelming likelihood, now I’m not anymore, the Boltzmann brain not having cohesion for more than a few instants.

Though very interesting, the concept is certainly NOT seen through all the way. These speculations are very similar to some of my own and some of my friends’, but what I have never been able to do (or seen anyone else do) is produce specific examples starting from first principles and derive specific predictions from them. And starting from something as high-level as strings (for example) and studying event symmetry, by the way, doesn’t please me in the least.

But surely you would notice that the universe around you wasn’t so stable. The vast majority of stable Boltzmann brains would be surrounded by instability, rather than the stable universe we all observe around us. This is the problem that I’ve never been able to surmount.

In other words, it seems to me that if you construct the set of all Boltzmann brains (or anthropically self-selected narrative subsets of a lawless event-symmetric superset, if you wish), there are:

  1. ephemeral ones
  2. ones that are stable themselves, but whose surroundings are not
  3. those like us

3 is utterly suppressed statistically relative to 2 and 1. And while 2 is suppressed relative to 1, what you describe, the logical continuation somewhere else in the dust, is vastly more likely to resemble 2 rather than 3. Any way I look at it, you end up with 2.

You can keep the distinction. It’s a great one and I’m glad you’ve emphasized it.

Let’s call it a distinction between fundamental laws and emergent laws. (This is according to your own wording above.)

I’m saying to make predictions you must assume there are laws, whether they be emergent laws or fundamental laws. You might be wrong about this assumption, but you must make the assumption in order to make predictions without being incoherent.

But if I might be a Boltzmann Brain, then I don’t have the first clue how large the universe is. If the BB-possibility is “inductively innocent” on the condition that the universe is large enough, then since I can’t have any idea how large the universe is if I am a Boltzmann Brain, the possibility that I might be one is not “inductively innocent” for me. If I think it a live possibility, I’ve lost the ability to coherently predict.

The universe might be too large, too. There might be several versions of me corresponding to the Boltzmann brain that I might be, each of which is going to have radically different experiences after I (the B-Brain) disappear. If I consider this a live possibility, then once again my ability to coherently make predictions is thwarted. I can’t make coherent predictions if I think it’s genuinely possible that absolutely anything might happen with no measurable probability.

Socrates claimed he didn’t know anything and proved it, yet Plato considered him a great philosopher. Makes him more of an idiot savant.

OK, sorry then.

Well, the idea is that a lawful, regular world, just like a regular bit string, is much less complex in an algorithmic sense, or much more compressible, than a less regular world, or a world in which laws only hold up to a certain point. Its computation therefore corresponds to a comparatively small, and hence quite frequent, pattern – so you’re more likely to land in a lawful world.
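
As a crude illustration of that compressibility point (Python’s zlib standing in for an ideal compressor here, so the exact numbers are merely suggestive): a perfectly patterned string squeezes down to almost nothing, while random bytes don’t compress at all.

```python
import os
import zlib

regular = b'01' * 1000      # a perfectly patterned, 'lawful' world-history
chaotic = os.urandom(2000)  # an incompressible, 'lawless' one

print(len(zlib.compress(regular)))  # a few dozen bytes: highly compressible
print(len(zlib.compress(chaotic)))  # ~2000 bytes or more: no compression
```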

Can you be wrong about there being emergent laws? I’m not sure I can conceive of a system in which there aren’t some properties like the distribution of 1s and 0s that end up being regular – in part because you can map any random system to random bit strings.

I probably shouldn’t have talked about the ‘size’ of the universe at all – it’s not a very simple concept when you’ve abstracted beyond space and time as primitives. And even keeping naive ideas of some concrete notion of size, as long as time goes on forever, it’s effectively infinite from the perspective of Boltzmann brain formation.

I am having trouble reconciling the idea of, say, the universe consisting of random bit strings with the claim that algorithmically compressible worlds are more common. It would seem to me that short patterns would be infrequent, in the same way that numbers near zero make up an infinitesimal fraction of the set of all numbers (the same reason people are concerned about the small size of the cosmological constant, and about other fine-tunings required in QFT). If I consider random bit strings, I come to a different conclusion: that algorithmic complexity is far more common than algorithmic simplicity.

I have in mind something like: let our observation at some point be described by a bit string, x. The algorithmic probability of this string arising through random computation (i.e. by feeding some Turing machine programs generated by coin tosses) is m(x) = Σ2[sup]-|p|[/sup], where the sum is taken over all programs p that cause our Turing machine to halt and output x, and |.| denotes the length of some bit string. It can be shown that m(x) ~ 2[sup]-K(x)[/sup], where K denotes Kolmogorov complexity (and I use ~ to mean something like ‘in the general vicinity of’; formally, there are constants c, k such that c2[sup]-K(x)[/sup] < m(x) < k2[sup]-K(x)[/sup], I think). So it’s exponentially more likely for our observation to be due to some short program, i.e. that we find ourselves in a lawful world. For some related, more in-depth observations, this paper is quite useful and interesting.
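
If a toy example helps: here’s a made-up (decidedly non-universal, but self-delimiting) machine of my own devising, with one dedicated short program for strings of the form ‘01’ repeated, plus a literal ‘print these k bits verbatim’ program for everything else. Summing 2[sup]-|p|[/sup] over the programs that output x, the regular 8-bit string beats a random-looking 8-bit string by a factor of about 2[sup]12[/sup]:

```python
# Toy prefix-free machine (illustrative only, not universal):
#   '0' + '1'*n + '0'            -> outputs '01' * n      (n+2 program bits)
#   '1' + '1'*k + '0' + <k bits> -> outputs the k bits    (2k+2 program bits)

def m(x):
    """Sum 2**-|p| over all toy-machine programs p that output the string x."""
    total = 0.0
    half = len(x) // 2
    if len(x) % 2 == 0 and x == '01' * half:
        total += 2 ** -(half + 2)    # the short 'repeat' program exists for x
    total += 2 ** -(2 * len(x) + 2)  # the literal program is always available
    return total

print(m('01010101'))  # ~2**-6: dominated by the 6-bit repeat program
print(m('00111010'))  # exactly 2**-18: only the 18-bit literal prints it
```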

I am having trouble following the details of the derivation (I have no training in algorithmic complexity theory), but it seems that we might be working from different premises about the set we are sampling. It seems intuitive to me that if you sample the set of all possible computer programs, you are vastly more likely to sample a complex one than a simple one (for a start, a program’s length is vastly more likely to be near infinity than near zero, to use lazy terminology analogous to sampling the real numbers, which I hope you understand). I still don’t see why my reasoning in my last post is wrong. Maybe you can help me get a better intuitive understanding? I suppose it’s possible that you can show something like this: infinitely long computer programs are infinitely more likely to be factorable, in some sense, into simpler programs (again being lazy in my description, just hoping you sort of get what I’m saying)…

BTW, HMHW, I don’t have the patience, time, and probably not the training to go through that whole paper you linked to, but I wanted to say that it is really fascinating! It’s really the sort of thing I’ve been thinking about and have long wondered why it hasn’t been studied more as a TOE. It’s amazing that such interesting work is hiding out in the open, not getting much attention (at least for me to be aware of it out in physics land, maybe it’s well known in comp-sci land). Anyways, if you have any other links I’d be interested. Great stuff.

Fuck.

Just forget I said anything.

Anything, ever.

In algorithmic information theory, one typically works with programs drawn from prefix-free sets, i.e. if x is a valid program, no string of the form xy can be; this ensures that programs are self-delimiting – if a machine has been given x, it knows the input is complete, so no special ‘stop’ character is needed. This requirement is what makes things come out as actual probabilities, via Kraft’s inequality.

Perhaps this helps intuition: of the one-bit strings 0 and 1, at most one can be a valid program; otherwise, they would be the only valid programs. If one is a valid program (say 0), the set of all possible remaining valid programs gets halved, as only those strings starting with 1 can now be valid programs – i.e. 01, 00 can’t be valid; 10, 11 may be, on the next level of complexity. If neither 0 nor 1 is valid, all of 00, 01, 10, 11 may be; if one of them is, 1/4 of all possible remaining strings is removed from the set of valid programs. So the set of all possible programs is thinned out more and more with each valid program. If, for instance, 0 is a valid program, then the probability of picking a simple program is already 50%, even if all other programs are horribly complex messes.
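
In code, with a small made-up prefix-free set of my own: each valid program of length L claims a 2[sup]-L[/sup] chunk of the coin-toss probability, and Kraft’s inequality guarantees the chunks sum to at most 1; note how the single program ‘0’ already claims half.

```python
programs = ['0', '10', '110', '1110']  # unary-style codes; none is a prefix of another
weights = {p: 2.0 ** -len(p) for p in programs}

print(weights)                # {'0': 0.5, '10': 0.25, '110': 0.125, '1110': 0.0625}
print(sum(weights.values()))  # 0.9375 <= 1, as Kraft's inequality demands
```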

You can view this as a measure on Cantor space, interpreted as the set of all infinite bit strings; all those bit strings whose prefix forms a valid program (an infinite set if there are any valid programs) get ‘lumped together’. Alternatively, think of it as the pruning of a complete binary tree: at any node, you choose randomly which way to go down; if you have hit on a valid program, you’ll end on a leaf, and all the bit strings formed by routes that would extend further don’t enter.
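
The tree picture can even be simulated directly: flip coins to walk down the binary tree and stop the moment the path spells a valid program. With a complete prefix-free set (Kraft sum exactly 1, so the walk halts with probability 1; again just a toy set of my own), the hit frequencies reproduce the 2[sup]-length[/sup] measure.

```python
import random
from collections import Counter

PROGRAMS = {'0', '10', '110', '111'}  # complete prefix-free set: Kraft sum = 1

def sample_program(rng):
    path = ''
    while path not in PROGRAMS:  # descend the tree until a leaf is reached
        path += rng.choice('01')
    return path

rng = random.Random(42)
N = 100_000
freq = Counter(sample_program(rng) for _ in range(N))
for p in sorted(PROGRAMS):
    print(p, freq[p] / N, 2.0 ** -len(p))  # empirical vs. exact measure
```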

It’s indeed a somewhat underestimated line of research, I think; Tegmark’s work is probably the only thing I know of that’s ever made it into the mainstream.

However, there’s an earlier paper by Schmidhuber (who generally comes out with fantastically creative stuff, like explaining art, humour, and science in terms of data compression) that contains the roots to many of his ideas in this regard, if I remember correctly. In a more general vein, Scott Aaronson recently put out an interesting paper about applications of computational complexity to philosophy, which is also rather long, but quite easy to read if ‘P vs. NP’ rings any bell at all. (It’s also got a section on Hume’s problem of induction, so perhaps this one is interesting for Frylock, as well!)

A bit of an overview of such ‘ensemble theories of everything’ is provided by Russell K. Standish in his book Theory of Nothing, where he makes the delightful point (also made in part by Schmidhuber, I think) that nothing and everything are really the same, information-content-wise – both contain absolutely no information, so we really shouldn’t marvel at how something can come from nothing.

:confused:

I’m having a hard time seeing how this premise is motivated. Why would programs (in the ensemble that constitute our universe) be expected to be self-delimiting?

I am a big fan of Tegmark, but it’s great to see this other work. Ensemble TOEs are the only TOEs I’ve found philosophically satisfying. I think I’ll pick up a copy of the Standish book. Thanks!

There’s no need to motivate this – one can build universal prefix-free machines, so it works just as well as any other computational paradigm, but there are technical difficulties in defining a suitable measure in this case (though Schmidhuber does this somewhere, I believe, working with Turing machines whose alphabet contains a special ‘stop’ character). But thanks to universality, the conclusion is independent of the kind of machine used.

You’re welcome! :slight_smile:

Heh, sorry, was in a mood. The paper reminds me that, as in every endeavor I’ve ever undertaken, I can appreciate the awesome but I cannot create the awesome. I know you didn’t ask for that but anyway, that’s what was happening in my post.

Ah. I can relate, I think – I normally tell myself that well, it’s a good thing that there are people that come up with stuff like that, and at least I’m around to appreciate their work, but sometimes, that’s quite hollow comfort. But maybe, reciprocally to those most ignorant tending to be the most confident in their skill and knowledge, intellectual insecurity and self-doubt signals at least the beginning of understanding! :wink: