Are the physical possibilities the same as the logical possibilities after all?

I certainly agree, but without more guidance from Frylock, I can’t think of any definition more consistent with his apparent position.

Well, it’s at least common enough to find numerous examples quickly, e.g. within the simplest rule systems you might come up with. Can we agree on that, at least?

Furthermore, I think that Wolfram has made a strong case that computational universality is indeed a very common phenomenon, but I recognize that the case isn’t airtight, owing to the fact that he obviously can’t search through all possible rule systems (not to mention the difficulties involved in actually proving universality).

(Also, just to clarify: you’re probably just being sloppy in your terminology, but for any onlookers – if there are any left! – computability and computational universality are two very different things; roughly, something is computable if there exists a computationally universal system that produces it, and conversely, a system is computationally universal if it can compute everything that is computable.)

Perhaps you’re right, and I over-reacted; however, I think you mistook me for stating much more certainly what was intended as just offering another perspective.

The rule is the software – or perhaps more correctly, there’s no software at all. Basically, what we view as ‘a computer executing software’ is just one universal system emulating another (which is not necessarily universal).

The way I see it, you can always subsume the initial conditions into the definition of the rule – i.e. for any rule generating some behaviour from a certain initial state, you can find a TM reproducing that behaviour from no input at all. So different initial conditions for the same rule can be viewed as different systems, which can of course be mapped to one another. The initial conditions thus don’t actually matter.
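
To make that concrete, here’s a minimal Python sketch (the particular rule is just a made-up toy, not anything from Life): the same behaviour can be produced either by feeding initial conditions to a rule, or by a new ‘rule’ with those initial conditions baked in, which takes no input at all.

[code]
from functools import partial

def rule_step(state):
    # A toy rule: append the running sum to the state.
    return state + [sum(state)]

def evolve(rule, initial_state, steps):
    # Evolve initial_state under rule for the given number of steps.
    state = initial_state
    for _ in range(steps):
        state = rule(state)
    return state

# Rule plus explicit initial conditions...
print(evolve(rule_step, [1, 1], 3))             # [1, 1, 2, 4, 8]

# ...versus a 'new system' with the initial conditions subsumed into it,
# which needs no input at all:
system_with_baked_in_start = partial(evolve, rule_step, [1, 1])
print(system_with_baked_in_start(3))            # [1, 1, 2, 4, 8]
[/code]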

I certainly did have a brainfart there, but I’m not even sure what kind…

We can agree on that. But merely being able to find some examples in a near-infinite set is not impressive. There is extreme selection bias in such cases. One can also find numerous counterexamples quickly. And in more complicated systems (i.e. non-automata, say physics), the situation is much worse. If, for example, the universe collapses to a black hole if you don’t tune your constants right, you surely can’t get self-aware structures…

Maybe we are talking past each other here, because I am pretty confused about what your point really is. Consider Conway’s Game of Life. It’s a set of rules. For the sake of discussion, let’s suppose the initial conditions are: an empty grid. This system time-evolves to: an empty grid. Surely this perpetually empty grid cannot support self-aware structures. What can this empty grid be mapped to that can support self-aware structures? Now let’s suppose the initial conditions are:

[a vertical row of three live cells – a blinker]

This system merely flippy-flops back and forth between vertical 3 cells and horizontal 3 cells. Again, this does not constitute self-aware structures. What can this be mapped to that does?

Etc. Etc. The vast majority of initial conditions yield trivial structures, like fixed blocks or various flippy-floppies or floaters. It takes an extremely fine-tuned set of initial conditions to yield a UTM in Conway’s Game of Life. People have done it. It takes a lot of work.
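
For anyone who wants to check the blinker behaviour themselves, here’s a minimal sketch of the Life update rule (live cells represented as a set of coordinates); the three-cell row just oscillates between vertical and horizontal, and feeding in a random fill of an NxN patch instead will typically settle into just such blocks, blinkers and gliders.

[code]
from itertools import product

def life_step(live):
    # One generation of Conway's Game of Life; 'live' is a set of (x, y) cells.
    counts = {}
    for (x, y) in live:
        for dx, dy in product((-1, 0, 1), repeat=2):
            if (dx, dy) != (0, 0):
                cell = (x + dx, y + dy)
                counts[cell] = counts.get(cell, 0) + 1
    # Birth on exactly 3 live neighbours; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

blinker = {(0, -1), (0, 0), (0, 1)}       # vertical row of three live cells
print(life_step(blinker))                 # {(-1, 0), (0, 0), (1, 0)} -- horizontal
print(life_step(life_step(blinker)))      # back to the vertical row
[/code]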

Of course, these are very atypical examples – grabbing from a bag of possible initial conditions, coming up with those is infinitesimally likely. Typically, you’ll get something more like a random distribution of on- and off-cells, which, in the limit of an infinitely large grid, will contain structures necessary to carry out any kind of computation.

But that’s not the point I’m trying to make. Key is that even for a ‘boring’ evolution as the ones you pointed out, there exists a Turing machine taking it as an input and returning our universe, or anything equally complex. You can map the set of possible evolutions uniquely onto itself (or onto any other set of evolutions of a universal system) such that, for instance, an evolution of great apparent complexity gets mapped to a ‘boring’ one, but some boring one gets mapped to something very complex-looking.

The thing is that this is possible only if the system (or systems) in question is universal, and that’s both a necessary and sufficient condition. If you have a system that yields only boring evolutions on one side, no one-to-one and onto map to the set of evolutions of a universal system is possible.

It’s the complexity of the rule that matters, not the complexity of the picture it produces; that picture is just one possible interpretation, in a manner of speaking, just like any Turing machine acting on the picture yields another interpretation.

Think about some universal system carrying out a computation – to shake intuition up a bit, picture a ballistic computer – not something built to calculate the trajectories of military ordnance, but rather, a computer made out of gravitating bodies, ‘orbiting’ each other. It’s possible to build a universal computer in such a way. To compute something, you arrange a certain configuration of the bodies, and then let them fly (or fall, rather); once some predefined condition is achieved (the ‘halting state’), you can read off the result of your computation.

Now, imagine somebody’s gone through the trouble of doing this, and then shows you a film of the computer in action. Afterwards, he asks: “What was computed?” – how do you answer? Well, you can’t – it could have been anything; literally. It all depends on what the elements of the initial state are supposed to symbolize, what kind of movement is to be taken as what kind of a computational step, and what the output maps to, and how – it depends, in short, on the interpretation. And this interpretation you can choose freely, meaning that you can map the ballistic computer’s evolution to any other universal system’s, and hence, to any computation that can be carried out – and if (and only if) the ballistic computer is universal, you can consistently keep applying this map and obtain sensible results for any other possible ‘computation’ that might be shown to you.

You could have chosen any other map, any other interpretation; or you could have been shown any other evolution of the ballistic computer, and given it the same interpretation. It doesn’t matter what the computer ‘actually’ does; only that it is universal matters. In the same way, what Life does, and thus in particular what initial conditions you choose, doesn’t matter, either.

Actually, those are not atypical examples. Infinitesimally likely? Have you ever played with Life? The vast majority of initial conditions quickly converge to such examples.

I’m not buying it. Give me an example of an isomorphism between a blinker and anything aperiodic. The mere fact that a Turing machine can take something simple as input and generate something complex is a total non-sequitur – that does not a duality make.

As initial conditions, they’re atypical, because there are obviously vastly more random distributions available.

A Turing machine taking the elements of one set as an input and returning the elements of another set is an isomorphism between those sets. I’m not talking about any given evolution, but about all possible ones. Are you saying I can’t find one such that the blinker is mapped to something complex?

The point I was making by listing simple examples and ending with “etc” was that the vast majority of initial conditions quickly reduce to trivially non-complex behavior. You can test this yourself by randomly filling an NxN grid with cells and evolving the states…

You can surely find a map. But you can’t find an isomorphism. Isomorphisms are quite restrictive. It is impossible to construct an isomorphism between a blinker and something aperiodic. A Turing machine taking the elements of one set as an input and returning the elements of another set is not an isomorphism between those sets.

This ceases to be true in the limit of infinitely many cells. But anyway, as I said, it’s not the point I was making.

Any bijection between two sets (that lack any extra structure) is an isomorphism. I think you’re missing that I’m not talking about a mapping between individual evolutions, but rather, between the sets of all possible evolutions. (In any case, finding any kind of (bijective) map suffices; I only used the term isomorphism because you brought it up.)

Well sure it is. You can just write it as a ‘lookup table’, on one side the elements of the one set, on the other, elements of the other. Surely, you agree that two sets that can be brought into such correspondence are isomorphic, and that this correspondence is an isomorphism between the two?

An isomorphism is not merely a bijection. It is a structure-preserving bijection. I’m not sure what you mean by “that lack any extra structure.”

You aren’t using ‘isomorphism’ correctly then. In any case I don’t understand what you mean by a mapping between the sets of all possible evolutions. The set of all possible evolutions of course includes complexity if it is Turing complete. But I don’t see how this is relevant to the anthropic principle. Unless your argument is that the MWI implies that for any collection of rules all ‘histories’ are realized. Is that your argument? Then your argument would make a whole lot more sense. But it still wouldn’t be correct – the MWI is not applicable to any arbitrary collection of rules.

If you consider the set of all possible evolutions, then maybe. It seems perfectly reasonable to me that in many cases you could form an endomorphism but not an isomorphism; Turing completeness is a necessary but not sufficient condition for what you are describing.

Basically, just that it is a set, without any further algebraic structure – i.e. the only structure it has is its elements. For sets, it’s indeed the case that any bijection between them is an isomorphism.

Well, if two evolutions are of the same complexity whenever there exists a Turing machine translating between them (which you have previously agreed to), then, since there exists a Turing machine taking, for instance, the ‘boring’ empty-grid evolution to the evolution corresponding to our universe, those must be of equal complexity. For instance, if we enumerate the evolutions of Life, starting with 0 as denoting the empty grid, 1 as denoting the evolution resulting from the upper left cell being active, etc., and writing U for an evolution corresponding to our universe, we have the following isomorphism:

0 -> U
U -> 0
1 -> 1
2 -> 2
.
.
.
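
Spelled out as a sketch (with evolutions stood in for by their indices in the enumeration, and U by an arbitrary made-up index), that map is just a bijection swapping 0 and U and fixing everything else:

[code]
U = 42_000_000_000  # made-up stand-in index for the evolution corresponding to our universe

def f(n):
    # Bijection on the enumerated evolutions: swap 0 and U, leave the rest fixed.
    if n == 0:
        return U
    if n == U:
        return 0
    return n

# f is its own inverse, hence one-to-one and onto:
assert f(0) == U and f(U) == 0
assert all(f(f(n)) == n for n in range(1000))
[/code]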

Really, think about the ballistic computer example. There’s no way to tell, from watching its evolution, that this is what is really being computed – it could compute anything, from the square root of two to the first place in the decimal expansion of pi that your birthdate occurs to the likelihood of observing supersymmetry at the LHC. It’s all in the interpretation.

The pictures Life generates are an intuition pump – they lead you to think that in some sense, this is what ‘really happens’. But they’re just an interpretation, as good as any other!

An endomorphism? Did you mean to say homomorphism? Anyway, no: that for any evolution of two universal systems U, U’ with initial conditions x, x’ I can find a bijective map f such that f(U(x)) = U’(x’) is precisely equivalent to the definition of Turing completeness.

I agree with the above. I don’t see how it is significant that one can form bijections between two sets of all evolutions. Although you seem to go back and forth between referring to initial conditions or ‘sets of histories.’ Very hard to follow.

I didn’t agree to this. And can you back up and answer my questions in the last post? I still do not understand what you mean when you say:

I do not see how the above or anything else in the last post is relevant to the anthropic principle. Nor do I agree or understand how the “empty-grid” evolution can be mapped isomorphically to the evolution corresponding to our universe. Again, how about the blinker? You still haven’t addressed that simple question…

Believe me, I’m familiar with dualities. The ballistic computer can perform some less than intuitive calculations. But can a blinker? NO! Can an “empty-grid”? NO!

Perhaps if you provided a cite I could figure out what you are saying. This doesn’t seem plausible to me.

Well, since every computable system is deterministic, a history is determined by the initial conditions; one is just another name for the other.

You did agree to my saying this:

Just what I said – if it doesn’t take any additional information to translate between one and the other, or recover one from the other, then both must be of equal information content, or complexity.

But I’ve explicitly provided you with the map!

But they can be mapped to these calculations. Really, I don’t know how else to express myself – it’s just about the interpretation. Perhaps think about the ballistic computer running a simulation of the Game of Life; simulating the empty grid evolution might entail quite a complicated dance of ‘planets’ around one another, or it might mean no interaction at all – but the same goes the other way around, if you think about the Game of Life simulating the ballistic computer. The empty grid’s evolution, fed through some interpreter (say, an ordinary computer displaying the evolution of the ballistic computer on its monitor), might well cause that interpreter to display very complicated movements!

It’s simple. Let’s think about two computers, one a Mac and one a PC. The Mac can execute some program – this is its evolution (with the program as initial condition). The PC can emulate the Mac (by computational universality). In particular, it can emulate the Mac executing its program; so, there exists a PC evolution equivalent to the Mac evolution, for any evolution it could undergo. The converse is also true – for any PC evolution, there is an evolution of the Mac emulating the PC undergoing this evolution. Thus, the evolutions of the PC and the Mac can be brought into one to one and onto correspondence.
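
The same idea in miniature, with Python standing in for the ‘PC’ and a made-up two-instruction toy machine standing in for the ‘Mac’: for every program the toy machine could run, there is an evolution of the interpreter emulating it, and vice versa.

[code]
def run_toy_machine(program):
    # Emulate a toy machine with one accumulator and two instructions:
    # ('add', k) adds k to the accumulator, ('neg',) negates it.
    # The machine's 'evolution' is the list of successive accumulator states.
    acc, history = 0, [0]
    for op in program:
        if op[0] == 'add':
            acc += op[1]
        elif op[0] == 'neg':
            acc = -acc
        history.append(acc)
    return history

# One particular 'Mac' evolution, realized as an evolution of the emulating 'PC':
print(run_toy_machine([('add', 3), ('neg',), ('add', 10)]))   # [0, 3, -3, 7]
[/code]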

You missed my point: “sets of histories” as opposed to “histories.” You are not being consistent. Sometimes you seem to refer to isomorphisms between histories, other times between “the set of all histories.”

Where?

No you haven’t. As an axiom of your “map” you defined the “empty grid evolution” as isomorphic to the “evolution corresponding to our universe”. I have no idea how on earth those two are isomorphic to each other. Additionally, you continue to avoid explaining simpler examples, such as how a blinker can be isomorphic to something aperiodic.

The empty grid’s evolution consists of operating on the same state over and over again. The state never changes. The operations, and the results of the operations never change. I do not see how this could possibly be isomorphic to operations on states that change over time.

Yes, you’ve just described a perfectly reasonable duality made possible by very specific software implementations. But the mere existence of the possibility of such emulation does not imply that every such initial condition and its evolution is equivalent to any other.

OK, an isomorphism between two sets of histories provides a map that pairs one history from one set of histories with one history from the other set of histories. That clearer?

This post.

I’m not sure how to make sense of this. An isomorphism between two sets is just a map pairing each element of the one set with an element of the other. The isomorphism is not between individual evolutions, but between the sets of evolutions, and maps evolutions within those sets onto one another.

I’m not avoiding explaining this; it’s just that the explanation is the same one I already gave!

Do you think that the state of your computer simulating the empty grid never changes? For starters, it’ll have to keep track of the number of generations that have been simulated; this alone can be realized in an arbitrarily complicated fashion.
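
A minimal illustration of that point (with the generation counter as just one arbitrary choice of bookkeeping): the simulated grid never changes, but the state of the simulating system does.

[code]
grid = set()         # the empty grid: no live cells
generation = 0

for _ in range(10):
    grid = set()     # applying the Life rule to the empty grid yields the empty grid again
    generation += 1  # ...yet the simulator's own state keeps changing

print(grid, generation)   # set() 10
[/code]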

No. That duality is possible because both systems are (or at least approximate) universal computers. That this duality – and a similar duality to an arbitrary universal Turing machine – is possible is exactly what ‘computational universality’ means.

That’s exactly what it means. Think of a code – a simple substitution cipher, say, replacing each letter with its number in the alphabet. Surely, you’ll agree that ‘8 1 12 12 15’ and ‘hallo’ convey the same information. Also, it’s plain to see that everything you can express using letters, you can thus express using numbers. This is a condition analogous to computational universality (provided you can express anything expressible using letters) – let’s call it expressional universality.
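
As a sketch of that cipher (restricted to lowercase letters for simplicity):

[code]
def encode(text):
    # Substitution cipher: each letter becomes its position in the alphabet.
    return ' '.join(str(ord(c) - ord('a') + 1) for c in text)

def decode(numbers):
    # The inverse map: recover the letters from the numbers.
    return ''.join(chr(int(n) + ord('a') - 1) for n in numbers.split())

print(encode('hallo'))           # 8 1 12 12 15
print(decode('8 1 12 12 15'))    # hallo
[/code]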

However, you could devise a different code – let’s say one in which each number stands for an English word according to its place in Merriam-Webster’s or whatever reference you care to choose. I’m not going to bother producing an exact example, but still, you’ll probably agree that a string of such numbers contains the same information as the sentence it encodes.

You can continue playing this game – say, enumerate all possible paragraphs of some fixed length, or come up with something more clever; it’s non-trivial to show that your code is expressionally universal, but in general, you’ll be able to find one in which, say, 1 stands for the complete works of Shakespeare, while some unimaginably huge string of numbers encodes the word ‘and’. Not particularly wieldy (that word doesn’t exist on its own, does it?), but possible in principle.
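
In the same spirit, here is a sketch of such an arbitrary code (the codebook entries are obviously made up, not an actual enumeration of any dictionary): as long as the assignment is invertible, the round trip loses nothing, however lopsided the symbols look.

[code]
# Made-up codebook: a tiny symbol stands for an arbitrarily long text,
# while a huge number stands for a three-letter word.
codebook = {
    1: "the complete works of Shakespeare (imagine the full text here)",
    982_347_501_234_987: "and",
}
inverse = {text: symbol for symbol, text in codebook.items()}

message = codebook[1]
encoded = inverse[message]       # -> 1
decoded = codebook[encoded]      # -> the original message, nothing lost
assert decoded == message
[/code]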

Now, if you agree that in the previous examples, the coded messages have the same information content as the ‘uncoded’ ones (which are actually coded as well, just in a code we are so intimately familiar with as to not notice it), I don’t think you can disagree with that in the last example – after all, every code is just an assignment of symbols to other symbols; it doesn’t matter one bit how ‘complex’ we think those symbols are.

And the same goes for evolutions of universal systems – which can, after all, be represented in terms of symbols, as well. So just as much as 1 can stand for Shakespeare’s works in their entirety, the empty grid (or blinker, if you prefer) evolution can stand for the evolution of our universe. It’s just a matter of the code – or, as I previously put it, of interpretation.
Anyway, I think I’ve spent enough time on this, at least in this thread, and more than likely hijacked it all to hell with it. If there’s still some confusion or open questions, I’ll try my best to answer elsewhere (feel free to PM me or something), but honestly, there’s nothing terribly deep here, and I’m not trying to pull a fast one on you. Perhaps this is one of the things one has to just ‘get used to’, as von Neumann said… Anyway, I’ll bow out of this thread – perhaps there is an infinitesimal chance it can yet get back on track, or else at least die gracefully. :wink:

Not really, because at times in this thread you have claimed that one history (for example the “empty grid” history) is equivalent to any other. But then other times you wiggle out of the ensuing contradictions by claiming that it’s really only the “set of all histories” that is equivalent.

I responded “yes”, and I apologize for not being clearer, because I did not at all agree with all of that paragraph. I agree that “there’s an issue here about how to define complexity” and that “there is a Turing machine such that, if it is given one of the bit-strings on its input tape, it will produce the other on its output tape”, but I do not at all agree with where you go with that premise. I do not agree (and I think it’s trivially untrue) that the mere existence of the ability to compute universally allows one to declare that the sets of outputs from any two TC systems are equivalent.

Right, and all you have done is provide a trivial mapping of cells in the empty grid to cells in a non-empty grid. But that mapping is only valid for the one set. The next set in the evolution will require a different mapping. So yes, for any given ‘time slice’ you can find an isomorphism between cells on two NxN grids, but that is totally different from declaring the set of all evolutions to be equivalent. That would require that the mapping hold for all evolutions. No such mapping exists in general. It’s true that there are very specific cases called ‘dualities’ in physics, and they are highly interesting. In these cases the two “rule sets” are isomorphic to each other. You are getting tripped up because you are confusing TC-ness with what is being computed. For example, Life is Turing Complete, but only in the sense that you can elaborately set up initial conditions to create a UTM. The empty grid in Life is not TC.

Wow. You are really reaching. We are not talking about the state of my computer simulating Life. We are talking about the rules of Life. The rules of Life are simple and non-arbitrary, and do not require keeping track of the number of generations simulated. The rules of Life are only TC because they are capable of universal computation, not because its computations are always equivalent to any other’s.

This is not my understanding, not without an incorrect definition of ‘duality’. The ability to simulate another computer does not imply “equivalence to it” in the manner you seem to think.

I’m certainly with you so far… nothing here is foreign to me… I’m used to thinking this way…

yep…

This is where you go wrong. First of all, in your previous paragraphs where I agreed with you, I did so only with a certain implicit understanding, one that you failed to address – that is, the information content after the mapping [complete works of Shakespeare] -> [the number 1] is only retained if the “decryption mechanism’s map” (our brains, for example) is remapped likewise. For example, if I were to say “1” to you, it would evoke an unimaginably complex set of reactions and associations, because the symbol decryption in your brain has essentially been “rotated” in such a way that the simple symbol “1” actually “means” something very complicated. The symbol “1” would never make sense to anyone unless they had read (and could retain with perfect memory) the complete works of Shakespeare – or had been born with the proper decryption mechanism in place.

It is very important that you cannot view the symbol mapping that you are describing in isolation – and likewise you can’t ever say that “the empty grid” is equivalent to “the universe.” You can, as an observer, form a language in which you use the empty grid’s evolution as a descriptor for the universe’s evolution. But this is different from calling the empty grid’s evolution isomorphic to the universe’s evolution. You can map individual time slices between them, but you cannot isomorphically map the empty grid’s evolution to the evolution of the universe. The information has to be stored somewhere.