A future civilisation might possess the computational capacity of billions of Matrioshka brains, and could routinely simulate both the past and the future in exquisite detail, with as much effort as we expend simulating the universe of GoT or Jane Austen in our own human minds.
To predict their own future, such entities would need to model that future in various amounts of detail; and to understand the future, a certain amount of modelling of the past would be a useful or even essential exercise. Such simulations might last mere instants, or an arbitrary number of years, and model just a small area, or entire planets.
In this post, I was talking about an incremental algorithm versus some more direct and less computationally expensive operation. The example I cited was multiplication versus addition, specifically multiplication by a factor of two. You could compute 2x with x addition operations, or you could use a single bit-shift / multiplication operation provided by a hardware shortcut, a times table, or what have you.
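To make the contrast concrete, here's a minimal Rust sketch (the function names and the test value 21 are mine, purely illustrative): doubling by repeated addition versus a single shift.

```rust
// Incremental: compute 2x by adding 2, x times - x addition operations.
fn double_incremental(x: u64) -> u64 {
    let mut total = 0;
    for _ in 0..x {
        total += 2;
    }
    total
}

// Direct: one bit-shift operation, the hardware shortcut.
fn double_direct(x: u64) -> u64 {
    x << 1
}

fn main() {
    // Both produce the same result; only the amount of work differs.
    println!("{} {}", double_incremental(21), double_direct(21));
}
```

The outputs are indistinguishable; only the cost of getting there differs, which is the whole point of the distinction.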
Alas, the universe is not wholly fractal and symmetric. If you had an observer outside the white box, you could not get away with drawing only the things in the white box. Which was my point.
I’m not sure what the relevance is, since I’m assuming a more or less deterministic simulation, so one thread only. You seem to be assigning zero cost to computing the plot between decision points, which is of course not true. Say someone started at G. You would not have to compute chapters IV - IX, but you would still need to compute I - III.
You’ve basically drawn a control flow graph here, and those are not zero-cost to construct. The simulation I mentioned was constructing the necessary chapters, which, as any writer can tell you, is also not zero-cost. That’s where all the cost comes in - I was assuming basically zero cost in constructing the plot line, since it is predetermined.
You’ll have to explain this to me, it doesn’t make sense. If the white box is the only part simulated there are no observers outside of it, but still within the simulation.
Sorry. Where I was going is that the simulators don’t have to simulate things that none of the simulated entities look at, until they do. Kind of like in Heinlein’s “They”, where the rulers don’t build Paris until the main character decides to go there. (Old technology, for sure.)
So, if the white box is kind of an event horizon, you don’t need to simulate anything outside the box. But if there is an entity outside the box you need to simulate more than one piece.
Okay, long geeky stuff coming up - ignore if you wish to, I won’t mind.
I have written fault simulators, which are gate-level simulations of circuits with various faults injected - for instance, the input of a NAND gate stuck-at-1, which means it is always a 1 no matter what is driving the line. The idea is that you run a test and see whether the change in behavior caused by the fault propagates to an output. If it does, the fault is detected and you can drop it.
Now the naive way of doing this is to insert a fault, run the test until it gets detected, insert another, and so on. This takes forever. So what we do is to always simulate the fault-free circuit, and only simulate the parts of a faulty circuit that could be different when the fault is inserted. This saves tons of time and memory.
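For the curious, a toy Rust sketch of the basic idea (the three-gate circuit and the names are my own inventions, nothing like a real simulator's scale): evaluate the fault-free circuit, evaluate it with the fault injected, and see whether the difference reaches the output.

```rust
fn nand(a: bool, b: bool) -> bool {
    !(a && b)
}

// A toy fault-free circuit: out = NAND(NAND(a, b), c)
fn circuit(a: bool, b: bool, c: bool) -> bool {
    nand(nand(a, b), c)
}

// The same circuit with the first NAND's input `a` stuck-at-1:
// it sees a 1 no matter what is actually driving the line.
fn circuit_a_stuck_at_1(_a: bool, b: bool, c: bool) -> bool {
    nand(nand(true, b), c)
}

fn main() {
    // Apply a test pattern and compare fault-free vs. faulty behavior.
    let (a, b, c) = (false, true, true);
    let good = circuit(a, b, c);
    let faulty = circuit_a_stuck_at_1(a, b, c);
    if good != faulty {
        println!("fault detected by pattern ({}, {}, {})", a, b, c);
    } else {
        println!("fault not detected by this pattern");
    }
}
```

The time savings in a real simulator come from only re-evaluating the cone of logic the fault can actually influence, rather than the whole circuit per fault.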
More like what we are talking about, other simulators figure out which parts of a circuit might change for a particular input, and don’t bother to simulate the part of the circuit that doesn’t change. So, if stars didn’t vary, for instance, you could simulate the star statically and save a lot of computing power. If you want the star to go supernova, then you can start to simulate its new image as it propagates through the universe.
Obviously this isn’t happening, because when we focused Hubble on stars while searching for planets, we found that they varied as the planets passed in front of them. That’s what I meant when I said that it takes more power to figure out who is looking and change the image only when it can be observed than to just simulate the star and planets even when no one is looking. (Probably a fairly cheap simulation if no one is close.)
Okay, no, the white box is not an event horizon. It represents the results of the simulation. You had previously written, “don’t tell me you won’t simulate the stuff no one is looking at”, and the white box represents what is being looked at by whoever is running the simulation. If we were talking about rendering computed tomography, the “white box” would be analogous to the video display buffer.
You will notice even in my animation that one of the line segments (bottom of white box) lies partially within and partially without the white bounding box. In the case of CT data, perhaps a polygon mesh is used and some of the draw calls specify points slightly outside of the display area.
We have solved that problem, with the invention of cinema! Watch, as narrator says, “he went to Paris”, and then lo, behold Paris!
It is as I wrote before, you seem to be operating under the assumption that a simulation must necessarily require relatively expensive calculations. This is a practical assumption but it is not metaphysically sound.
Let’s say your fault simulator produces an output like this,
Gate 1: OK
Gate 2: OK
...
Gate 32: OK
Now examine the following program, which simulates your program as it checks that particular circuit,
fn main() {
    for n in 1..33 {
        println!("Gate {}: OK", n);
    }
}
Metaphysically speaking, all knowledge you possess is derived from the senses; if reality is subjective, it is predicated on personal knowledge, thus a simulation of reality need only provide the things which you actually perceive. Your fault-checking program is similar to a sensory organ: the only thing that counts is its output. And in all cases there are much simpler ways to produce the same output without expensive incremental logic. No matter what computations might appear necessary in practice, metaphysically speaking there is always an Oracle which produces the same output with no computations whatsoever. The result of an algorithm is always indistinguishable from the result without the algorithm.
The obvious criticism is the one you raised earlier,
It is distinctly possible that a series of random guesses should produce the text of any chapter of any book that ever existed or might exist. It may not be plausible but you cannot assign a definite probability to an individual event drawn from an infinite pool of potentialities. As they say, a monkey hitting random keys on a typewriter could, with infinite time, turn out Shakespeare. A hypothetical universe that experiences random changes in state rather than adhering to any system of physics could, given infinite time, simulate your entire life experience with exact precision. It could also eventually simulate a person or entity observing a simulation of your entire life experience.
In your examples you can easily generate the rest of the figure if needed without further simulation. Which is more efficient, no matter how expensive the operations are, and I’m not assuming they are expensive - just that there are a lot of them. Computing logical ANDs, ORs and NOTs is hardly expensive.
In logic simulation this is called implication. Say you have a chain of AND gates ending at an output, with massive fan-in of circuitry into them. If the circuitry in the fan-in cone doesn’t go to any other outputs, one 0 at the first AND gate can be propagated to the output without having to simulate anything else in the circuit. That’s kind of like assuming a single observer. If it fans out to multiple outputs, you have to simulate more, which is like simulating multiple observers. I’m not worrying about the people doing the simulation here, but about accurately simulating a world with multiple observers, who would get confused if much of the sky blanked out just because the simulated people the simulation writers care about can’t see it.
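A tiny Rust sketch of the implication idea (toy code, my own naming): a controlling 0 at the head of the AND chain determines the output, so the fan-in cone is never evaluated. Rust’s short-circuiting `&&` plays the role of implication here.

```rust
// Stands in for a large block of circuitry we would rather not simulate.
fn expensive_fan_in_cone() -> bool {
    println!("simulating the fan-in cone...");
    true
}

// If first_input is 0 (false), the AND chain's output is implied to be 0
// and the cone is never evaluated, thanks to short-circuit evaluation.
fn and_chain_output(first_input: bool) -> bool {
    first_input && expensive_fan_in_cone()
}

fn main() {
    println!("output: {}", and_chain_output(false)); // cone skipped
    println!("output: {}", and_chain_output(true));  // cone simulated
}
```

If the cone fanned out to other outputs, you could no longer skip it - the analogue of having to satisfy multiple observers.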
My use of event horizon was as an analogy, since your thing works if the observers in the box can’t see outside it. If they can you need to create the entire figure, even though you can do it efficiently.
When you write a test for your circuit in which one of the outputs should not be “OK”, your model falls apart. And of course, to know that the model should produce OKs, you would need to simulate or analyze it before writing your loop.
It’s kind of like the Stroud fault simulation algorithm, which I named when a designer at Bell Labs was talking to a guy in my group who was maintaining my fault simulator. “Make the output 100%,” he said, “they’re going to probably cancel the chip anyway.”
In my examples the inputs are fixed. When considering a deterministic simulation of reality, the model used does not necessarily need to hold any alternate set of inputs.
You speak of multiple “observers” within a simulation, but I remind you that those observations are fixed and can be short-cut.
Now I’m confused about what this thing is doing. If it knows the result in advance, it could just output “42” and be done with it. You’d assume that the inputs to any part of the model can vary; otherwise you can use well-known logic minimization techniques to prune huge chunks of the simulation.
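To illustrate the pruning point with a toy example (the `Node` type and `prune` function are my own illustrative inventions, not any real minimization tool): once an input is fixed, constant propagation collapses whole subtrees of the logic network.

```rust
// A toy gate-level network as an expression tree.
enum Node {
    Const(bool),
    Input(usize),
    And(Box<Node>, Box<Node>),
    Not(Box<Node>),
}

// Substitute any fixed input values and fold constants away.
fn prune(node: &Node, fixed: &[Option<bool>]) -> Node {
    match node {
        Node::Const(v) => Node::Const(*v),
        Node::Input(i) => match fixed[*i] {
            Some(v) => Node::Const(v),
            None => Node::Input(*i),
        },
        Node::And(a, b) => match (prune(a, fixed), prune(b, fixed)) {
            // A controlling 0: the other subtree is dropped entirely.
            (Node::Const(false), _) | (_, Node::Const(false)) => Node::Const(false),
            (Node::Const(true), x) | (x, Node::Const(true)) => x,
            (x, y) => Node::And(Box::new(x), Box::new(y)),
        },
        Node::Not(a) => match prune(a, fixed) {
            Node::Const(v) => Node::Const(!v),
            x => Node::Not(Box::new(x)),
        },
    }
}

fn main() {
    // AND(input0, NOT(input1)) with input0 fixed at 0 collapses to a constant.
    let circuit = Node::And(
        Box::new(Node::Input(0)),
        Box::new(Node::Not(Box::new(Node::Input(1)))),
    );
    match prune(&circuit, &[Some(false), None]) {
        Node::Const(v) => println!("circuit reduced to constant {}", v),
        _ => println!("circuit not fully reducible"),
    }
}
```

With inputs fixed forever, the whole model reduces this way - which is why a permanently fixed input means nothing is really being simulated at all.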
Is the input fixed at one point in time or forever? If the latter, it’s not really being simulated at all - and we don’t see things like this in our universe.
I was assuming a full range of potential simulation states. If we were simulating the Earth in a pre-Copernican universe with the stars fixed in the firmament I agree it would be practical. That’s just not the universe we observe.
I assume the model is/can be reduced to a fully defined multivariate space. It could be projected along an axis, for example time. Not unlike dragging the slider when you’re looking at a CT scan.