What do you think about the simulation theory?

The presumption that morality goes out the window if your situation is ‘fake’ doesn’t apply to simulations of the type being described, because to the residents of the simulation their reality is real in every sense. The table in front of you is a table, despite actually being a bunch of undifferentiated subatomic particles, because it behaves as a table in every way that matters to you. Likewise, the people you interact with are entirely real, in both a literal and a functional sense. The point of this type of simulation is that the actions being ‘simulated’ are actually happening to the simulated entities - the entities really do change as a result of their interactions with each other.

Now, you could argue that being in a simulation means it could be turned off at any time, causing reality to cease to exist. But, statistically speaking, that will never happen. After all, there can be only one second in which reality ends, and there have been over 170,000,000,000,000,000 seconds so far during which it has not. That means the odds that any randomly selected second will be the last one are infinitesimally low, and getting lower all the time. So it’s reasonable to conclude that the next upcoming second will not be the end, nor the one after that, nor the one after that…
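The “it hasn’t ended yet” reasoning above resembles Laplace’s rule of succession, which is one (hedged) way to put a number on it: after n trials with zero failures, one estimate of the probability that the next trial fails is 1/(n+2). A minimal sketch, assuming the standard ~13.8-billion-year age of the universe:

```python
# Sketch of the argument via Laplace's rule of succession: after n "trials"
# (elapsed seconds) with zero "failures" (world-endings), one estimate of the
# probability that the NEXT second is the last is 1 / (n + 2).

SECONDS_PER_YEAR = 365.25 * 24 * 3600        # ~3.156e7
AGE_OF_UNIVERSE_YEARS = 13.8e9               # assumption: standard estimate

n = AGE_OF_UNIVERSE_YEARS * SECONDS_PER_YEAR  # ~4.35e17 seconds so far
p_last = 1 / (n + 2)

print(f"seconds elapsed so far: {n:.3g}")         # comfortably over 1.7e17
print(f"P(next second is the last): {p_last:.3g}")
```

Of course this treats each second as an independent trial, which is exactly the assumption a skeptic would attack; it is an illustration of the argument, not a defense of it.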

Science probably wouldn’t work in such a world, though - at a minimum, the simulation would have to maintain an ever-expanding store of historical state so that things stay consistent going forward. Sure, the simulation only has to render the side of the tree you’re looking at, but if you walk around and look at the other side, then when you get back to the front the universe has to remember what the front looked like before and recreate it. Everything has to be remembered for object permanence to be a thing.
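That “render on demand, then remember what was seen” bookkeeping is essentially a lazy cache. A toy illustration (the class name and the per-location random generator are my own hypothetical stand-ins, not anything from the thread):

```python
import random

class LazyWorld:
    """Generates detail only when first observed; caches it for object permanence."""

    def __init__(self):
        self._cache = {}          # location -> rendered detail

    def observe(self, location):
        # Render detail only the first time something is looked at; afterwards,
        # replay the stored result so repeat observers see a consistent world.
        # (Note: str hashes are salted per process, so the "detail" differs
        # between runs, but it stays consistent within a single run.)
        if location not in self._cache:
            rng = random.Random(hash(location))
            self._cache[location] = rng.randint(0, 10**6)
        return self._cache[location]

world = LazyWorld()
front = world.observe("tree:front")       # first look renders the front
world.observe("tree:back")                # walk around the tree
assert world.observe("tree:front") == front   # the front is unchanged on return
```

The catch is the post’s point: `_cache` only ever grows, so the storage bill for object permanence never stops climbing.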

I figure that doing that for everything every single person does and sees would fill up a Jupiter brain pretty quickly.

Yeah, I’ve rendered a few trees in my time; they do require a lot of processing. There are many more trees on the Earth than there are human beings.

But trees are significantly less complex than human minds, especially at the level of detail a human can perceive, so I’d not expect permanence to be a real problem.

But human brains are a sloppy mess, and their ability to keep facts straight is spotty at best. (Cite: Seen the news lately?) So I think that science in dreamland would still suffer quite badly, to the point that it would never even have been developed as a discipline.

Who said that?!

Computing power may not be the issue, since you can slow down the simulation as much as you want, and no one inside the simulation will be any the wiser.
The real problem is computing power in the sense of the energy required. Computation takes power. Simulation of the universe takes lots of power. Simulating another universe inside that simulation would take even more. And so on. That pretty much wipes out the probability argument - there can’t be that many levels of simulation because the top level couldn’t supply the power for them.
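A toy model of that power-budget argument, assuming (arbitrarily, as an illustration) that each nested level can devote at most a tenth of its power to the level below, and that a universe-scale simulation needs some minimum power to run at all:

```python
def max_nesting_levels(top_power, divisor_per_level, min_power):
    """Count how many nested simulations fit before the power budget runs out.

    Each level can pass only 1/divisor_per_level of its power to the next
    level down; a level is viable only if it gets at least min_power.
    (All three parameters are made-up illustration values, not physics.)
    """
    levels = 0
    power = top_power
    while power // divisor_per_level >= min_power:
        power //= divisor_per_level
        levels += 1
    return levels

# e.g. a top level with 10**26 units of power, 10x overhead per level,
# and a floor of 10**15 units per simulated universe:
print(max_nesting_levels(10**26, 10, 10**15))   # 11
```

Whatever the real numbers, the shape of the result is the same: the power budget shrinks geometrically with depth, so the tower of simulations-within-simulations has a hard floor.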

This implies that there is another layer of code running on top of the simulation, monitoring it to figure out when extra detail is needed - and making sure that the newly created models are consistent for all observers. It gets even more interesting if the simulation includes other intelligent species far away from us: every one of them would have to have its part of the simulation modeled so that they all see the same new supernova, for example.
I’ve written lots of simulators for computer architectures and digital circuits, and it ain’t as easy as you think.

I don’t think it’s easy at all. I think it’ll be possible with the kind of computing power we’re likely to have assuming our civilization doesn’t hit a great filter any time soon.

I agree; there are a lot of problems with the simulation theory. For instance: what would be the odds that I would pick the argument I did, and you would pick the one you did, phrased exactly as we have phrased them? The odds against that are astronomical, so it must never have occurred.

Anyway, I was simply addressing the theory as stated, which posits that since computational power has been increasing, it is inevitable that it will rise to the level of running high-fidelity ancestor simulations.

The same flawed argument occurs in artificial intelligence research. Since AIs are growing smarter by leaps and bounds (oh god, if people only knew how dumb AIs are…), it is supposedly inevitable that general artificial intelligence (GAI) will occur, and even superhuman GAI. But that ignores that there are innumerable reasons why it may not.

It seems evident to me that the easiest and sanest way to simulate the universe would be to simulate the entire thing from the very start, in the fullest detail you’re ever going to need. If you don’t, then your simulation has to be prepared to upgrade the data (so to speak) as the simulation’s agents focus their attention on it - and those upgrades have to be retroactively consistent with everything that has happened in the past. Once we discover molecular chemistry, all the random stuff that wasn’t previously driven by chemistry suddenly has to be upgraded to have molecules in the right places, consistent with all the non-chemically-driven fires and erosion and rusting that was randomly happening before. Even if you only have to upgrade specific parts of reality as they are examined, if the random events that occurred earlier don’t match the expectations implied by the new system, then the new system won’t work.

Expecting this to work out would be like expecting that if you dumped out a box of randomly colored beads, they would fall into place to form a perfect picture of the Mona Lisa.

Of course, to simulate the entire universe you need a vast amount of storage. And processing power, since the theory assumes these simulations are being created by people on purpose, which implies that they want the simulations to run long enough to serve some sort of purpose within their creators’ lifetimes. Arguably the most straightforward and effective way to achieve this would be to cordon off an area the size of our entire universe and store the data in arrangements of interacting particles there.

Hmm, I don’t buy this. Sure, when I wake up I become much more aware of the inconsistencies in my dreams, but when I’m asleep and dreaming I don’t find it anywhere near as convincing as reality. For one, I don’t think I’ve had a dream where I truly believed I would die from whatever bad thing was about to happen to me. It’s always been more like watching a scary movie, complete with abrupt scene changes.

Yeah I like this. If there are simulations run in some universe, then why can’t it be us who run the simulations and the real universe is this one? Highly unlikely? Ok. But it’s also highly unlikely that any of us finds ourselves here and now in the first place.

So far we’ve only examined a minuscule fraction of the universe in detail. The planets orbiting TRAPPIST-1 are still a mystery to us. Why would we need to model planets in the Leo Supercluster?

Because unless there is a video-game-style invisible wall that prevents us from not just going there, but even looking at it with ever-better telescopes and other sensing equipment, then suddenly that little dot of white paint on the rigid dome of the firmament is going to have to be swapped out for an actual solar system in a damn hurry.

(Plus, after a certain point, it’s just easier to dump a few rocks out there, just to fill out the terrarium while you play with the monkeys on your preferred rock.)

That would be an answer to the OP, of course. As we examine the universe in much greater detail, the amount of detail required for the simulation would expand, until it is either fantastically large, or running incredibly slowly, or both.

We could also determine whether we are in a simulation by building our own matrioshka brain, which of course the simulating entity would need to simulate in turn.

Simulation theory is basically externalized solipsism. It’s just a question of whose imagination we’re all a figment of. In either case it’s a vague and unfalsifiable hypothesis, and thus utterly pointless.

3.04 trillion trees, I read somewhere. That’s a lot of trees. But if a tree isn’t rendered properly in a forest and no one is there to see it, does it take up bandwidth?
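A quick back-of-envelope check on that figure (the ~3.04 trillion tree count is the one quoted above; the rough 8-billion world population is my own assumption):

```python
trees = 3.04e12    # estimate quoted in the thread
people = 8e9       # assumption: rough current world population

# The earlier claim "many more trees than human beings" checks out:
print(f"trees per person: {trees / people:.0f}")   # 380
```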

I don’t know if it’s reasonable to require that every particle in the universe be simulated from the Big Bang until the first sentient life. If the stated idea is that this is an ancestor simulation, then we only need to start at some point when there are ancestors. The Earth can be rendered in great detail from its stellar formation, and then populated with creatures at whatever point they want.

If we’re the only sentient creatures in a simulated universe, then the stars can just stop after a certain point, like if they were projected on a crystal sphere. Once something is seen, write it down so it doesn’t conflict. I know that this is absurdly complex, but absurd complexity isn’t impossibility.

And as it happens, the rules to our simulation pretty much preclude visiting anywhere outside our particular sphere.

A simulation might not need to faithfully simulate creatures. It could make up its own.

Maybe dogs only exist in the simulation.

In which case, I’ll take the blue pill…

I intended to post this, pretty much exactly as written.

Just as useless as Last Thursdayism, because it changes nothing.

Maybe one of us is a simulation of the other. Or we’re simulations of each other. WE’RE DOWN THE RABBIT HOLE NOW.