Evidence we're not living in a simulation

This is my first posting. I see that I should have picked an easier subject; most of this, the posts of you fine people, is way over my head (assuming, perhaps wrongly, that all present ARE people), but I look forward to the experience. I’m glad there is no I.Q.-level requirement. Mine is fairly high, but not high enough for this assembly of thinkers, I suspect. So, may I begin carefully and ask “How much do IQ scores really say about intelligence?” The question of whether the universe is a computer, or other, simulation or illusion, shall we say, has been in my head since I was quite young. I haven’t much to add to this debate, and I rather doubt the notion, but I certainly cannot dismiss it. Can we ever know such a thing? I think placing limits on whatever Reason/Science may uncover is always a mistake. I’m atheist, but wonder, sometimes, whether there IS a “god” and he is some sort of computer program, or programmer, used by some advanced civilization. This, to me, seems perhaps tied to our notion of time. There seems to be a common assumption that time is composed of three “parts”, for lack of a better description: past, present, and future. I know of the past, having (I think) been there. And I’m pretty confident there is (or will be) a future, because that becomes apparent about every microsecond, but I am confused by the concept of the “present”, which is harder to define. Wave theory suggests, to me, at least, that there IS no now, now. Please, don’t bother being gentle even though I’m a new arrival. I’m a big boy and can take it.

Welcome to the Dope! I think you’ll fit in fine here.

A little friendly advice – blocks of text like your post can be hard to read. Try to divide up different lines of thought into different paragraphs/lines. Best of luck!

Thanks, iiandyiiii. I will. I don’t know whether I should title my posts, but maybe that’s not needed.

Also, I’m 60 and not very computer-experienced, so bear with me. Appreciate the warm welcome!

Titling your posts is not necessary, except when starting a new thread.

I realize you’re being facetious, but sometimes, the idea that ‘quantum uncertainty’ is some way to save computation cost is presented earnestly; however, it’s got a basic flaw, which is that it’s actually extremely hard to simulate quantum mechanical systems on a classical computer. So, you don’t save anything by ‘quantum dithering’, rather, you will typically incur exponential slowdowns.

This is part of the reason that computers making use of quantum principles can compute certain things faster than any known classical computer—or, to say it the other way around, if there was no penalty to the simulation of quantum systems, one could just classically simulate the quantum computer efficiently, and quantum computers would offer no speedup.

Hi HollowMask and welcome. I’ve read that some physicists wonder if time is really fundamental, and that our experience of time is just an illusion as we travel through the time dimension. I assume they have some math behind it, but I sure don’t understand it. So, your wondering about what “now” means is very reasonable.
When I was at MIT, Ed Fredkin, who was head of Project MAC, supported the universe-as-simulation idea. There is a chapter about him in a book called “Three Scientists and Their Gods” (long out of print, I assume). Whether the programmer is god depends on what your definition of god is.

Can you describe this in more detail? My understanding of quantum computers, admittedly limited, was that you’re creating a really massively parallel system, with zillions of particles each doing a bit of “computation”. I thought that was why certain problems are solved more quickly with quantum computers.

I also don’t understand why randomly determining the last bit of a simulation would be computationally expensive.

To be clear, I find the question of whether we’re in a simulation to be interesting but really completely unknowable and ultimately navel gazing, so I’m not seriously putting forward the idea that quantum uncertainty is proof of anything or due to dithering.

The ‘quantum parallelism’ thing is a very popular, but unfortunately misleading, explanation of how the quantum speedup works. Think about it: if all a quantum computer did was just to carry out multiple computations in parallel, how would you find out which one produced the right result?

What, actually, is responsible for the speedup is something we’re not quite sure about yet. All we know is that there are certain cases in which quantum computers outperform classical computers—a seemingly trivial example being the simulation of quantum systems, which is very hard on a classical computer (more on that later).

Perhaps an example might help. A well-known case in which quantum computers massively outperform classical ones is finding the prime factors of a number. Classically, this problem is very hard—so hard, in fact, that most current encryption algorithms rely on the fact that it can’t be done efficiently (but proposed solutions can be checked quickly, by just multiplying them and seeing whether they produce the right number).

However, in 1994, Peter Shor showed that it is possible in principle to use quantum systems to perform the computation much faster than possible classically. This is often where the ‘quantum parallelism’ explanation comes in: basically, the story goes, the quantum computer tries all possible factorisations simultaneously, and then pops out with the right one. But in reality, it’s a bit more subtle.

Basically, the problem of finding prime divisors can be reduced to the problem of finding the period of a certain function—that is, the interval in which the function repeats its value. That’s just classical number theory. This function is now expressed in a superposition of many different quantum states—that’s where the fact that a quantum system can ‘occupy multiple states at once’ (more accurately, the fact that for every set of accessible states of a quantum system, their linear combination is an accessible state, as well) comes in.

Here’s the crucial difference: in the ‘quantum parallelism’ picture, the right answer would be encoded in a single one of the superposed states, and there wouldn’t be any measurement you could do that picks out just that one (unless you knew the result in advance). But the period we’re looking for is now a global property of the whole superposition—and there, you can make an appropriate measurement to find it.

So the answer doesn’t end up in a single one of the computation’s threads, but rather is encoded globally in all of them.
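For concreteness, here is a rough Python sketch of the classical half of that reduction (the helper names find_period and try_factor are just my own illustration); the brute-force period search is precisely the step Shor’s quantum subroutine performs exponentially faster, and everything around it is the ordinary number theory mentioned above:

```python
from math import gcd

def find_period(a, N):
    """Brute-force the smallest r > 0 with a^r = 1 (mod N); this is the slow part."""
    x, r = a % N, 1
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def try_factor(N, a=2):
    """Try to split N using the period of a^x mod N (can fail for an unlucky a)."""
    if gcd(a, N) != 1:
        return gcd(a, N)           # lucky guess: a already shares a factor with N
    r = find_period(a, N)          # Shor's quantum subroutine replaces this call
    if r % 2 == 1:
        return None                # odd period: pick another a and try again
    candidate = gcd(pow(a, r // 2) - 1, N)
    return candidate if 1 < candidate < N else None

print(try_factor(15))   # 3, since 2^4 = 16 = 1 (mod 15) and gcd(2^2 - 1, 15) = 3
```

Checking a proposed factor is the easy direction noted earlier: multiply it by the cofactor and see whether you recover N.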

Well, first of all, true randomness wouldn’t be computationally expensive, but actually impossible: there’s no way to algorithmically generate truly random numbers, since that would mean generating randomness deterministically, which is sorta a contradiction in terms.

But that’s not really the problem with simulating quantum mechanics: use a large enough seed, and nobody will ever tell whether you use true or pseudo-random numbers. Randomness only enters the picture upon measurement, where, if you’re inclined to believe in that sort of thing, the quantum state ‘collapses’ to one definite branch out of a given superposition.

The quantum evolution itself, however, as long as no measurement occurs, is perfectly deterministic (and hence, if you subscribe to some form of ‘no collapse’ interpretation, such as Everettian many-worlds, quantum mechanics is actually a deterministic theory).

The problem is with the size of quantum states: they’re huge, much larger than classical ones, so keeping track of them is a hard problem. To see this, consider how a classical bit is described: it can either be 0 or 1, and that’s it. Easy to handle.

A quantum bit, however, can’t just be either 0 or 1, but can also—as mentioned before—occupy values ‘in between’, meaning it can be in any linear superposition a"0" + b"1", where a and b are complex numbers such that |a|² + |b|² = 1. Now, each complex number corresponds to two real numbers, but you don’t need four real numbers to describe a qubit, since the constraint |a|² + |b|² = 1 removes one degree of freedom, and there’s a further freedom in choosing the phase angle that I won’t get into here. That, however, still leaves two real number degrees of freedom which are necessary to describe the qubit’s state, which is quite a bit more than in the classical case, where we had one number that can either be 0 or 1!

So, the reason it’s hard to simulate quantum systems is that there’s a lot of information to keep track of even in the simplest states (and the problem scales exponentially with system size, too). What’s more, basically all of that information is hidden: with a measurement on a qubit, you can extract only one single bit of information—same as in the classical case. The reason is, roughly, that the measurement ‘changes’ the state of the qubit—so if your measurement returned the value 0, the qubit will now be in state “0”, and every further measurement can only confirm that fact, or effect another state change—effectively, the additional information has been ‘destroyed’ by the measurement.
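A tiny NumPy sketch may make that last point concrete (the measure function is just my own illustration of the standard Born rule): two complex amplitudes go in, but only a single bit ever comes out, and afterwards the amplitudes are gone.

```python
import numpy as np

rng = np.random.default_rng(0)

# A qubit state a|0> + b|1> is just two complex amplitudes with |a|^2 + |b|^2 = 1.
a, b = 0.6, 0.8j
state = np.array([a, b])
assert np.isclose(np.vdot(state, state).real, 1.0)

def measure(state):
    """Born rule: return an outcome (0 or 1) plus the collapsed post-measurement state."""
    p0 = abs(state[0]) ** 2
    outcome = 0 if rng.random() < p0 else 1
    collapsed = np.zeros(2, dtype=complex)
    collapsed[outcome] = 1.0
    return outcome, collapsed

outcome, state = measure(state)
print("first measurement:", outcome)                       # 0 with prob 0.36, 1 with prob 0.64
print("repeats:", [measure(state)[0] for _ in range(5)])   # all identical to the first result
```

The repeats can only confirm the first outcome; nothing you do to the collapsed state recovers the original values of a and b.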

Half Man Half Wit, thanks for that lengthy explanation. To be honest, I understood the quantum computer bit much better than the simulation of quantum uncertainty bit.

Given my basic lack of understanding of the second part, this question is probably off-base, but here goes anyway. When you say it’s difficult to measure and keep track of quantum states like that, are you looking at it from the POV of a simulator or a simulatee? That is, would it be easier to generate the random quantum states from outside the system and just let the simulation keep track of those weights, only collapsing them when necessary, than it would be from inside the simulation to model them?

I know that I’m not really expressing myself well here.

(On the true randomness part, since we have no idea what kind of universe this hypothetical simulation is in, I think that’s not a strong objection. For example, we can generate true randomness from radio static, right? Sample a bunch of snowy televisions as inputs to the simulation, and Robert is my mother’s brother. Maybe the actual universe has plentiful truly random sources.)

I’m thinking from the point of view of a simulator, faced with the task of simulating a quantum system on a classical computer. This can’t be done efficiently, where ‘efficiently’ means in this context a certain relationship between the size of the problem (instance) and the time (in elementary computational steps) needed to carry out the simulation. To simulate a generic quantum system, that time scales exponentially with the system size.

If this were not the case, meaning that you could efficiently simulate quantum systems using classical computers, then of course you could not have any quantum speedup: just efficiently simulate the quantum system implementing some quantum algorithm, and you’re set.

The reason for this is simply that the quantum state is exponentially ‘big’, so to speak. Consider a classical system, which can be described by some bit string, “001001110…”. Say this is of length n. Thus, you need n bits to describe the state.

For a quantum system, however, you need two complex numbers for a given qubit. For a system consisting of more than one qubit, this is, contrary to the classical case, not additive, but multiplicative; the reason is, again, superposition: a two qubit system can occupy any state of the form a"00" + b"01" + c"10" + d"11", where again a, b, c, and d are complex numbers. Thus, you need 2^n complex numbers to describe a quantum state of n qubits, rather than just n times either 0 or 1.

All of this information has to be kept track of in a computation, and all transformations of the state must be computed for all those numbers. This is simply a much bigger amount of data to process than in the classical case; this accounts for the computational overhead.
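To put rough numbers on that (my own back-of-the-envelope, assuming 16 bytes per complex amplitude):

```python
# Memory needed just to store a generic n-qubit state vector on a classical machine.
for n in (10, 30, 50):
    amplitudes = 2 ** n
    gigabytes = amplitudes * 16 / 1e9        # 16 bytes per complex number (assumed)
    print(f"{n} qubits: 2^{n} = {amplitudes:.3e} amplitudes, ~{gigabytes:.3g} GB")
```

Ten qubits fit in a few kilobytes, thirty already need around 17 GB, and fifty would need roughly 18 million GB, versus just n bits for the corresponding classical register.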

I’m not really sure I get what you’re meaning here. The randomness is not really a factor in the computational complexity of simulating quantum systems; you can just roll a die or, as you say, sample some real (outside) world randomness to make a decision. This can be done efficiently. Or are you asking whether it just might seem to us on the inside that simulating quantum mechanics is hard, but that it needn’t be the case for the simulators?

If so, I guess that’s true—the outside world might work according to completely different laws, in which it’s indeed simple to simulate quantum mechanics, i.e. which relates to quantum mechanics similarly to how quantum mechanics relates to classical mechanics. To such an outside world, simulating a quantum world could indeed be a measure of ‘cutting the costs’, but not because there is some limited exactness or resolution up to which the world is simulated.

However, there has actually been some investigation into possible laws of physics that permit even stronger computers than quantum mechanics, and IIRC, it seems that quantum mechanics occupies a fairly special place in the landscape of computation; if you get even more computational power, pretty much everything becomes trivial, and who would want to live in such a world?

Yes, that’s true. But even if you were unwilling to engage in such sampling, or say if the outside world is perfectly deterministic, you could engineer things such that those inside never could catch on to the fact that you’re not using ‘true’ randomness.

Thanks Voyager! I feel a little less dumb now. MIT? I wanted to go there, too, but mainly 'cause it’s easy to spell.

The question of Time is really a complex one, but so interesting even a slug like myself can’t stop pondering it. I’ve read some who propose that past, present, and future all exist at once and we are merely passing along or through it as would a wave, which, as you know, isn’t a “thing” so much as an energy. Multi-verse theory also seems promising and would not conflict, I think, with the idea.

Yours and other comments on this subject fascinate me, and these are the main reasons I joined. I don’t know a single person who can converse intelligently on stuff like this. I’m an ordinary shmo with not a whole lot of education. Thanks for taking the time.

Probably the strongest evidence for a simulation would be the phenomenon of socks missing in the dryer.

It seems to violate conservation of energy.

Half Man Half Wit, thanks again for your detailed explanation. I think I need to read it a few more times to really get it, so maybe tonight. It’s been a long time since I took any Physics classes.

Good point. Not that I claim to understand this stuff, but I would argue that there is merit to my little joke because quantum states truly are probability functions. I think your point, however, can be expressed by saying that simulating mere randomness doesn’t capture important aspects of quantum behavior, like entanglement and tunneling. Nevertheless, whatever a “good” model of quantum behavior might be, it would have to be a non-deterministic one that simulated a wave function and not a discrete particle like in the macro world.

As an aside, [pseudo-]random number generators are of course common in real-life simulations, one of the applications being to provide inputs about processes that are below a practical level of modeling granularity, like small-scale circulation perturbations in climate models.

Algorithmically, no. But with an external input (like the lower few bits of a fast-ticking clock as the seed) it would effectively be random. Which then leads to this argument: that if the external input was the measurement and consequent collapse of quantum states (the thing that may or may not have proved fatal to Schroedinger’s cat!) then the number generator is truly random in the sense that it’s completely non-deterministic not just in practice but in theory.
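A toy version of that clock-seeding idea, just as a sketch (and certainly nothing you would want for cryptography):

```python
import random
import time

seed = time.time_ns() & 0xFFFF       # low 16 bits of a nanosecond clock as the 'external input'
rng = random.Random(seed)            # the generator itself is fully deterministic
print([rng.randint(0, 9) for _ in range(10)])
```

Everything after the seeding is deterministic in principle; only the seed’s origin is unpredictable in practice. Seeding from a quantum measurement instead would, on the standard view, make the unpredictability a matter of principle rather than practice, which is exactly the distinction being drawn above.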

How can we possibly know that quantum evolution is deterministic, and what practical meaning does that even have? That strikes me as being the same kind of question as “are we living in a simulation?” which is ultimately more philosophical and metaphysical than scientific. A practical implication of quantum uncertainty IMHO would seem to be the fact that no matter how much information you have about the state of the universe, you can never exactly predict the future because the “state” ultimately descends to the quantum level – random electron state transitions and particle decays and so forth.

A “no-collapse” interpretation as in Everett’s “many worlds” doesn’t really help unless you drastically redefine what you mean by “universe”, because quantum states still collapse into unpredictable states in this world!

Any sufficiently complex system is indistinguishable from Magic.

Are these the same idea? Is system and simulation a simile? Not trying for the alliteration, here (sure, I’ve been waiting to use that word for 45 years).
Matching Magic with Reality seems much harder.
Does the magician really saw the lady in half, or does it just look like it from our point of view?

We can indirectly measure it, by measuring interference effects. I’m not sure I can think of a good way to explain this, but I’ll give it a try.

First of all, in quantum mechanics, there are two distinct ‘modes’ of evolution, one being the ‘ordinary’ quantum evolution following the Schrödinger equation, the other being the ‘collapse’ of the wave function to a specific state in the course of a measurement.

Now, the determinism of the quantum evolution (with which I’ll always mean the ordinary, Schrödinger, evolution) follows directly from the properties of the Schrödinger equation—but how do we know that’s correct? After all, whenever we measure, the system doesn’t evolve according to it, but collapses!

Two points can be made here. One is the requirement that we always want the sum of all probabilities of what we could observe to be one—this just means that something must definitely happen. This implies conservation of information, which in turn implies determinism—if the state at some later time contains all the same information as it does at an earlier time, then knowledge of the later state suffices to reconstruct the earlier state, and vice versa; thus, the evolution is exactly predictable (and retrodictable). It’s only in the measurement context that information is thrown away.
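A minimal numerical illustration of that “information conservation means you can run it backwards” point (my own sketch; the matrices are just a generic rotation and a generic projection):

```python
import numpy as np

theta = 0.3
U = np.array([[np.cos(theta), -np.sin(theta)],     # Schrödinger-style evolution: unitary, invertible
              [np.sin(theta),  np.cos(theta)]])
P = np.array([[1.0, 0.0],                          # measurement-style 'collapse onto 0': a projection
              [0.0, 0.0]])

psi = np.array([0.6, 0.8])

later = U @ psi
print(np.allclose(np.linalg.inv(U) @ later, psi))  # True: the earlier state is fully recoverable

print(np.linalg.det(P))                            # 0.0: P has no inverse, so after P @ psi the
                                                   # pre-measurement state cannot be reconstructed
```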

But there’s also a more physical reason for the determinism of the usual quantum evolution, and that’s given by interference effects. These are something we observe: if we direct, say, a stream of electrons at an aperture with two openings, then we will observe a characteristic interference pattern behind the openings (see here for an example of the interference pattern building up).

And in fact, in order to explain this, we must assume that the evolution of the electrons before getting measured, yielding the interference pattern, must be deterministic. Essentially, the reason is that you must ‘keep’ all the components of the wave function, such that the appropriate components may mutually reinforce or cancel each other. If you ‘throw away’ some part of the state, that is, if you make an irreversible transformation on the wave function, then this cancellation/reinforcement can’t happen anymore.

Perhaps I should say something about what I mean by ‘throwing away’ parts of the state. Basically, the quantum state can, in general, be considered to be a sum of components, each of which corresponds to some ‘definite’ state of affairs; think about that (in-)famous cat, which is ‘both dead and alive’ (which is somewhat of a gloss, really; it would be as accurate to say that it’s neither, and perhaps just to say that it’s in a quantum state that has no direct classical analogue). Indeterminism creeps in whenever we reduce this sum to just one or a couple of its constituent terms: then, since the prior state can’t be reconstructed from the state after the reduction, we have a loss of determinism. We can neither tell to which state a system is going to evolve in such a reduction/‘throwing away’ process, nor from which one it has evolved.

This is now what happens in measurement: from the superposition, i.e. the sum of components, one (or more than one) is selected to become the ‘actual’ one. From this, we now can’t tell what the state was before measurement, and neither can we tell, knowing only the pre-measurement state, what it’s going to ‘jump’ to on measurement (except with a certain probability).

However, in order to explain effects such as interference, we must model the quantum state as smoothly changing, in order to capture the delicate cancellation/reinforcement effects; this forces a deterministic evolution rule, as from the state at each point during the evolution, we can deduce the state at every other point.

Nevertheless, there are proposals to get rid of determinism at this level, such as the GRW theory. In this theory and others of its kind, collapse doesn’t just happen during measurements, but always has a small chance of occurring; that chance gets increased the more ‘macroscopic’ a system becomes, such that things at the level of our everyday experience pretty much always collapse, and thus are classical.

But note that this is an alternative theory to quantum mechanics, which makes empirically distinguishable predictions. Currently, there is no evidence for one in favour of the other; no ‘spontaneous collapse’ and the attendant loss of interference effects has ever been observed. But it remains a possibility worthy of investigation.

So perhaps it would be more accurate to say that we know it’s deterministic up to a certain, ever increasing, level of precision, which is really all you ever get in science anyway.

Thanks for the very detailed explanation, which I’ve read through at least three times. I’m not arguing the point at all but rather trying to clarify it in my simple mind, and I suppose I’m using the “devil’s advocate” approach to do so. I would first ask if you agree with this statement:

The bolded part in the quote seems to contradict the bolded part of your statement, though maybe I’m misunderstanding yours.

I don’t see the interference effect as “proof” of determinism at all, but rather just evidence of the fact that quanta are neither classical particles nor classical waves, but rather things that have superpositional properties described by a probability wave function. So naturally, on average, you would get more or less the same interference pattern over time in a dual-slit experiment, and the longer you ran the experiment, the more consistent the pattern would be. (I’ve also seen the interference paradox used as “proof” of Everett’s “many worlds”, which is just silly. Not that many-worlds is silly, just that this sure as hell isn’t proof of it!!)

To re-iterate the fundamental question: If you had a collection of quanta in a fixed space, would the state at some future time be predetermined – such that if you had exactly the same collection in exactly the same states somewhere else, it would evolve to exactly the same state as the first collection? You appear to be saying that the answer is provably “yes”.

I have yet to be convinced that my [unfounded gut-level] belief is wrong – that belief being that the question isn’t even meaningful because “exact state of quantum particles” is a contradiction in terms. That at the quantum level of the universe, God is running a bunch of random number generators, whose results are probabilistic and therefore approximately predictable on a large scale, but not on very small scales.

And moreover, that small-scale quantum effects can propagate upward and potentially create unpredictable effects, say, in individual molecules. And that this can determine things like, say, whether an individual toxic molecule passes harmlessly through someone’s body, or happens to trigger a cascade of events that creates a cancerous tumor that causes someone to die at a young age rather than live to an old one. And that it’s utterly, fundamentally unpredictable – or, if you prefer, that it’s unpredictable which of Everett’s worlds this particular errant quantum will collapse into – no matter how much information we have. It’s a view that is philosophically satisfying, at least! :)

(Apologies for the verbosity…)

Reasonably well.

Both are talking about different things. The information loss I mean is in the collapse of the wave function from a superposition of many ‘possible states’ to one single definite state—which only happens during measurement. The uncertainty principle also applies to measurement, but moreover to the simultaneous measurability of certain quantities—the knowledge of one precludes knowledge of the other.

Basically, the state of a quantum system—at least as described by the formalism—does not always yield a definite value for a certain quantity. Take position: the wave function may specify only that an electron is here or over there. Only upon measurement (on the standard interpretation) does the electron have to decide. Let’s say we have found the electron here.

In one sense, we have gained information upon this measurement: we now know where the electron is. But in another sense, something has been lost: the process of the electron transitioning from being ‘here or there’ to being ‘here’ can’t be played back; we can’t reconstruct the state of the electron as it was before the measurement from what we now know. This implies indeterminism: whenever the state at some point in time isn’t enough to compute the state at all points in time, we can’t perfectly predict the behaviour of the system.

Now, the uncertainty relation is about the information we can say we have gained upon measurement. Intuitively, the state of an electron in which it is ‘here or there’ is (or may be) also a state in which it is ‘this fast’, i.e. while the position is undetermined, the momentum (or velocity) is completely well defined. However, if we measure, and find the electron ‘here’, then the state in which the electron is ‘here’ is also one in which it is ‘this fast or that fast’—i.e. now, while we perfectly know position, momentum is undetermined.

So, the uncertainty relation is about the knowledge of the electron’s properties, while indeterminism and the question of information conservation/loss is about the way the electron’s state changes over time. When it does so continuously—what I have called ‘Schrödinger evolution’ previously—then at any given point in time, I can use the state to compute any earlier or later state. But when it changes discontinuously—when ‘here or there’ becomes ‘here’, as it does upon measurement—then the evolution can’t be played back or forward like that anymore, and the state after measurement no longer contains the information of what the state was before.

At any given point in time, however, the state is governed by an uncertainty relationship—if we have full information about its position, then we have no information about its momentum, and vice versa. Nevertheless, this state itself can evolve perfectly deterministically.
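One can see a discrete version of that trade-off numerically; in the usual formalism, position and momentum amplitudes are related by a Fourier transform, so a state sharply peaked in one is spread out in the other (the helper below is just my own illustration):

```python
import numpy as np

N = 64
x = np.arange(N)

def peak_momentum_probability(position_amplitudes):
    """Normalize, Fourier-transform, and return the largest momentum probability."""
    psi = position_amplitudes / np.linalg.norm(position_amplitudes)
    phi = np.fft.fft(psi) / np.sqrt(N)      # unitary discrete Fourier transform
    return (np.abs(phi) ** 2).max()         # 1/N would mean 'completely undetermined'

sharp = np.zeros(N); sharp[32] = 1.0                  # electron definitely 'here'
broad = np.exp(-0.5 * ((x - 32) / 10.0) ** 2)         # electron spread over many sites

print(peak_momentum_probability(sharp))   # 1/64 ~ 0.016: momentum maximally uncertain
print(peak_momentum_probability(broad))   # ~0.55: momentum quite well defined
```

The exact numbers depend on the widths chosen, but narrowing the position peak always flattens the momentum distribution, and vice versa.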

Moving on to the point about interference and determinism. In principle, you could be right, and interference effects could be only approximate, i.e. the interference pattern might look only similar in each installment of the two-slit experiment. But there are other experiments you can do which depend on exact cancellation, thus showing that during the evolution of the quantum state through the experimental setup, no information can be lost.

The most prominent such experiment is the so-called Mach-Zehnder interferometer, a schematic realization of which can be seen here. Light enters from the left and is partially transmitted, partially reflected at the first beam splitter; the upper and lower paths are then reflected by the two mirrors, recombine at the second beam splitter, and impinge on either of the two detectors.

This experiment can be done with single photons: they enter from the left, and at the first beam splitter, are either reflected or transmitted with a probability of 50%. Since quanta are fickle, however, each photon enters into a superposition of ‘being reflected or being transmitted’, that is, it goes from ‘moving right’ to ‘moving up or moving right’.

Now, the ‘moving up’ and ‘moving right’ parts encounter a mirror, and thus are transformed into ‘moving right’ and ‘moving up’, respectively. These two parts then again meet at the second beam splitter. There, the ‘moving up’ part is again either transmitted or reflected—becomes ‘moving up or moving right’. Likewise, the ‘moving right’ part becomes ‘moving up or moving right’, as well.

However, and this is the key point, due to the fact that reflected waves suffer a phase change at a fixed boundary (a crest is transformed into a trough, and vice versa, like e.g. here), the two ‘moving right’ parts are now oppositely aligned—they cancel each other out—while the two ‘moving up’ parts reinforce one another.

Thus, the upshot is, if you do this experiment with single photons, only the upper detector will ever click—this is the simplest realization of interference. The right detector corresponds to a ‘dark’ band, and the upper detector to a ‘bright’ one. And crucially, this would not be possible if any component of the quantum state were ‘dropped’ at any point during the experiment—if anything, any information at all, were lost, there could neither be complete cancellation nor complete reinforcement, and both detectors would register the photon some of the time. Hence, the evolution of the quantum state through the interferometer must be perfectly deterministic; any violation of determinism would lead to a count rate at the ‘dark’ right detector that couldn’t be explained by measurement inaccuracies.

Moreover, if at any point, we introduce a measuring instrument into the path of the photon, to determine whether it is ‘actually’ taking the upper or lower one, then the interference effect will vanish—consistently with the attendant ‘dropping’ of the part of the wave function that is not consistent with the detection, that is, with ‘upper path or lower path’ becoming (say) ‘upper path’ definitely.
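The whole interferometer fits into a few lines of matrix arithmetic (my own sketch; the 50/50 beam-splitter matrix is a standard textbook convention, and which physical port ends up as the ‘bright’ one depends on the phase conventions of the actual optics, so don’t read too much into the mode labels):

```python
import numpy as np

# Modes: index 0 = 'moving right', index 1 = 'moving up'
BS = (1 / np.sqrt(2)) * np.array([[1, 1j],
                                  [1j, 1]])     # 50/50 beam splitter (unitary)
MIRRORS = np.array([[0, 1],
                    [1, 0]])                    # the two mirrors swap the paths

photon = np.array([1, 0], dtype=complex)        # photon enters moving right

out = BS @ MIRRORS @ BS @ photon
print(np.round(np.abs(out) ** 2, 12))           # [1. 0.]: one detector always clicks, the other never

# Now 'measure which path' after the first beam splitter, i.e. keep only one
# branch of the superposition and renormalize; the cancellation is gone:
after_bs1 = BS @ photon
kept = np.array([after_bs1[0], 0])              # photon 'found' in the right-hand path
kept = kept / np.linalg.norm(kept)
print(np.round(np.abs(BS @ MIRRORS @ kept) ** 2, 12))   # [0.5 0.5]: both detectors click
```

Dropping any part of the state at any point, which is what an indeterministic evolution would amount to, ruins the exact cancellation in the same way.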

(Perhaps I should add that what I’m saying is not controversial—that the ordinary evolution of quantum systems is deterministic, with any possibility for indeterminism introduced during measurement, has been the ‘received view’ ever since John von Neumann codified modern quantum mechanics in his book ‘The Mathematical Foundations of Quantum Mechanics’. See, e.g., wiki.)

There are exact quantum states; however, those states don’t exactly determine all of a quantum system’s properties. For a classical object, you can simply write down a list of all its properties, which is equivalent to the state of the object. But in the quantum realm, such a list isn’t possible: only certain properties can be well-defined together at the same time. This is sometimes expressed as ‘there are no dispersion-free quantum states’. Here, dispersion roughly means the error one has about the value of some property.

But even such states can be exact, and evolve deterministically. Consider as a classical analogy a probability distribution over a certain set of values—say, concerning the position of a particle. It’s here with some probability, there with another, etc. The position of the particle is not exactly defined, but the probability distribution itself is. That’s roughly similar to the quantum state, with the only difference being that in the classical case, you can, through measurement, eliminate all uncertainty. Note that here, measurement yields a similar discontinuity: from knowing just that the particle is ‘here’, you can’t anymore gauge what the likelihood was that it shows up ‘here’. (From one single throw of a coin that comes up heads, you can’t tell whether the coin’s fair or not.)

Thus, the quantum state evolving deterministically, yet not permitting perfect knowledge of all properties of a quantum system, is not in contradiction.

Stephen Baxter dealt with the possibility in Touching Centauri. The basic theory was that, if humans specifically are the subject of a simulation, you don’t really need to simulate the entire universe. Heck, until the 60s you didn’t even need a real moon, just a bright apparent light in the sky. You only need one that acts solid when we bounce signals off it and decide to land on it. The idea is they’re trying to keep the processing power to a minimum, and a way to detect this is to push the simulation to the limits. So they decide to bounce a powerful laser off one of the Centauri planets.

They set up a super advanced detector to sense the handful of returning photons ~8 years later, and none came back: compared to simulating our small region of the solar system, solidly simulating a region of space encompassing light-years requires so many more orders of magnitude of processing power that it breaks the simulator.

These arguments don’t really disprove the simulation theory when examined closely. However, even if we are in a simulation, that simulation is occurring in a real universe, and real universes have consequences. Whether or not we believe this experience has consequences, a simulation would still operate under the “watchful eye” of a real universe and be subject to its laws.
So really, it doesn’t matter if it’s a simulation in the big picture, because simulations, by definition, can only happen in real universes, and real universes don’t care if you’re a simulation, because to them, you are real.