Could quantum physics ever be able to predict the future?

There is only one way to know. Make predictions using QM, then wait to see if they come true.

I have no idea how accurate Minute Physics is, but this is right on topic: http://www.youtube.com/watch?v=dmX1W5umC1c

Assuming their explanation of quantum physics is correct… no, we can’t make exact predictions now and never will be able to; only probabilistic forecasts.

Minute Physics is actually pretty good. I haven’t spotted any errors in any of the videos I’ve seen/used. So far at least.

Déjà vu is a glitch in the Matrix… Laurence Fishburne says so! :dubious:

Which ones do you have in mind? I don’t see the relevance of eternal inflation, and as for the rest… Well, Chronos essentially said what was to be said in the thread about that.

To the OP, there are indeed ways to interpret quantum mechanics as being informed by the future as much as by the past, but, apart from the originators of these interpretations, these views haven’t gained much traction in the community as a whole. More importantly, they are (for the most part) observationally equivalent to regular quantum mechanics (as all proper interpretations must be), so they don’t provide any means by which to predict the future other than the usual probabilistic predictions made by regular QM.

Perhaps the most widely known such interpretation is John G. Cramer’s transactional interpretation, in which what happens between events A and B, where B is to the future of A, is determined by the superposition of an ‘offer wave’ originating at A and a ‘confirmation wave’ originating at B; waves before A and after B cancel each other out.

A similar view is shared by Huw Price, whose main argument (I believe) is that retrocausality can be used to explain the ‘spooky action at a distance’ seemingly demonstrated by entangled particles across spacelike separation. If both particles are (causally) informed not only by their source A, but also by an event B lying in their shared future, then it is no wonder that particle 1 may influence particle 2: particle 1 may, by ordinary causality, influence B, which lies in its future, and B then retrocausally influences particle 2. Whether anything is gained over just viewing the entangled pair as nonlocal to some appropriate degree is, I suppose, a matter of taste…

The third such view I am familiar with is the Aharonov/Vaidman two-state-vector formalism, wherein each system is attributed not one but two states: one evolves in the usual direction in time, from past to future, while the other evolves from future to past. In this view, the usual quantum quandaries are simply an artifact of using only the past-to-future evolving state to describe a quantum system; the complete information is obtained only by considering both states. The problem with this view, however, at least in my opinion, is that it lacks explanatory force: usually, the future is explained as being caused by the past, so the reason why such-and-such happens in the future is given by the way things were. But in the two-state-vector scheme, past and future are equally fundamental, so ultimately it seems as if things are the way they are because they are the way they are. That leaves no questions open, for sure, but it also seems a little intellectually unsatisfying.
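For concreteness, here is a minimal numpy sketch of the Aharonov-Bergmann-Lebowitz (ABL) rule on which the two-state-vector formalism is built; the particular pre- and post-selected states below are just an arbitrary illustration, nothing canonical:

```python
# Toy illustration of the ABL probability rule for a qubit: the pre-selected
# state |psi> evolves forward, the post-selected state |phi> is imposed from
# the future, and intermediate outcomes depend on both.
import numpy as np

psi = np.array([1, 0], dtype=complex)                # pre-selection: |0>
phi = np.array([1, 1], dtype=complex) / np.sqrt(2)   # post-selection: |+>

# Projectors for an intermediate measurement in the |0>/|1> basis
P0 = np.array([[1, 0], [0, 0]], dtype=complex)
P1 = np.array([[0, 0], [0, 1]], dtype=complex)

def abl_probability(projectors, pre, post):
    """ABL rule: P(a) = |<phi|P_a|psi>|^2 / sum_b |<phi|P_b|psi>|^2."""
    weights = [abs(post.conj() @ P @ pre) ** 2 for P in projectors]
    total = sum(weights)
    return [w / total for w in weights]

print(abl_probability([P0, P1], psi, phi))  # -> [1.0, 0.0] for these states
```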

You really have to add “as far as we can tell.” Although Bell’s inequality does limit the ways a hidden-variable system might work, I believe that if you’re willing to toss locality under the bus, you can still get perfect predictability (if you could get at the hidden variables).
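To make that limit concrete, here is a quick numerical sketch of the CHSH form of Bell’s inequality: local hidden-variable models obey |S| <= 2, while the quantum singlet-state correlation E(a,b) = -cos(a-b) reaches 2*sqrt(2). The angles below are just the standard textbook choices:

```python
# CHSH check: local hidden-variable models are bounded by |S| <= 2,
# while the singlet state reaches 2*sqrt(2) at the optimal angles.
import numpy as np

def correlation(a, b):
    """Singlet-state correlation E(a,b) = -cos(a - b) for analyzer angles a, b."""
    return -np.cos(a - b)

# Standard optimal angle choices (radians)
a, a2 = 0.0, np.pi / 2
b, b2 = np.pi / 4, 3 * np.pi / 4

S = correlation(a, b) - correlation(a, b2) + correlation(a2, b) + correlation(a2, b2)
print(abs(S), 2 * np.sqrt(2))  # ~2.828 > 2: no *local* hidden variables fit
```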

I’ve also seen at least a partial solution using intertwined riddled basins as the main concept. For those not familiar with these nasty little bastards, an intertwined riddled basin is a system of equations where two or more attractors are tied up with each other so tightly that if you are off by even a tiny smidgen from the original point, you can’t know which attractor you will eventually land in. The general gist: since we cannot measure anything perfectly, we are always a smidgen away from what we wanted to measure, and we get something that looks like it’s random even though everything is perfectly deterministic.
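A genuinely riddled basin takes more machinery than fits in a post, but Newton’s method on z^3 = 1, which has three intertwined basins with a fractal boundary, illustrates the same gist: which attractor you land in can flip under an arbitrarily small nudge to the starting point, even though every step is perfectly deterministic. A rough sketch (the starting points near z = 0, a point on the boundary of all three basins, are my own choice):

```python
import numpy as np

roots = np.exp(2j * np.pi * np.arange(3) / 3)  # the three cube roots of 1

def which_attractor(z, steps=100):
    """Iterate Newton's map z -> z - (z^3 - 1)/(3 z^2); return the index
    of the root the orbit ends up nearest to."""
    for _ in range(steps):
        z = z - (z**3 - 1) / (3 * z**2)
    return int(np.argmin(abs(roots - z)))

# z = 0 lies on the boundary of all three basins, so starting points a
# mere 1e-6 apart (in different directions) land on different attractors:
for theta in (0.0, np.pi / 3, 2 * np.pi / 3):
    print(theta, which_attractor(1e-6 * np.exp(1j * theta)))
```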

As far as I’ve been able to glean, the intertwined-riddled-basin idea has not been ruled out yet.

However, the practical upshot is that things are still unpredictable, because we either are not allowed to look at the hidden variable, or we are not allowed to measure perfectly, even theoretically.

But you can still make statistical predictions.

As for the OP, if there is no such thing as time, there is no such thing as “the future” to predict.

That’s the way it used to be as far as anyone knew, but in fact there’s a recent result by Colbeck and Renner establishing that there is no theory that both reproduces quantum mechanics to the extent it has been confirmed and gives (on average) more exact predictions. Here’s a somewhat pedagogical discussion of the matter.

There are also some related results by Montina, and by Pusey, Barrett, and Rudolph, showing respectively that any hidden-variable theory needs at least as many parameters as quantum mechanics to characterize a system, and that wave functions and hidden-variable states must in fact be in one-to-one correspondence, thus establishing that quantum theory cannot be understood as the ‘statistical’ theory of some more fundamental realistic theory (in the way that, for instance, Liouville mechanics can be understood as the statistical theory of classical phase-space mechanics). The latter ‘PBR theorem’ in particular has attracted much discussion recently.

A quick perusal of the result seems to indicate that they indirectly make locality a prerequisite by insisting on freedom of choice. I don’t know when I’ll have time to really dig into this new result to see if the assumption and the resulting conclusion are justified. Can anyone sum this up?

Déjà vu (along with jamais vu and presque vu) has already been explained. It’s “just” wires getting crossed: a glitch in your brain, which hiccups when processing current sensory input and storing it in short-term memory, then processes it again a microsecond later, gets all confused, and falls back to treating it as information retrieved from memory instead of current sensory input in a desperate bid for coherence/seamless experience. Live bug-fixing, MacGyver style.
Or something like that; I forget the details, and I’m no brain surgeon to begin with. My point is, the phenomenon lies inside your own internal time-meter, not in the structure of time itself. It’s fascinating, but not in the way you seem to think.

Freedom of choice doesn’t imply locality; it just means that the experimenter’s choice of measurement is not predetermined. This leaves open a ‘superdeterminism’ loophole, which also exists for Bell tests: stipulating that the experimenter is not free gets you out of the conclusion that local realistic theories are incompatible with QM. But the experimenter can be free or unfree in both local and nonlocal frameworks.

CoastalMaineiac, if you happen to have Amazon streaming, Nova has a four-part series called ‘The Fabric of the Cosmos’.

They do a really good job of breaking all your questions down for the lay person.

My personal favorite was episode 1: “What is space?”

The more you think about it, the further down the rabbit hole you fall.

Is it turtles all the way down ?

I’m adding it to my watch list. Incidentally, it looks like you can watch it free from PBS, for anyone else who’s interested.

That’s about how I originally understood things too. But I thought current thinking assumes a direct link between freedom of choice and locality; even the originally linked result seems to imply as much.

How do you figure? Their assumption of freedom merely concerns the factorizability of probability distributions: some event A is free with respect to some set of events G if P(G,A) = P(G)*P(A), meaning that knowing everything about G doesn’t give you any knowledge about A. This makes no reference to space-time structure; the authors do introduce a causal structure in order to define the relation ‘event X may be a cause of A’, but they keep it completely general, i.e. their causality may be nonlocal, atemporal, even retrocausal in principle. So I don’t see any hidden locality assumption at all…
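In code, that freedom condition is nothing more than statistical independence; a toy check with made-up numbers:

```python
# 'Freedom' as factorizability: knowing G tells you nothing about A.
# Here A is the (binary) measurement choice and G some (binary) prior event.
import numpy as np

P = np.array([[0.3, 0.3],   # rows: G = 0, 1; columns: A = 0, 1
              [0.2, 0.2]])  # P[g, a] is the joint probability P(G=g, A=a)

P_G = P.sum(axis=1)         # marginal over A
P_A = P.sum(axis=0)         # marginal over G
print(np.allclose(P, np.outer(P_G, P_A)))  # True: A is 'free' w.r.t. G
```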