Can you summarize the incompatibility?
The very quick version is: no one knows how to put different curved spacetimes into superposition. A particle over here curves space differently than a particle over there, and if the two particle states are in superposition then so must be the two spacetimes. But the math doesn’t work out.
Wikipedia’s article on quantum gravity says that a big problem is renormalization: making intractable infinities in the math go away.
There’s also the “Problem of time”
Renormalization is sort of a dirty trick in the first place. The idea is that if you get infinities in your theory, maybe you can get rid of them by assuming there’s some cutoff for distance or energy or something.
That makes sense if you have an idea for what exists below that scale. Like if you had a theory of gases that only worked above a cutoff. That’s fine because we know that at small scales, gases are just atoms bouncing around and are no longer smooth. So we should expect the smooth theory to break down.
But is a cutoff justified when you have no idea what goes on below it? Maybe. The math works. It’s just a bit suspicious. And anyway, the trick doesn’t work for GR.
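To make the cutoff idea concrete, here’s a toy integral (just an illustration I’m making up, nothing specific to gravity or QFT): the unregulated quantity diverges, but once you impose a cutoff Λ, differences of regulated quantities stay finite even as Λ goes to infinity.

```latex
% Toy cutoff regularization: the bare integral diverges, but with a cutoff
% \Lambda in place, differences of regulated quantities remain finite.
\int_0^{\infty} \frac{dk}{k+m} = \infty,
\qquad
\int_0^{\Lambda} \frac{dk}{k+m} = \ln\frac{\Lambda+m}{m},
\qquad
\int_0^{\Lambda} \frac{dk}{k+m_1} - \int_0^{\Lambda} \frac{dk}{k+m_2}
  \;\xrightarrow{\;\Lambda\to\infty\;}\; \ln\frac{m_2}{m_1}.
```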
Really? So we should just throw away all of differential calculus because it is fundamentally based on a dirty trick?
That’s absolutely not what he said.
But differential calculus would not work without the type of renormalization it uses.
Renormalization goes a step beyond that. It’s more like concluding that 1-1+1-1+1-… = 1/2, because if you take the limit of 1-x+x^2-x^3+x^4-… as x->1, you get 1/2 (since for all |x| < 1 it converges to 1/(1+x)).
That’s actually true in a sense, but it’s sorta odd. Not like the convergence we see in integrals.
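A quick numerical sanity check of that limit (just an illustration, not a proof; the function name is mine):

```python
# Partial sums of 1 - x + x^2 - x^3 + ... approach 1/(1+x) for 0 <= x < 1,
# and the closed form tends to 1/2 as x -> 1.
def alternating_series(x, terms=10_000):
    return sum((-x) ** n for n in range(terms))

for x in (0.9, 0.99, 0.999):
    print(x, alternating_series(x), 1 / (1 + x))
# Both the partial sums and the closed form creep toward 0.5 as x approaches 1.
```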
Our understanding of renormalization is not stuck at the 1960s level, and I would not dismiss it as dirty chicanery, but the point is that quantum gravity is not renormalizable in the standard quantum field theory sense.
For the record, calling it “sort of a dirty trick” doesn’t mean I think we should abandon renormalization or that it’s invalid. It might be a sign that there’s new physics at a higher energy scale. Or it might not. Nature doesn’t owe us clean explanations.
Perhaps the most famous conjecture in math–the Riemann hypothesis–depends on analytic continuation, to the point that even the “trivial” zeroes appear to produce nonsense results, such as ζ(-2) = 1 + 4 + 9 + 16 + … = 0 (i.e., the sum of the squares of all natural numbers is 0). Of course mathematicians have been comfortable with this for well over a century and the ideas have all been worked out robustly. But applying this strange version of equality to physics is a little less comforting.
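For what it’s worth, the continued values are easy to poke at numerically with a library like mpmath, which implements the analytic continuation (shown here only to illustrate that the continuation is a perfectly well-defined function, whatever one thinks of writing it as a “sum”):

```python
from mpmath import zeta

print(zeta(2))    # pi^2/6 ~ 1.6449..., where the defining series converges
print(zeta(-2))   # 0.0, a "trivial" zero; naively "1 + 4 + 9 + ..."
print(zeta(-1))   # -1/12, the value naively assigned to 1 + 2 + 3 + ...
```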
I suppose I should add that I’m to some extent conflating regularization with renormalization. They’re closely related but not quite the same thing. Regularization is (handwaving a bit here) “apply a cutoff which you then take to infinity/zero, allowing you to sum the divergent series”, or sometimes “relate the divergent sum to one that has been analytically continued into divergent regions, as with the zeta function”. Renormalization is “subtract off an appropriate counterterm from each term so that the whole thing converges, which is totally OK because you weren’t allowed to measure the original series in the first place”.
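To make that split concrete with the simplest toy I know (nothing to do with any particular field theory): damp the divergent sum 1 + 2 + 3 + … with a regulator e^{-εn}, then subtract the piece that blows up as ε → 0; the finite remainder is the familiar −1/12.

```latex
% Regularization: the damping factor makes the sum finite for \varepsilon > 0.
% Renormalization (in this toy sense): subtract the divergent 1/\varepsilon^2
% "counterterm" and keep the finite part.
\sum_{n=1}^{\infty} n\, e^{-\varepsilon n}
  = \frac{e^{-\varepsilon}}{\left(1 - e^{-\varepsilon}\right)^{2}}
  = \frac{1}{\varepsilon^{2}} - \frac{1}{12} + O(\varepsilon^{2}),
\qquad
\sum_{n=1}^{\infty} n\, e^{-\varepsilon n} - \frac{1}{\varepsilon^{2}}
  \;\xrightarrow{\;\varepsilon \to 0\;}\; -\frac{1}{12}.
```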
It should perhaps be noted that (at least some of) the infinities that seem troubling in quantum field theory actually already arise in the classical treatment—the self-energy of any classical point charge is divergent. So it’s not clear that this is intrinsically a problem of QFT.
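For reference, that classical divergence is a one-line computation: the energy stored in the Coulomb field of a point charge q outside a small radius a (SI units) blows up as a goes to zero.

```latex
% Field energy of a point charge outside radius a:
U(a) = \int_{a}^{\infty} \frac{\varepsilon_{0}}{2}
       \left(\frac{q}{4\pi\varepsilon_{0} r^{2}}\right)^{2} 4\pi r^{2}\, dr
     = \frac{q^{2}}{8\pi\varepsilon_{0}\, a}
     \;\xrightarrow{\;a \to 0\;}\; \infty .
```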
One point of view is then to consider the quantities present in the usual formulation of the theory to be ‘bare’ quantities (something like what e.g. the mass of the electron would be if we could turn off the electromagnetic interaction, which we can’t), which aren’t physically meaningful; so you have to use appropriate additional quantities—counterterms—to replace these with the quantities you’d actually expect to observe in experiment. This subtraction is the renormalization procedure, which basically just brings these fictitious ‘bare’ quantities into alignment with observable reality.
But the problem is that this procedure generally yields expressions of the form \infty - \infty, which is ill-defined. To get around this, one introduces a cutoff scale, such that the calculated quantities are finite, and takes that cutoff to infinity after having done the subtraction—this is the regularization procedure, with the cutoff scale being the regulator.
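Schematically, and only schematically (the symbols here are placeholders, not any specific theory’s counterterms): with the regulator Λ in place, the bare quantity and the counterterm are each cutoff-dependent and divergent, but they are tuned so that their sum stays pinned to the measured value.

```latex
% Structure of the subtraction: both pieces diverge as the regulator is
% removed, but their sum is held at the observed value.
m_{\text{phys}} = m_{0}(\Lambda) + \delta m(\Lambda),
\qquad
\text{both terms diverge as } \Lambda \to \infty,
\quad \text{but } m_{\text{phys}} \text{ is held at the measured value.}
```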
Now, all of this is a perfectly rigorous scheme, provided that there actually is some physical cutoff present—such as when you’re doing statistical physics, and neglecting the fact that your description is valid only up to the scale of individual atoms, roughly. So if QFT is some effective theory valid up to, say, Planck distances, with the full high energy physics given by some different theory, strings or loops or what have you, there’s nothing dodgy about renormalization (this is sometimes called the ‘effective field theory’-view).
It may be less clear what to make of these notions if QFT is supposed to be a fundamental theory, although here, too, the understanding of renormalization has progressed greatly thanks to the notion of the renormalization group, which essentially yields a kind of ‘self-similar’ behavior of the theory across various distance scales, so that the cutoff distance doesn’t really matter. But of course, the difficulties of bringing gravity into the fold here may already be enough to persuade one that QFT might not be the final word.
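A sketch of what “running across scales” means in formulas (b_0 here is just an illustrative positive constant and this is the generic one-loop form, not a claim about any particular theory): a coupling g changes with the probing scale μ according to a beta function, and that flow equation can be integrated.

```latex
% Generic one-loop running: the flow equation and its integrated form.
\mu \frac{dg}{d\mu} = \beta(g) = -b_{0}\, g^{3}
\quad\Longrightarrow\quad
g^{2}(\mu) = \frac{g^{2}(\mu_{0})}{1 + 2 b_{0}\, g^{2}(\mu_{0}) \ln(\mu/\mu_{0})}.
```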
As for gravity, while it’s not renormalizable as a quantum theory in the usual sense, that doesn’t necessarily mean that no such theory exists—but in effect, you’d end up with infinitely many quantities you’d have to renormalize, each needing some experimental input to obtain the real physical quantity. That makes such a theory useless for any practical purpose.
But it might yet be that there are some ways around this difficulty—certain theories exhibit a property called ‘asymptotic safety’, where the renormalization group has a fixed point, getting rid of unphysical divergences. This is similar to the asymptotic freedom of quantum chromodynamics (the theory of the strong nuclear force), where that fixed point is just the trivial one of 0—meaning, the theory essentially becomes noninteracting (‘free’) at high energies, thus getting rid of the self-interaction troubles.
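In symbols, and still very schematically (reusing the toy one-loop beta function from the sketch above): a fixed point is a zero of the beta function; asymptotic freedom is the trivial zero at g = 0, while asymptotic safety would be a nontrivial one.

```latex
% Fixed points of the renormalization-group flow:
\beta(g^{*}) = 0, \qquad
g^{*} = 0 \;\; \text{(asymptotic freedom, e.g. QCD: } \beta(g) \approx -b_{0} g^{3},\ b_{0} > 0\text{)},
\qquad
g^{*} \neq 0 \;\; \text{(asymptotic safety).}
```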
This misrepresents analytic continuation a little. The process is like:
- Here is a series of functions that converges only for a part of the complex plane.
- But analytic functions on the complex plane are constrained in such a way that there can be only one that is equal to the series where it converges.
- Therefore the series defines a function on (essentially) the whole complex plane (minus any poles), despite only converging on part of it.
No one goes back to the original series and says that, because of the analytic continuation, the original series actually converges in a meaningful way outside of its actual domain of convergence. Or at least, if they do, that’s a fairly non-standard analysis.
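The standard textbook example of that uniqueness step (nothing exotic, just the geometric series): the series only converges on the unit disc, but the function it defines there has exactly one analytic extension beyond it.

```latex
% Geometric series: converges only for |z| < 1, yet pins down a unique
% analytic function on the punctured plane.
\sum_{n=0}^{\infty} z^{n} = \frac{1}{1-z} \quad \text{for } |z| < 1,
\qquad
f(z) = \frac{1}{1-z} \ \text{is analytic on } \mathbb{C}\setminus\{1\}.
```

And nobody claims the series itself converges at, say, z = 2, even though f(2) = -1.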
Perturbative renormalization calculations seem rather technical (or “dirty”, as Dr.Strangelove puts it), but that could be down to a lack of mathematical sophistication, mine certainly, but maybe more generally. That’s not saying much, but perhaps there are indeed some ways around it. In addition, gravity can be treated as an effective field theory.
Well yes, you can model space-time as a spin foam (or similar) rather than as something smooth, because why does it have to be smooth at Planck-length scales?