New Physics in Muon Experiment?

This article says:

“The kind of precision that these people have managed to attain is just staggering,” said Dan Hooper, a theoretical cosmologist at the University of Chicago who was not involved in the work. “There was a lot of skepticism they would get here, but here they are.”

But whether the measured g-2 matches the Standard Model’s prediction has yet to be determined. That’s because theoretical physicists have two methods of computing g-2, based on different ways of accounting for the strong force, which binds together protons and neutrons inside a nucleus.

The traditional calculation relies on 40 years of strong-force measurements taken by experiments around the world. But with this approach, the g-2 prediction is only as good as the data that are used, said Aida El-Khadra, a theoretical physicist at the University of Illinois Urbana-Champaign and a chair of the Muon g-2 Theory Initiative. Experimental limitations in that data, she said, can make this prediction less precise.

A newer technique called a lattice calculation, which uses supercomputers to model the universe as a four-dimensional grid of space-time points, has also emerged. This method does not make use of data at all, Dr. El-Khadra said. There’s just one problem: It generates a g-2 prediction that differs from the traditional approach.

“No one knows why these two are different,” Dr. Keshavarzi said. “They should be exactly the same.”

Compared with the traditional prediction, the latest g-2 measurement has a discrepancy of over 5-sigma, which corresponds to a one in 3.5 million chance that the result is a fluke, Dr. Keshavarzi said, adding that this degree of certainty was beyond the level needed to claim a discovery. (That’s an improvement from their 4.2-sigma result in 2021, and a 3.7-sigma measurement done at Brookhaven National Laboratory near the turn of the century.)

But when they compared it with the lattice prediction, Dr. Keshavarzi said, there was no discrepancy at all.

Rarely in physics does an experiment surpass the theory, but this is one of those times, Dr. Pitts said. “The attention is on the theoretical community,” he added. “The limelight is now on them.”

Dr. Binney said, “We are on the edge of our seats to see how this theory discussion pans out.” Physicists expect to better understand the g-2 prediction by 2025.

Gordan Krnjaic, a theoretical particle physicist at Fermilab, noted that if the experimental disagreement with theory persisted, it would be “the first smoking-gun laboratory evidence of new physics,” he said. “And it might well be the first time that we’ve broken the Standard Model.”
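As a quick sanity check on the sigma-to-probability figures quoted above, here’s a rough conversion using the one-sided tail of a normal distribution (the usual particle-physics convention; this is just a back-of-the-envelope check, not how the collaboration quotes its numbers):

```python
# Convert a significance in "sigma" to a tail probability, one-sided
# normal convention.  5 sigma comes out to roughly 1 in 3.5 million,
# matching the figure quoted in the article.
from scipy.stats import norm

for sigma in (3.7, 4.2, 5.0):
    p = norm.sf(sigma)  # one-sided tail probability
    print(f"{sigma} sigma -> p = {p:.2e}  (about 1 in {1 / p:,.0f})")
```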

Anyone have more insight into this? A real clue to something new, or a hyped result?

Some of the articles are being pretty clickbaity about it:

Yes, a fifth force is one possible explanation, but this is nowhere close to saying scientists are closing in on it.

I’d be interested in possible explanations for why the lattice calculation differs from the e+e- (data-driven) calculation. The new Fermilab measurement agrees with the lattice calculation, but not with the e+e- one. Just coincidence, or is the lattice method for some reason better than the e+e- method? I haven’t seen anything on this yet.

Heh. I remember the 5th-force pre-internet clickbaiting in the 1980s:

https://www.nytimes.com/1986/10/17/us/physicists-challenge-theory-of-a-fifth-force-beyond-gravity.html

For better or worse, the Standard Model has proven to be an incredibly resilient description of the universe. No one likes this situation because the SM doesn’t incorporate gravity, or dark matter, and because it seems a bit too messy to be the “true” description of the universe. And yet no one has yet found an unambiguous violation of it. Lots of possibilities over many decades, but nothing that’s persisted. And all of the experimental gaps have been filled in, with no surprises (the Higgs boson was the last one).

Physicists hope that they’ll find some small violation that will give them a crack to wedge open and unlock a new description. And it seems like it must, since there are still those open problems. It just hasn’t happened yet.

Eh, there have been changes to the Standard Model (especially due to various advances in neutrino physics). For instance, lepton number might not actually be conserved, or indeed even be well-defined, if neutrinos are Majorana particles (which I think is where the current weight of the evidence lies). And even if they’re not, neutrino oscillation has definitively proven that the lepton flavors aren’t individually conserved.

Meanwhile, our understanding of the Strong Interaction is vague at best, and so any sufficiently sensitive experiment that probes it is going to turn up new physics. It can’t just confirm what we already expected, because we don’t know what to expect.

True; finding that neutrinos were not massless was a pretty big tweak. Still, it didn’t seem to change anything fundamental, in the way something like supersymmetry would have.

I agree that lots of stuff is poorly understood. But it might well be that it’s essentially impossible to probe further. The usual assumption is that the SM is a low-energy approximation of some other model, where spontaneous symmetry breaking gave us the various particle masses and such. But what if it ends up being valid to energies where we’d need an accelerator the size of the universe to probe? We’ll never get the answers. And so far we have very little evidence that’s not the case.

We might not be able to probe the True High-Energy Regime, at least, not with any foreseeable technology. But there’s still plenty of room for probing, in things like figuring out the details of the low-energy version of the Strong Interaction. Not only are there plenty more experiments we can do, there are plenty of experiments we’ve already done that we can’t even fully interpret. I mean, a few hundred isotope masses ought to be plenty enough data points for fitting a sensible model to, but we can’t even accomplish that.
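(For what it’s worth, the purely phenomenological version of that fit is the easy part; it’s getting the masses out of the Strong Interaction itself that we can’t do. Here’s a rough sketch of the easy version using the old semi-empirical mass formula. The handful of binding energies below are approximate placeholders; a real fit would use a few hundred rows from a measured mass table, and pairing and shell corrections are omitted, so this only demos the mechanics.)

```python
# Least-squares fit of the semi-empirical (Bethe-Weizsacker) mass
# formula to a few approximate binding energies.  Purely a sketch of
# what "fitting a model to isotope masses" looks like at the crudest
# phenomenological level, not a serious fit.
import numpy as np
from scipy.optimize import curve_fit

def binding_energy(ZA, aV, aS, aC, aA):
    """Liquid-drop binding energy (MeV) for proton number Z, mass number A."""
    Z, A = ZA
    return (aV * A                                   # volume term
            - aS * A ** (2.0 / 3.0)                  # surface term
            - aC * Z * (Z - 1) / A ** (1.0 / 3.0)    # Coulomb repulsion
            - aA * (A - 2 * Z) ** 2 / A)             # symmetry term

# Approximate (Z, A, total binding energy in MeV) for a few nuclides.
Z = np.array([ 8., 20., 26., 28.,  50.,   82.,   92.])
A = np.array([16., 40., 56., 62., 120.,  208.,  238.])
B = np.array([127.6, 342.0, 492.3, 545.3, 1020.5, 1636.4, 1801.7])

coeffs, _ = curve_fit(binding_energy, (Z, A), B, p0=[15.8, 18.3, 0.71, 23.2])
print(dict(zip(["aV", "aS", "aC", "aA"], np.round(coeffs, 2))))
for z, a, b, f in zip(Z, A, B, binding_energy((Z, A), *coeffs)):
    print(f"Z={z:2.0f} A={a:3.0f}  measured {b:7.1f} MeV   fitted {f:7.1f} MeV")
```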

And while the True High-Energy Regime might be entirely out of our reach, I wouldn’t say that the evidence leans that way. We can measure how the coupling constants change with energy for the energies we can probe, and extrapolate those changes out. The “Grand Unification Energy” you find that way is plenty large, 4.5 orders of magnitude larger than even the Oh My God particle, but it’s still three orders of magnitude below the Planck energy. That’d almost certainly be accessible to a Type III civilization, maybe even a Type II (which, granted, we aren’t and won’t be for a while). Certainly far short of “an accelerator the size of the Universe”.
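Rough numbers behind that comparison, assuming the usual ballpark of ~1e16 GeV for the extrapolated unification scale (it varies by model):

```python
# Back-of-the-envelope orders-of-magnitude check.  The GUT scale here
# is the common ~1e16 GeV ballpark from running-coupling extrapolation,
# not a precise value.
import math

E_GUT    = 1e16 * 1e9      # assumed GUT scale, in eV
E_OMG    = 3.2e20          # "Oh-My-God" cosmic-ray particle, eV
E_PLANCK = 1.22e19 * 1e9   # Planck energy, eV

print(f"GUT / OMG particle : 10^{math.log10(E_GUT / E_OMG):.1f}")     # ~10^4.5
print(f"Planck / GUT       : 10^{math.log10(E_PLANCK / E_GUT):.1f}")  # ~10^3.1
```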

Not with that attitude.

Here is an article in Nature News that discusses these results, as well as some other recent results.

Not very encouraging in the “new physics” realm:

Dreams of new physics fade with latest muon magnetism result (nature.com)

The muon’s magnetism is still strong. Its most precise measurement yet is in line with a series of earlier results — and seals an embarrassing discrepancy with decades of theoretical calculations that had predicted a slightly weaker magnetism for the elementary particle.

But although the odd behaviour of the muon — a heavier cousin of the electron — was once seen as a possible omen of new physics, results in the past two years suggest that the theory side might not need major amendments after all.

Thanks. Am I correct in thinking that the traditional method took a shortcut by using experimental results in intermediate calculations, while the lattice method starts from scratch and calculates everything from the ground up, and that the discrepancy is therefore suspected to come from some of those earlier experimental results not being as good as thought?

My reading of this is that yeah, the theorists have gone back to their calculations and there’s a growing consensus that the lattice approach is more accurate. Plus, there’s a separate experiment, independent of the Fermilab one, that supports the new value.
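If “lattice” sounds abstract, here’s a toy sketch of the general idea: a 1D Euclidean path-integral Monte Carlo for a harmonic oscillator. It’s nothing like real lattice QCD (which puts gauge and quark fields on a 4D grid and needs supercomputers), but it shows the recipe the article alludes to: discretize spacetime, sample configurations weighted by exp(-S), and average observables over the samples.

```python
# Toy "lattice" calculation: 1D Euclidean path-integral Monte Carlo for
# a quantum harmonic oscillator (units hbar = m = 1).  Time is cut into
# N sites with spacing a, paths are sampled with weight exp(-S) via
# Metropolis updates, and <x^2> is averaged over the samples.
import numpy as np

rng = np.random.default_rng(0)

N, a, omega = 120, 0.5, 1.0   # sites, lattice spacing, oscillator frequency
eps = 1.4                     # Metropolis proposal width
n_sweeps, n_therm = 5000, 500

x = np.zeros(N)

def delta_S(x, i, new):
    """Change in the discretized action S = sum[(dx)^2/(2a) + a*w^2*x^2/2]
    when site i moves from x[i] to `new` (periodic boundaries)."""
    ip, im = (i + 1) % N, (i - 1) % N
    old = x[i]
    kinetic = ((x[ip] - new) ** 2 + (new - x[im]) ** 2
               - (x[ip] - old) ** 2 - (old - x[im]) ** 2) / (2.0 * a)
    potential = 0.5 * a * omega ** 2 * (new ** 2 - old ** 2)
    return kinetic + potential

x2 = []
for sweep in range(n_sweeps):
    for i in range(N):
        new = x[i] + rng.uniform(-eps, eps)
        if rng.random() < np.exp(-delta_S(x, i, new)):   # Metropolis accept
            x[i] = new
    if sweep >= n_therm:
        x2.append(np.mean(x ** 2))

print("lattice   <x^2> =", round(float(np.mean(x2)), 3))
print("continuum <x^2> =", 1 / (2 * omega),
      "(agreement is up to O(a^2) discretization effects)")
```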

The trouble is that GUTs tend to predict lots of things that haven’t been observed, like magnetic monopoles, proton decay, topological defects, and other things (it varies by theory). So it’s unclear if they have anything to do with reality. Sure, we haven’t necessarily fully tested the bounds here, but so far it’s mostly a bust. It’s possible that if unification exists, it only happens as a TOE that incorporates gravity, which probably does have to reach Planck scale. Ok, maybe “universe” is a bit of hyperbole, but “galaxy” isn’t that much better.

Predicting all the isotope masses sounds fine, but there’s not much reason to believe it’ll tell us much. It may not even lead to any additional intuition if it all comes from simulation.