Can someone explain this front page NYT article on a CERN particle-that-wasn't?

See subject: The Particle That Wasn’t - The New York Times

One of the most maddening, and most representative, articles imaginable.

Nice (for a newspaper) on the methods and ideas of experiments in physics. Incomprehensible on the exact topic.

This " ‘bump(let)’ in gamma rays" was thought to herald something, a particle, as NYT-worthy as the Higgs…but they figure that’s all we need to know.

So what’s up?

Many theories at the leading edge of physics make predictions for new subatomic particles with specific properties. Unlike the subatomic particles that most people are familiar with - protons, neutrons and electrons - these particles are not “stable”. They decay into other particles in fractions of a second. So, you can’t look for these particles in ordinary life or ordinary matter, you have to build special machines in order to produce them and then gather evidence that they exist, before they decay.

A particle accelerator like the Large Hadron Collider accelerates ordinary particles (protons, in the case of the LHC) to very high velocities, and then collides them head on. The collision results in a shower of debris as new particles are produced directly from the kinetic energy of the colliding particles, according to E=mc^2.

An equivalent form of the same equation is m = E/c^2, and this says that the mass of the particles that can be produced scales linearly with the energy in the collision. This means that particle accelerators at low energies will only produce particles with low mass. Humanity has already fairly thoroughly explored the particles that have been produced in accelerators with energies less than approximately 1 TeV.
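
To put numbers on that relationship, here is a back-of-the-envelope sketch of my own (not anything from the article), converting collision energy to an equivalent mass:

```python
# Back-of-the-envelope: the heaviest particle a collision could in
# principle create scales linearly with collision energy, m = E / c^2.
C = 299_792_458.0          # speed of light, m/s
EV_TO_J = 1.602176634e-19  # one electronvolt in joules

def mass_equivalent_kg(energy_tev):
    """Mass (kg) equivalent to a collision energy given in TeV."""
    return energy_tev * 1e12 * EV_TO_J / C**2

PROTON_MASS_KG = 1.67262192e-27
for e_tev in (1.0, 13.0):
    m = mass_equivalent_kg(e_tev)
    print(f"{e_tev:>5} TeV -> {m:.2e} kg (~{m / PROTON_MASS_KG:,.0f} proton masses)")
```

In practice only a fraction of the nominal collision energy is available to create any one new particle, so the real reach is lower, but the linear scaling is the point.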

The LHC is the first accelerator we’ve built with energies above the 1-2 TeV range (the LHC’s design energy is 14 TeV; it has been running at 13 TeV), and so people are looking to see whether any new particles appear at these new, higher energies. These results will provide experimental evidence to confirm or refute various competing theories attempting to resolve unanswered questions about particle physics, general relativity, cosmology, etc.

However, actually looking for these particles is fairly subtle, for two reasons. The first is that these particles interact with normal matter only very weakly (which means they are very difficult to detect in the first place), and they usually decay far faster than they could be detected even if that were not the case. So, particle accelerators are paired with large detectors that record essentially just the decay products that we are able to see easily. Physicists calculate what spectrum and properties of decay products would result from the theoretical particles they’re looking for, and then determine whether this spectrum matches what they actually see in the experiment. So, detection of the particles they’re looking for is very indirect.
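
To make “indirect” concrete, here is a toy version of one such calculation (my own sketch, with made-up numbers; not the experiments’ actual code). If a hypothetical particle decays into two photons, the detector measures each photon’s energy and direction, and from those you compute the “invariant mass” of the pair, which for massless photons is m² = 2·E1·E2·(1 − cos θ). Pairs that really came from a parent particle cluster at the parent’s mass; random background pairs form a smooth spectrum.

```python
import math

def diphoton_invariant_mass(e1_gev, e2_gev, opening_angle_rad):
    """Invariant mass (GeV/c^2) of a pair of massless photons:
    m^2 = 2 * E1 * E2 * (1 - cos(theta))."""
    return math.sqrt(2.0 * e1_gev * e2_gev * (1.0 - math.cos(opening_angle_rad)))

# Hypothetical event: two high-energy photons nearly back to back.
print(diphoton_invariant_mass(400.0, 380.0, math.radians(160.0)))  # ~768
```

A histogram of this quantity over millions of events is exactly the kind of plot where a “bump” would show up.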

The second reason is that particle physics, like all of quantum mechanics, is probabilistic. There is no way to calculate whether a specific event (e.g. the production of a particle) will definitely happen; you can only calculate the probability that it will happen. And the probability of producing most of the particles that physicists are now looking for is very small. This means that you have to perform a lot of collisions in order to get a statistically meaningful result that says you either have produced, or have not produced, a specific particle. And you can never be absolutely sure that your conclusion is valid - you can only state that the chance you have made an error is very small.

If you have ever studied probability, you will know that when drawing conclusions, your chance of making an error is higher when you have a smaller number of samples. If you are trying to determine if a coin is biased, 4 flips out of 6 that land on heads is not very good evidence, but 40 flips out of 60 is much better, and 4000 flips out of 6000 is quite convincing.
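
Putting numbers on the coin example (my own sketch; treating “convincing” as a one-sided binomial tail probability is an assumption on my part):

```python
from math import comb

def p_at_least(heads, flips):
    """Chance a fair coin gives at least `heads` heads in `flips` flips."""
    return sum(comb(flips, k) for k in range(heads, flips + 1)) / 2**flips

for heads, flips in ((4, 6), (40, 60), (4000, 6000)):
    print(f"{heads}/{flips}: p = {p_at_least(heads, flips):.2e}")
# Prints roughly 0.34, 6.7e-3, and ~6e-150: no evidence at all,
# decent evidence, and overwhelming evidence, respectively.
```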

This is essentially what happens at the LHC occasionally - they perform a few trillion collisions, analyze their data, and they see a slight statistical “bump” in some data signal from the detector that is consistent with some new particle that someone has predicted. They get excited about this result, but they know that they have not performed enough collisions to state with certainty that it is real - it might still just be a statistical fluke. When they perform additional collisions, the “bump” either disappears (because it was just a statistical fluke), or is confirmed (because it is a real discovery, e.g. the Higgs boson).

The standard for “getting excited” about a result at the LHC is a significance of 3 sigma, which basically means there is about 1 chance in 740 that a pure statistical fluke would produce a signal at least that strong. The standard for announcing a confirmed result (the actual discovery of a particle) is 5 sigma, which corresponds to roughly 1 chance in 3.5 million. Given the number of different analyses going on at the LHC, a few 3-sigma false alarms over the life of the project are entirely expected.
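
For reference, the sigma-to-probability conversion is just the area in the tail of a Gaussian (a quick sketch; I’m using the one-sided convention particle physicists typically quote):

```python
from math import erfc, sqrt

def one_sided_p(sigma):
    """One-sided Gaussian tail probability for a given significance."""
    return 0.5 * erfc(sigma / sqrt(2.0))

for s in (2.0, 3.0, 5.0):
    p = one_sided_p(s)
    print(f"{s} sigma -> p = {p:.2e} (about 1 in {1 / p:,.0f})")
# 2 sigma -> about 1 in 44, 3 sigma -> about 1 in 740,
# 5 sigma -> about 1 in 3.5 million.
```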

In brief: With the discovery of the Higgs particle, every particle currently predicted by well-established theories has been found. There are a number of hypotheses for new physics out there, and it is hoped/expected that CERN might find evidence that supports one or more of them. Earlier this year, there was some evidence that a new particle had been found. This evidence was the appearance of a bump: a greater number of gamma rays in a particular energy range than expected. Such a “bump” is evidence that an unknown particle had been created and had then decayed into gamma rays (since a new particle would have a definite mass, its decay photons would carry energy equivalent to that mass). But with the collection of more data, the bump went away, indicating that it was caused by chance and not by the existence of a new particle. (Or that’s my take on it, anyway…)

ETA: Looks like I was ninja’ed by Absolute who has a more complete answer…

It wasn’t even 3 sigma. The article (which I had just read prior to signing on to TSD) said 1 chance in 93. That must be 2 sigma. Too bad.

And while we’re at it, a new particle other than the Higgs, if it really were detected, would be even bigger news than the Higgs. Finding what we expect is great and all, but finding what we don’t expect is what really gets scientists excited.

It was said before the LHC fired up that the most boring thing it could possibly find would be the Higgs and nothing else. Unfortunately, that’s all we have so far.

By the way, if you are interested in learning more about this, here is an excellent Fermilab public lecture by Sean Carroll of Caltech on the Higgs boson, LHC, and particle physics in general. In particular, it contains probably the best description of why colliding two particles at high velocities can produce entirely new particles.

(first 19 seconds of video are silent for some reason)

Aside: 1 in 93 is around 2.6 sigma (two-sided); 2 sigma is roughly 1 in 20 on the same convention.

The actual quantitative significance of the earlier result depends on whether you look at each experiment independently, whether you assume you know “where” to look (and for “what”), or whether you assume something new could appear anywhere and could look like anything. All told, the broader point holds: it was just a hint, and that hint has faded in short order.
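
A toy Monte Carlo makes that “could appear anywhere” caveat concrete (entirely my own construction, with made-up numbers): if a bump could have shown up in any of, say, 100 mass bins, a locally-3-sigma fluctuation somewhere is not rare at all.

```python
import numpy as np

rng = np.random.default_rng(0)

N_BINS = 100      # hypothetical number of places a bump could appear (made up)
N_TOYS = 20_000   # simulated background-only "experiments"

# Idealized model: under the background-only hypothesis, each bin's
# significance is an independent standard normal draw. Record the
# largest excess seen anywhere in each toy experiment.
z = rng.standard_normal((N_TOYS, N_BINS))
frac = (z.max(axis=1) >= 3.0).mean()

print(f"Toys with a 'local 3 sigma' bump somewhere: {frac:.3f}")
# A single pre-chosen bin reaches 3 sigma with p ~ 0.00135, but scanning
# 100 bins gives 1 - (1 - 0.00135)**100 ~ 0.13, i.e. about one toy in eight.
```

That gap is why experiments distinguish “local” from “global” significance.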

Exactly. The exciting thing about the prospective detection was that it wasn’t something anybody had on the cards—and thus, it would’ve been something offering potential guidance on how to construct new theories going beyond those we currently have. (However, it seems it wasn’t terribly hard to write down models accompanying the hypothetical new particle—at last count, there were something like 540 preprints concerning the bump submitted to arXiv.)

Also, what was particularly intriguing was that the bump appeared not just in one of the experiments: both the ATLAS and CMS detectors showed an excess at the same mass-energy, which, since the two experiments are independent, significantly boosted the odds of a true detection (however, coming up with the actual probabilities here is a challenging matter, one factor being that a lot of analyses are carried out simultaneously).
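
As a very simplified illustration of why two independent hints are worth much more than one (my own sketch; the z-scores below are made-up stand-ins, not the actual ATLAS/CMS figures, and the real combination is far more involved):

```python
from math import erfc, sqrt

def one_sided_p(z):
    """One-sided Gaussian tail probability."""
    return 0.5 * erfc(z / sqrt(2.0))

# Hypothetical local significances for the two detectors (made-up numbers).
z_atlas, z_cms = 2.6, 2.4

# Stouffer's method for combining two independent z-scores.
z_combined = (z_atlas + z_cms) / sqrt(2.0)
print(f"Combined: {z_combined:.2f} sigma, one-sided p ~ {one_sided_p(z_combined):.1e}")
# Two ~2.5-sigma hints combine to ~3.5 sigma: intriguing, but still short of 5.
```

Two merely “interesting” excesses combine into something tantalizing, which is exactly why the coincidence drew so much attention.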

But in the end, it turned out to all be just a fluke.

For that matter, when they thought they had detected faster-than-light neutrinos a couple of years back, there were hundreds of papers attempting to explain that result, too, even though everyone was almost certain (correctly, it turned out) that it was just an experimental error.