If neutrinos have mass...

You need a TARDIS.

I used to wonder why stationary/slow neutrinos are not considered to be a candidate for dark matter. It made sense to me since they have mass and do not interact with other stuff much.

I asked the question (possibly on this board), and the answer I received is that they would have a minimum energy imparted by the cosmic background radiation via very infrequent but nonzero interactions. This minimum energy would be enough to keep the neutrinos moving too fast to clump together the way we think dark matter does.

Have I got that right?

You are right that their non-viability as the main source of dark matter has to do with their low mass and (thus) high mobility. Neutrinos were relativistic (fast) for much of the early universe’s evolution, and if all the dark matter was like that, structure formation (filaments, galaxy clusters, etc.) would look very different since the neutrinos would be fighting against that clustering.

However, their energy in this story isn’t due to interactions with background radiation. In fact, neutrinos don’t interact with the background radiation at all* since photons only interact with charged particles. Instead, the neutrinos in the very early universe were in thermal equilibrium with all the matter swimming around, up until they weren’t anymore – about one second into the timeline, when the universe had cooled and sparsified enough. After that, the neutrino interaction rate fell quickly towards zero and they can be treated (for structure formation purposes from then on) as interacting only gravitationally.
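The "about one second" figure can be recovered with a standard back-of-envelope estimate: compare the weak interaction rate to the Hubble expansion rate and find where they cross. Here's a rough Python sketch (the g* value and the numerical prefactors are textbook radiation-era approximations, not exact results):

```python
# Back-of-envelope neutrino decoupling estimate, in natural units
# (hbar = c = k_B = 1, energies in GeV). Inputs: Fermi constant, Planck
# mass, and effective relativistic degrees of freedom g* ~ 10.75.
G_F = 1.166e-5      # Fermi constant [GeV^-2]
M_Pl = 1.22e19      # Planck mass [GeV]
g_star = 10.75      # relativistic degrees of freedom at ~MeV temperatures

# Weak interaction rate: Gamma ~ G_F^2 T^5
# Hubble rate (radiation era): H ~ 1.66 sqrt(g*) T^2 / M_Pl
# Decoupling when Gamma ~ H  =>  T^3 ~ 1.66 sqrt(g*) / (G_F^2 M_Pl)
T_dec = (1.66 * g_star**0.5 / (G_F**2 * M_Pl)) ** (1.0 / 3.0)
print(f"decoupling temperature ~ {T_dec*1e3:.1f} MeV")   # ~1.5 MeV

# Radiation-era time-temperature relation: t [s] ~ (2.4/sqrt(g*)) (T/MeV)^-2
t_dec = (2.4 / g_star**0.5) * (T_dec * 1e3) ** -2
print(f"decoupling time ~ {t_dec:.2f} s")                # of order one second
```

Not a precision calculation, but it lands right where the detailed treatments do: a temperature of roughly an MeV, about one second in.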

Today’s observations of many different cosmological features all point to needing dark matter to be almost entirely “cold” dark matter, where the “cold” means heavy enough to become non-relativistic early enough in the universe’s evolution.


*to exceedingly good approximation

There have been many articles on this recently. Are “dark photons” what you are talking about?

Neutrinos are 5 to 6 orders of magnitude lighter than electrons, and can’t be cold dark matter. If dark photons are 20 orders of magnitude lighter still, I don’t see how they could be either – they’d miss by many more orders of magnitude.
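For the curious, the mass-ratio arithmetic goes like this (a quick sketch; the neutrino benchmark masses are assumed illustrative values, since absolute neutrino masses aren’t directly measured, only bounded):

```python
import math

# Order-of-magnitude gap between the electron and neutrino masses.
# The electron mass is well measured; for the neutrino we use two
# illustrative benchmarks: an eV-scale direct-limit value and the
# ~0.05 eV scale implied by oscillation data.
m_e = 5.11e5                    # electron mass [eV]
for m_nu in (1.0, 0.05):        # benchmark neutrino masses [eV]
    orders = math.log10(m_e / m_nu)
    print(f"m_nu = {m_nu} eV -> {orders:.1f} orders of magnitude lighter")
```

Depending on the benchmark, that’s roughly 6 to 7 orders of magnitude, which is the scale of gap being discussed.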

Would that it were so simple.

I wasn’t directly talking about dark photons. The “on paper” simplest scenario is where dark matter is a boring ol’ fermion. “Fermion” here means that it’s a particle that obeys certain quantum mechanical rules like all our other bulk matter particles do. Electrons, quarks, neutrinos, and also composite particles like neutrons and protons are all fermions. You could imagine adding a new fermion that’s sort of like the neutrino but heavier, and that could be dark matter.

Such a dark matter scenario was one of the historical starting points in the dark matter quest. The “weakly interacting massive particle” (WIMP) was the prototype here for a long time. And, in this scenario, the mass of the particle does indeed provide a sliding intuition as to whether it would be “cold” or “hot” dark matter, and thus whether it is viable when confronting cosmological structure data. And neutrinos are too light.

More generally, this is a type of “thermal relic” dark matter. This means that the dark matter particle was in thermal equilibrium with the universe early on, then eventually decoupled due to the universe’s cooling and expansion, and is now a relic from that time.

You don’t have complete freedom within the thermal relic picture, though. For instance, as you consider lower and lower masses (but still plenty heavy enough to be considered cold dark matter), you will eventually end up calculating too much dark matter left over in the present day. To fix this, you need dark matter to interact (annihilate) more readily with itself, even if it still doesn’t interact much at all with “standard” matter. To make that happen, you can introduce dark-matter-only interactions. And the simplest type of interaction is the sort of thing that photons do. That is, maybe dark matter particles could have a dark charge and there could be dark photons mediating interactions between these (darkly) charged particles. This is actually a very clean and tidy addition to the overall picture, from a mathematical point of view. And, these dark photons could have mass, and they could be searched for independently of the dark matter itself.
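The “too much dark matter left over” statement can be made semi-quantitative with the standard leading-order relic relation (a sketch; the 3×10⁻²⁷ prefactor is the usual textbook approximation):

```python
# To leading approximation, the thermal-relic abundance is inversely
# proportional to the annihilation cross section:
#   Omega h^2 ~ 3e-27 cm^3/s / <sigma v>
# A rough sketch of why too-feeble annihilation overproduces dark matter.
OMEGA_DM_H2 = 0.12              # observed dark matter density (Planck)
K = 3e-27                       # standard prefactor [cm^3/s]

sigma_v_needed = K / OMEGA_DM_H2
print(f"required <sigma v> ~ {sigma_v_needed:.1e} cm^3/s")  # ~2.5e-26, the 'WIMP miracle' scale

# Halve the annihilation cross section and you double the relic density:
print(f"with <sigma v>/2: Omega h^2 ~ {K / (sigma_v_needed / 2):.2f}")  # 0.24
```

That required value happens to be roughly weak-interaction-sized, which is why the WIMP idea looked so natural for so long.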

But, if you introduce dark photons, you could further ask: could those particles themselves make up the bulk of the dark matter?

They could in principle under their own thermal relic constraints, but it also opens up a new avenue entirely. These particles are bosons (as opposed to fermions). The everyday regular photon is another example of a boson. Another very differently motivated particle called the “axion” is also a boson and also a top dark matter candidate. For these particles (dark photon or axion), completely different mechanisms in the earliest moments of the universe are available to produce these particles and keep them out of thermal equilibrium with everything else. Thus, mass scales relevant for “thermal relic” dark matter are irrelevant here. And the bosonic production mechanisms can lead to a population of particles with near zero kinetic energy. Thus, even if they have small mass, they can be non-relativistic during structure formation.

That solved that problem, at the expense of introducing the right-handed neutrino problem: if neutrinos go slower than light, then in principle you could “catch up” to a neutrino from behind and thus its spin would appear to be right-handed.

Which is, of course, not a problem for anything at all.

Wrong-handed neutrinos aren’t a “problem”, per se, but they are relevant. It gets to the heart of the Majorana vs. Dirac distinction. In the Dirac model, neutrinos (and antineutrinos) have both handedness and lepton number, and if they somehow get mismatched, then you end up with a particle that’s non-interacting even by neutrino standards. In the Majorana model, however, there is no lepton number, and the handedness is the only thing distinguishing what we call a neutrino from an antineutrino.

It’s tough to tell directly, since all known processes that produce neutrinos do so at energies so much greater than the neutrino’s mass that “catching up” is, while theoretically possible, for practical purposes impossible. But there are a couple of ways around that. One is with virtual neutrinos, far off their “mass shell” (i.e., behaving as if their mass were very different but getting away with it because they’re very quickly re-absorbed), which (in the Majorana case) could lead to double beta decays with no neutrinos emitted (some groups have claimed to observe this, but the claims are disputed). Another place where it could matter is in Hawking radiation: While the Weak Interaction cares about the handedness of leptons, gravity doesn’t, and so, in the Dirac case, a black hole could emit twice as many neutrinos (neutrinos and antineutrinos, both in right- and left-handed varieties). So Dirac neutrinos would lead to shorter black hole lifespans.

So, 10^67.9 years instead of 10^68 years? Let me get my stopwatch…

A technical point that in no way takes away from your excellent post but I hope will be of interest all the same –

Although the neutrinos are virtual and off mass shell, that virtual-ness and off-shell-ness is unrelated to the likelihood of a virtual spin flip. The process’s decay lifetime indeed suffers the full penalty of the small neutrino mass all the same, scaling to leading order as 1/m^2 despite the neutrino being quite off shell. (More technically still, the spin-flipping Majorana propagator for the virtual neutrino goes as m/(q^2 - m^2), and for large four-momentum transfer q – which is fixed here by the external kinematics of the process – the mass-shell-defining mass m in the denominator is irrelevant but the mass still appears in the numerator in its full annoying chirality-induced glory.)
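To see the sizes involved, here’s a toy numerical illustration (the benchmark masses and momentum transfer are assumptions chosen only to set the scale, not results from any particular nuclear calculation):

```python
# The spin-flip part of the Majorana neutrino propagator goes as
# m / (q^2 - m^2). For neutrinoless double beta decay the momentum
# transfer is ~100 MeV while m is sub-eV, so m^2 is utterly negligible
# in the denominator -- yet the amplitude is still linear in m
# (rate ~ m^2, lifetime ~ 1/m^2). Illustrative numbers only.
q = 100e6                       # momentum transfer scale [eV] (assumed)
for m_nu in (0.05, 0.5):        # benchmark Majorana masses [eV] (assumed)
    amp = m_nu / (q**2 - m_nu**2)
    print(f"m = {m_nu} eV: amplitude factor ~ {amp:.1e} per eV")

# Multiplying m by 10 multiplies the amplitude by 10 and the rate by 100,
# even though m is irrelevant to how far off shell the neutrino is.
```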

Does that mean that virtual particles do not necessarily have to have the properties of the real particles, for instance could a virtual electron have only half the mass of a real electron and “get away with it because it is very quickly re-absorbed”? If so, would that apply to charge too, with a hypothetical charge-less electron, or with spin not equal to 1/2 or -1/2, say 1/3 or -1? Or could this only happen with neutrinos?
I guess that this would be theoretical in the sense that it cannot be directly observed, right?

To within the sloppiness of language inherent in trying to describe these processes in English instead of the math of quantum field theory, yes. Slightly more precise would be to say that you can have processes modeled by virtual electrons, even when the available energy in the process is less than the mass of the electron (though those processes generally get a lot more efficient when the energy is higher than the mass).

To the best of my knowledge, you can’t be “off charge shell” or “off spin shell”.

Yeah, I don’t think charge has a Heisenberg conjugate. Not sure about spin. In general, one “borrows” a quantity against its conjugate. So, mass-energy can be borrowed against time because in a short time interval, the energy must have a correspondingly large uncertainty.

While it’s true that energy and time are conjugate, note that this isn’t why virtual particles do their thing. The description of virtual particles as being allowed due to the Heisenberg uncertainty principle always makes me think of the missing dollar riddle, where a bit of misdirection and rearrangement of the facts allows for a seemingly sensible result. It’s a surprisingly pervasive description.

The shell game being played starts when one says, “Virtual particles live for a short amount of time, so their energy can be whatever,” and while there are problems already, the big sleight of hand is the subsequent leap to “…so their mass can be whatever.” Mass isn’t conjugate to time, energy is! Why are we suddenly justifying a change in mass now when we started with the energy-time uncertainty principle?

In truth, for virtual particles, energy is 100% conserved. Momentum is 100% conserved. In the most common application, the energy and momentum of a virtual particle are also 100% known due to the constraints of the situation. There is no uncertainty being invoked. Instead…

A real particle moving freely through space has an energy E, a momentum p, and a mass m. The relationship E^2 - p^2 = m^2 holds for these particles*. For virtual particles, this relationship is irrelevant. That’s all that is happening.

When the goal of the math is to calculate how particles interact with one another, it is natural that some of the mathematical bookkeeping will have echoes of particle-ness. Virtual particles are a piece of the bookkeeping, but nothing says that E^2 - p^2 = m^2 has to hold for this piece of bookkeeping. And it doesn’t, full stop.
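A concrete instance of that bookkeeping (a toy sketch; the kinematics are idealized elastic scattering off an infinitely heavy target, and the numbers are arbitrary):

```python
import math

# A t-channel virtual photon in elastic scattering off a very heavy
# target: energy and momentum are exactly conserved at each vertex,
# yet the exchanged photon's four-momentum q satisfies
# q^2 = E^2 - |p|^2 != 0 = m_photon^2. (Natural units, c = 1.)
p = 10.0                        # electron momentum magnitude [MeV] (assumed)
theta = math.radians(30)        # scattering angle (assumed)

# Elastic off an (idealized) infinitely heavy target: energy transfer ~ 0,
# three-momentum transfer |q| = 2 p sin(theta/2).
q0 = 0.0
q3 = 2 * p * math.sin(theta / 2)
q_squared = q0**2 - q3**2
print(f"q^2 = {q_squared:.2f} MeV^2 (the photon's mass-shell value would be 0)")
```

Nothing was borrowed anywhere: q0 and q3 are fully fixed by the external kinematics. The virtual photon simply isn’t obliged to satisfy q^2 = 0.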

Because virtual particles are immensely helpful as a tool and also have echoes of particle-ness (e.g., when drawn in Feynman diagrams), there is a tendency to “explain” them through compelling particle analogies.

But the uncertainty principle thing is an unhelpful pedagogical sleight**, and it’s much simpler to say directly that virtual particles aren’t real particles, and they don’t have to obey the energy-momentum relationship. (I mean, why would they?)


*I’ve used the common particle physics units convention of setting c=1. Makes the math much tidier. If you prefer, though: E^2 - (pc)^2 = (mc^2)^2.

**To roughly quote David Griffiths: If someone invokes the uncertainty principle to explain something, keep your hand on your wallet!

A virtual particle’s contribution to a process is most efficient when on mass shell. Being off mass shell, in either direction, is bad and leads to a lower rate. This is in a very real sense a resonance effect. Trying to “ring” the (virtual) particle’s quantum field far from where it wants to ring (i.e., far from E^2 - p^2 = m^2) leads to a reduction in amplitude, while ringing it close to mass shell leads to a resonant enhancement.

This is nicely seen, for example, in the electron-positron interaction cross section plotted versus center of mass energy. One version is here, showing a stark enhancement in the e+e- → hadrons rate at E = M_Z (mass of the Z boson, around 90 GeV/c^2).
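For anyone who wants to see the shape of that enhancement, here’s a toy Breit-Wigner sketch (arbitrary normalization; this is just the propagator-squared factor, not a full cross-section calculation):

```python
# Relativistic Breit-Wigner sketch of the Z-pole resonance: the
# propagator-squared factor 1/((s - M^2)^2 + M^2 Gamma^2) peaks
# sharply when the virtual particle sits on its mass shell (s = M^2).
M_Z, GAMMA_Z = 91.19, 2.50      # Z boson mass and total width [GeV]

def bw(E):
    s = E**2
    return 1.0 / ((s - M_Z**2)**2 + (M_Z * GAMMA_Z)**2)

on_peak = bw(M_Z)
off_peak = bw(M_Z - 10.0)       # 10 GeV below the pole
print(f"on-peak / off-peak enhancement ~ {on_peak / off_peak:.0f}x")
```

Even just 10 GeV off the pole, the “ringing” is tens of times weaker, which is the resonance effect described above.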

Yes, and “borrowing” against uncertainty is as well. Different tools for different approaches.

Energy is always conserved. But, we don’t know what that value of energy is with perfect precision until after an infinite amount of time. That lack of constraint on energy is the uncertainty. The energy of a particle with a finite lifetime cannot be known with better than a certain precision.
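As a concrete instance of that width-lifetime relation, a quick sketch using the Z boson’s measured width:

```python
# Width-lifetime relation for an unstable particle: Delta E ~ hbar / tau.
# The Z boson's measured ~2.5 GeV width corresponds to a ~10^-25 s lifetime.
HBAR = 6.582e-25                # reduced Planck constant [GeV s]
GAMMA_Z = 2.495                 # Z boson total width [GeV]

tau_Z = HBAR / GAMMA_Z
print(f"Z lifetime ~ {tau_Z:.1e} s")   # ~2.6e-25 s
```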

Real particles are virtual particles whose boundaries are outside our experiment. :wink:

I’d go one step further: Technically, no real particle is ever detected, only virtual particles. Because if it interacted with the detector, it didn’t propagate to infinity.

I would claim that it isn’t actually a tool in this case. Invocations of the idea in books and articles fall into three categories. The first is where it isn’t mentioned at all. The second is where energy borrowing, etc., is introduced as an intuitive hook to hang one’s hat on (which I don’t mind), but always with hedging language accompanying it (e.g., “This isn’t actually what happens, but…”; “Don’t take this analogy too seriously because….”) The third is where ontological implications are given quite directly. In the world of contagious ideas, authors of the third type can spawn from authors of the second type by deciding the hedging language isn’t needed, or by crafting statements that don’t say anything incorrect (while implying a lot more).

I’m happiest with the first approach, and can sleep at night just fine with the second.

I’d say quite the opposite: if the particle that interacts in the detector and is examined and measured isn’t the thing we want to call a “real particle”, then we have different goals with the terminology.

To be sure, a real particle is a quantum mechanical thing – an excitation of a quantum field – and if it doesn’t propagate to infinity then it has (among other things) an uncertain energy at some level. But having an uncertain energy doesn’t relegate it to virtual land.

Some issues with the “virtual particle is a real thing that borrows energy (etc.)” story, in no particular order and with varying degrees of squeamishness:

(1) Energy, momentum, and angular momentum conservation come from fundamental symmetries baked into the theory. Nothing in non-relativistic or relativistic QM says you can violate these conservation laws, and nothing requires that you do. When energy uncertainty is relevant in a QM situation, it is always through entanglement with another piece of the system, which will also be uncertain but in such a way that the system energy is conserved. There is also no actual calculational method when dealing with scattering processes that operates via energy non-conservation. You can’t actually do any practical scattering calculation with the story that energy gets borrowed and then given back. Using the story as a metaphor to help think about the various moving parts of the theory is one thing, but looking for an equivalence to the theory fails.

(2) The ontological picture given is that, when two particles interact, a virtual mediator is emitted from one and absorbed by the other, and in the limit of this happening very quickly, the energy can be essentially anything. This leads to immediate problems.

The first is that it treats the mediator “as if” it is an independent entity travelling for a time (that may be taken toward zero), but that would require a certain set of quantum numbers and finite energy and momentum to be transferred via the mediator from particle A to particle B. This approach already means you can’t have attractive forces. Nothing in the energy-time uncertainty relation would allow that mediator to carry a “pull”.

Another problem is that this picture would require considering mediators travelling the other way, from particle B to particle A. There’s no reason to pick one over the other. But they have different causal implications that have to be removed in a discontinuous way when taking the limit \Delta t\rightarrow 0. (Wait, inline LaTeX works? Awesome.) Or to say simply: This approach suggests that more valid diagrams exist than actually do exist.

Another discontinuity is that talking about meta-processes involving intermediate real particles affects how competing diagrams are treated. You either need to add amplitudes and then square, or square amplitudes and then add, and one is not the limiting case of the other.

The above issues all stem from the metaphor’s implication that there is a well-defined temporal or causal aspect to some intermediate state when there isn’t one.

The above issues are fully resolved by noting that the original interacting particles (fields) are already in contact and experiencing each other’s presence and/or the presence of other fields, and the state change happens through that “point-like” interaction. It’s only through a specific calculational approach that any substructure might seem to emerge, and that substructure carries some smells of particles, but that smell is not enough to make it be particles.

(3) Virtual particles are not inherently quantum. One can (but usually doesn’t) solve purely classical interacting field theories using a perturbative approach, and Feynman-like diagrams complete with internal legs can encode those calculations just like in quantum field theories. It’s clear in that case that you can’t explain the virtual particle’s existence through a principle of quantum uncertainty.

(4) Virtual particles are not required in quantum field theories. You can calculate scattering processes through any number of non-perturbative methods, and no concept of virtual particles ever enters the picture then. So, they aren’t an essential part of our current understanding of the universe. They are just an essential part of one particular calculational approach.

(5) A subtle but substantial problem is that virtual photons (like any virtual particle) can have any mass, including a finite mass. Suddenly photons can have three polarization states, not two; they no longer travel at c; and they are in direct conflict with gauge invariance. If we try to map these characteristics onto real photons, we’re going to have our work cut out for us. Fortunately, real photons don’t carry those complications. A real photon – even one with an uncertain energy – is still massless.

(6) A quite stark problem is that diagrams containing loops allow all internal lines to take all possible values for energy and momentum.* First, notice that momentum is freely varying but never gets any love in the “uncertainty” story. But the main point: one leg of a loop here can have any energy and momentum, and another leg of that loop there can have any energy and momentum – but together they must respect energy and momentum conservation. One cannot let each internal line go on its own joy ride.

The metaphor has no problem with each virtual particle doing its own thing, arguing that it lives for a short time (approaching zero time via some undefined limit) and is thus free to violate energy conservation as it wishes. But this is simply not how it works. The various energies and momenta all taking on an infinite range of values must do so in a manner that obeys the conservation laws throughout the diagram. One cannot really say that energy conservation can be violated but also require that, while it’s being violated, it’s being violated in a way that respects conservation.
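Point (6) can be illustrated directly (a toy sketch with one-dimensional momenta and arbitrary values, standing in for the full four-momentum bookkeeping):

```python
import random

# In a one-loop diagram with external momentum p, the two internal
# lines carry k and p - k. The loop momentum k is integrated over all
# values, but at every value the two lines' momenta sum to p --
# conservation holds throughout the diagram; the lines never each go
# on their own "joy ride". (Toy 1D momenta; numbers are arbitrary.)
p = 5.0                              # fixed external momentum (assumed units)
for _ in range(3):
    k = random.uniform(-100, 100)    # the loop momentum roams freely...
    line1, line2 = k, p - k
    assert abs(line1 + line2 - p) < 1e-9   # ...but the sum is always p
    print(f"k = {k:8.2f}, p-k = {line2:8.2f}, sum = {line1 + line2:.2f}")
```

Each internal line individually takes arbitrarily large values, but they are rigidly correlated: that correlation is exactly what the “each particle borrows on its own” picture fails to capture.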

Anyway, that’s a big core dump of stuff. Bottom line: it’s a common metaphor, but one that doesn’t have very solid grounding and, for my money, one that’s hard to do anything with without stretching beyond validity.


*Actually, tree-level (no-loop) diagrams also allow this, but energy and momentum conservation means that only one set of values for energy and momentum is allowed for each virtual particle anyway.

Yes, but you can’t quote or copy it. Weird, is it not?