Ask the particle physicist

Almost without exception textbooks make the statement that “exchange forces are not really forces, but instead are geometric phenomena.”

And it’s true that you can draw diagrams that show the probability waves of fermions canceling out as they approach each other’s position, and constructively interfering as they get further apart.

But this “non-force” stabilizes the mind-boggling, crushing gravity of neutron stars for God’s sake!

It’s clear that as fermions are smashed together in stellar objects their frequency must increase and therefore so must their energy.

But why, specifically, must this interaction not be considered a force? Is it just that in QM everything is Hamiltonians or Lagrangians?

Nobody seems to have a problem with electromagnetic force or strong force etc.

Well, crap then. We are wasting our time and money with the LHC and I am not going to pay my tax dollars for it anymore…oh; wait a minute; I think my tax money went for the SSC, which never seemed to get out of the ground. (And I was lobbying for the SSC to be built here where I live next to Fermi, which would have made it more convenient for me to kibitz w/ my buddies over there.)

OK, well then I don’t see why the Higgs boson is such a big deal. While it may help finish off the Standard Model’s suite of force-carrying particles, it’s not gonna change any paradigms and it’s not gonna add weight to the Standard Model. The Standard Model is already nicely predictive and robust, and if we find the HB, we are just going to say, “See; I told you the Standard Model was terrific.” If we don’t find it, we’ll just say, “Well, we didn’t find it, but since the Standard Model is so terrifical otherwise, the HB is probably still out there; we jus’ dint find it.”

And surely the first thing people are going to wonder about the HB is what it consists of…and what’s deeper than the Higgs condensate.

On a serious note, it seems to me the Standard Model is a lovely paradigm for mathematical accounting in very complex formulas for what behaves how, but it’s not very satisfying as a fundamental explanation and never will be, until we figure out what space itself is, and how particles and energy relate to it.

Modern physics was so quick to blow off the aether and so pleased with their experiments to show it did not exist, but I, for one ignoramus, am underwhelmed. I fail to see how a Higgs condensate containing particles that affect–even define–everything which zips around in it is not a particulate or mechanical aether, and it seems to me the Higgs boson would bring us back to Newton, conceptually, even if the terms and specifics themselves vary.

I find it deeply dissatisfying to think of particles moving through space, and much more satisfying to think of a paradigm where particles are space, moving. Such a model would solve both duality and entanglement much more elegantly (in my ignorant opinion), as well as this little dilemma where stuff just shows up de novo and also disappears. Of course it does if stuff is just another behaviour of space itself.

But carry on with particle physics for now, by all means. It is so spectacularly cool and I pretty much worship the ground you guys walk on.

By what mechanism do virtual particles pop into and out of existence?

Do they break the law of conservation of energy?

How close are we to understanding what gravity, space and time really are?

Any guess as to what dark energy really is?

Can the energy from gamma rays and other things hitting the upper atmosphere be harvested as an energy source?

Thanks!

About those ultra-high energy cosmic rays that have been observed: I presume that what was actually observed was a flash of light that was interpreted as a collision of an ultra-high energy particle. Can you explain why the interpretation is held with such confidence, especially when it leads to such an unlikely conclusion (cosmic rays with absurdly high energies)? How was every other possible source of a flash of light ruled out?

I heard monopoles have been discovered experimentally. Any thoughts on this? I understand it’s outside your speciality.

Sorry for the short absence. I had guests in town the last few days…

Almost without exception textbooks make the statement that “exchange forces are not really forces, but instead are geometric phenomena.” And it’s true that you can draw diagrams that show the probability waves of fermions canceling out as they approach each other’s position, and constructively interfering as they get further apart. But this “non-force” stabilizes the mind-boggling, crushing gravity of neutron stars for God’s sake! It’s clear that as fermions are smashed together in stellar objects their frequency must increase and therefore so must their energy. But why, specifically, must this interaction not be considered a force? Is it just that in QM everything is Hamiltonians or Lagrangians?
Can you call the repulsion that prevents two fermions from entering the same state a force? I suppose so, to the extent that it’s useful. In a spatial scenario – say, where the fermion states differ only in the spatial part of their wavefunctions – you could write down a potential that describes the energy cost of moving one fermion relative to the other, and if you do that, you could take the gradient of that potential and call it a force. But the exclusion principle is more general than that. You could have two identical fermions with identical spatial wavefunctions (perhaps trapped in a potential well), with the only difference in their states being their spin. An attempt to flip one of the spins while keeping the spatial wavefunctions the same will fail. Is this a force? It’s a bit harder to cast it as one. Another example: a low-energy electron striking a proton can create a neutron (plus a neutrino). Now assume the target proton is part of a large nucleus. This same low-energy electron may be unable to convert the proton to a neutron since all available neutron states are already taken up by the nucleus’s spectator neutrons. Thus, the interaction cross section goes to zero.

The easiest description of all this is just what it is: states that are not allowed are, well, not allowed. Transitions into those states don’t exist.

One could prevent transitions into disallowed states by invoking forces, and that’s how you would need to do it in a classical picture. But the non-existence of these states (and transitions into them) is a fundamental consequence of relativistic quantum mechanics for particles with half-integer spin. You can’t not have it.

So, on a macroscopic or classical level, the exclusion principle is a force, full stop. On the quantum level, the exclusion principle is just the general statement that not every system state you can write down actually exists.
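
For concreteness, here is the textbook bookkeeping behind all of this, as a sketch: antisymmetrize a two-fermion state built from single-particle states φa and φb, and the “disallowed” configuration simply has zero amplitude.

```latex
\psi(x_1, x_2) \;=\; \frac{1}{\sqrt{2}}\Big[\varphi_a(x_1)\,\varphi_b(x_2) \;-\; \varphi_b(x_1)\,\varphi_a(x_2)\Big],
\qquad
\psi(x_1, x_2) \equiv 0 \;\;\text{when } a = b.
```

Nothing pushes the fermions out of the forbidden configuration; the state with a = b just isn’t there to transition into, which is the point being made above.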

OK, well then I don’t see why the Higgs boson is such a big deal. While it may help finish off the Standard Model’s suite of force-carrying particles, it’s not gonna change any paradigms and it’s not gonna add weight to the Standard Model.
You’re not too far off here. If all that comes from the LHC is the discovery of a single Higgs with expected decay modes, then certain Standard Model calculations are improved (which helps the search for new physics), but people will still be generally disappointed. If the Higgs is not seen, that is more interesting (as it should be seen by the time the LHC reaches its maximum energy, so if it isn’t, then something else is going on). But, the real prize will be anything not in line with Standard Model predictions, be it something explicit like a candidate supersymmetric particle or something inferred like decay rates that reveal some new underlying physics.

Relatedly…
I find it deeply dissatisfying to think of particles moving through space, and much more satisfying to think of a paradigm where particles are space, moving. Such a model would solve both duality and entanglement much more elegantly (in my ignorant opinion), as well as this little dilemma where stuff just shows up de novo and also disappears. Of course it does if stuff is just another behaviour of space itself.
The questions you outline are the goals of particle physics. The fact that we work with (and study the properties of) particles is just because our hands are tied. Humans can only probe nature with the probes she gives us, and fundamental particles plus their interactions are the best probes we’ve got. Nobody particularly cares if the Higgs mass is 213 GeV or 227 GeV. But, knowing it will help put together the picture of what the hell is going on under the hood. To ask “what is space?” is to ask “what can we learn about ‘what is space?’ via the particles that interact in it?” And if particles and space are not distinct (as your heart desires), our description of them as distinct will eventually break down if we keep pushing on it hard enough. The particular way in which it breaks down will guide the evolution of our description of nature, and if the new-and-improved description ties particles and space together in some way, so be it. If not, so be it.
By what mechanism do virtual particles pop into and out of existence? Do they break the law of conservation of energy?
They do not break conservation of energy because they are not around long enough to break it. More rigorously, the energy of a quantum system (like a pair of virtual particles coming and going again) is only well-defined if the system stays around for long enough. Some semblance of balance is maintained by the fact that the longer the virtual particles exist, the less apparent energy they can have.
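
The standard shorthand for that tradeoff is the energy-time uncertainty relation (ħ is the reduced Planck constant):

```latex
\Delta E \,\Delta t \;\gtrsim\; \frac{\hbar}{2}
\quad\Longrightarrow\quad
\Delta t \;\sim\; \frac{\hbar}{\Delta E},
```

so the more energy a virtual pair appears to carry, the shorter the interval over which that energy is even well-defined.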
How close are we to understanding what gravity, space and time really are?
We will never know what they “really” are. The best we can do is come up with a description that matches all observations. When observations go against our description, we can say for sure that the description is wrong and needs to be fixed or replaced. But when all our observations to-date agree with our description of nature, all we can say is that, well, they agree.
Any guess as to what dark energy really is?
Not a clue. It appears consistent with a vacuum energy density that remains constant and non-zero in the absence of matter, but (in line with the above paragraph), that doesn’t actually mean it is a vacuum energy density.
Can the energy from gamma rays and other things hitting the upper atmosphere be harvested as an energy source?
A back-of-the-envelope calculation: if you take a 50 m cube of water (that’s 125,000 metric tons of water), it will be absorbing cosmic ray energy at about 0.02 watts. Not terribly useful, unfortunately!
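
In case anyone wants to check the arithmetic, here is a minimal sketch of an estimate along these lines. The energy-flux number is an assumption on my part (roughly the oft-quoted ~10⁻⁵ W/m² carried by primary cosmic rays at the top of the atmosphere); only the order of magnitude matters.

```python
# Back-of-the-envelope cosmic-ray "power plant" estimate (assumed inputs).

side_m = 50.0                     # edge of the water cube
area_m2 = side_m ** 2             # top face intercepting the flux
flux_w_per_m2 = 1e-5              # assumed total cosmic-ray energy flux

power_w = flux_w_per_m2 * area_m2           # collected power
mass_tonnes = side_m ** 3 * 1.0             # water at ~1 tonne per cubic metre

print(f"{mass_tonnes:,.0f} tonnes of water collects ~{power_w:.3f} W")
# -> 125,000 tonnes of water collects ~0.025 W
```

With that assumed flux, the estimate lands at a few hundredths of a watt, consistent with the figure quoted above.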
About those ultra-high energy cosmic rays that have been observed: I presume that what was actually observed was a flash of light that was interpreted as a collision of an ultra-high energy particle. Can you explain why the interpretation is held with such confidence, especially when it leads to such an unlikely conclusion (cosmic rays with absurdly high energies)? How was every other possible source of a flash of light ruled out?
The signal is basically flashes of light, but there is a lot of information in it. (a) The total duration from start-of-signals to end-of-signals is only hundreds of nanoseconds, which ensures that only one event is occurring. (b) The products of a particle collision will travel through the detector materials (typically either the atmosphere or a chunk of continental ice) in correlated ways. That is, the outgoing particles will leave lots of trails of light stemming from a common location (the interaction point) and heading in the same general direction. (c) The total distance a “daughter” particle travels is governed by how much energy it has. The longer the trail it leaves, the higher its energy had to be. (d) The light produced by particles traversing the detector medium is directly related to the amount of energy they leave behind. By adding up all the light you see, you can determine the total energy deposited by the daughter particles.

So, it’s a combination of appropriate spatial patterns, temporal patterns, and total light production that leads to the inference that a single very-high-energy particle has induced a given set of signals.
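
To make (a), (b), and (d) a little more concrete, here is a toy sketch of the kind of reconstruction being described. It is not any experiment’s actual code; the hit positions, light yields, and calibration constant are all invented for illustration.

```python
# Toy reconstruction sketch: fake light "hits" along a straight track,
# then recover the track direction (spatial pattern), the event duration
# (temporal pattern), and a total-energy estimate (summed light).
import numpy as np

rng = np.random.default_rng(0)

# Fake a ~100 m track through a detector medium (all numbers invented).
true_dir = np.array([0.3, 0.1, 1.0])
true_dir /= np.linalg.norm(true_dir)
path = np.linspace(0.0, 100.0, 40)                              # metres along track
hits = path[:, None] * true_dir + rng.normal(0, 1.0, (40, 3))   # hit positions
times = path / 3.0e8 + rng.normal(0, 3e-9, 40)                  # roughly light-speed timing
light = rng.poisson(200, 40)                                    # photons per hit

# (b) Spatial pattern: fit the common direction with a principal-axis fit.
centered = hits - hits.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
fit_dir = vt[0] if np.dot(vt[0], true_dir) > 0 else -vt[0]

# (a) Temporal pattern: everything happens within a few hundred nanoseconds.
duration_ns = (times.max() - times.min()) * 1e9

# (d) Total light tracks total deposited energy (hypothetical calibration).
GEV_PER_PHOTON = 0.01
energy_gev = light.sum() * GEV_PER_PHOTON

print(f"fitted direction : {np.round(fit_dir, 3)}")
print(f"signal duration  : {duration_ns:.0f} ns")
print(f"estimated energy : {energy_gev:.0f} GeV")
```

The real experiments do far more (detailed timing fits, systematic checks, background rejection), but the logic is the same: direction, duration, and total light all have to hang together before an event is called an ultra-high-energy cosmic ray.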
I heard monopoles have been discovered experimentally. Any thoughts on this? I understand it’s outside your speciality.
After a short glance at the paper, I’m not sure yet if this is a consequence of the meta-material they are working with (spin glass) or not. They imply that they see evidence of measurable magnetic currents, which is pretty cool, but they are not (I think) claiming the presence of a fundamental magnetic monopole. I might take a look at the paper more closely if I get a chance. Perhaps someone in this thread is already familiar with the research.

I would like to thank you and Stranger and others for being patient with the ignorant and uneducable. I do hang on every word.

It occurs to me that the particle model has already broken down–more precisely, never was reasonable to begin with–because it is at absolute odds with entanglement (and duality in general, for that matter). Particles are local; entanglement is not and wave behaviour is not.

It feels to me that most of particle physics centers around the same paradigm Democritus thought up: an indivisible tiniest something (or somethings, maybe). The odds that the first guy puzzling over how the world is structured got it right do not seem good to me, although perhaps that’s a crappy reason to suspect the particle physics paradigm.

While I agree that further exploration using a particle paradigm and colliders is useful, I will be very surprised if it does not dead-end, and not just because PeV/EeV machines are so far away. More like we’re using fancier hammers to find out what color something is.

Anyway, thanks again for the replies.

This is a little off. When you say “particle model”, do you mean quantum mechanics and/or the Standard Model of particle physics? If so, these are not only compatible with entanglement and wave behavior, but they require entanglement and wave behavior. If by “particle model” you mean billiard-ball mechanics, then indeed such a model is very broken (known since the early 1900s). But modern particle physics does not consider particles to be local, billiard-ball-like items.

Be sure to see post #52 (the third section in that post, about light and waves and particles).

Don’t let the word “particle” throw you. It’s the inertia of language that leads us to use that word for electrons and photons and whatnot. These entities are not actually described as (or considered to be) what the lay definition of “particle” would imply.

Which begs the question as to why the physics community hasn’t discarded use of the term “particle” in regard to elementary and composite subatomic constituents. Of course, physicists know what they mean by “particles” in that context, but it still makes both teaching to physics students and explaining to the lay audience very difficult in terms of getting past the everyday notion of particles as little bits of stuff.

Stranger

I have an atavistic internal revulsion to such concepts as dark matter and dark energy. I cannot but believe that there’s something else more elegant going on that doesn’t require such speculative solutions to the observations they have been proposed to explain.

Not a question, I know, but what’s your reaction to that, and what is your thinking in those areas? And to the whole concept of what you might call invention for convenience.

Oooo - well put. Hard not to think of Einstein’s cosmological constant. What comes first, the idea or the data?

What do people mean when they say that Einstein’s equations “break down” at the quantum level? Did they do experiments, and the results were in conflict with what Einstein’s equations predict? Or do they just yield nonsensical answers, like E=mc[sup]applesauce[/sup]?

Oh - and Pasta: have you read any of the materials related to “Programming the Universe” (Amazon Link)? Basically models where the universe is comprised of information - or at least the information component of a given thing is treated as a vital attribute alongside its energy, location, momentum, etc.? I wonder how physicists regard that approach to understanding our reality…

Well there is that Planck’s Constant thing where the energy in question insists on coming in discrete packets. :stuck_out_tongue:

I have an atavistic internal revulsion to such concepts as dark matter and dark energy. I cannot but believe that there’s something else more elegant going on that doesn’t require such speculative solutions to the observations they have been proposed to explain. Not a question, I know, but what’s your reaction to that, and what is your thinking in those areas? And to the whole concept of what you might call invention for convenience.
Your feeling is sensible, and no one (save the popular press) would argue that we have anything really figured out about dark energy. (Dark matter is a little more sensible, in that there appears to be actual matter out there – we just don’t know what it is.)

Dark energy is not meant to be a solution. It is the problem. It has the name “dark energy” because, well, the problem needs a name, but it is really just a parametrization of our ignorance. We look at cosmological observables (most importantly the acceleration of the expansion of the universe), and we find that these require non-zero energy density throughout space, which is weird, albeit allowed by general relativity. The provenance of this energy is unknown, and it may very well be that the underlying physics is very different from what GR would imply (and that there is not actually non-zero energy density everywhere). But until we make some headway, “dark energy” is what we call this, um, situation.
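
For reference, the GR bookkeeping behind “requires non-zero energy density”: the acceleration equation from the standard cosmological model, where a is the scale factor and ρ and p are the total energy density and pressure.

```latex
\frac{\ddot{a}}{a} \;=\; -\,\frac{4\pi G}{3}\,\bigl(\rho + 3p\bigr)
```

Accelerated expansion (ä > 0) needs a component with p < −ρ/3; a constant vacuum-like energy density with p = −ρ fits the bill, which is why “dark energy” gets penciled in even though nobody knows what is actually supplying it.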

I can assure you that “invention for convenience” does not survive in the physics community. Pop. sci. writers want to report answers, so things that are not meant to be answers tend to get presented as if they are. From the inside, we readily admit that we don’t have a clue about what the hell is going on most of the time. No, really! Progress looks like this:



Legend: 
  w = WTF?
  ! = insight!
  - = progress from insight

  --> time -->
wwwwwwwwwwwwww!--wwwwwwwwwwwwwwww!--wwwwwwwwwwwwwwwww!--wwwwww...



This pattern is fractal. It holds for decade-scale insights and for day-scale insights. This is why I like my job. I spend nearly all of my time problem solving. If a small problem cracks easily, it just means I can make rapid progress to the next problem, all in the context of tackling a larger, say, decade-scale problem.

What comes first, the idea or the data?
Not sure what you mean here…

What do people mean when they say that Einstein’s equations “break down” at the quantum level? Did they do experiments, and the results were in conflict with what Einstein’s equations predict? Or do they just yield nonsensical answers, like E=mc[sup]applesauce[/sup]?
It’s the latter. No experiments so far are inconsistent with general relativity or with the standard model (mostly). The problem is that GR and the SM are formulated in very different ways. GR takes a geometric approach, with the presence of energy (perhaps in the form of mass) influencing the geometry of spacetime and with that same energy evolving according to that spacetime. The SM assumes a flat spacetime in which so-called fields (typically fairly localized and termed “particles”) come and go and interact. When you try to crowbar these pictures together, you indeed get some applesauce, either in the form of spacetime singularities that break field theory or in the form of infinities that pop out when you try to calculate anything related to gravity. Lots of folks are currently hacking at this problem of creating a unified theory that reduces to GR at large scales and reduces to the SM at quantum scales.
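
One standard way to see where the infinities come from, in natural units: Newton’s constant carries dimensions of inverse energy squared, so naive graviton-exchange amplitudes grow with energy and the usual renormalization machinery stops working near the Planck scale.

```latex
G_N \;\sim\; \frac{1}{M_{\mathrm{Pl}}^{2}},
\qquad
\mathcal{A} \;\sim\; G_N\,E^{2} \;\longrightarrow\; \mathcal{O}(1)
\quad\text{as } E \to M_{\mathrm{Pl}},
```

beyond which the perturbative expansion that works so well for the SM simply breaks down.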

Have you read any of the materials related to “Programming the Universe” (Amazon Link)? Basically models where the universe is comprised of information - or at least the information component of a given thing is treated as a vital attribute alongside its energy, location, momentum, etc.? I wonder how physicists regard that approach to understanding our reality…
I am not familiar with these materials. It looks interesting, if a bit on the philosophical side. My guess is that unless he presents some ideas with testable predictions, most physicists won’t care much on a professional level about his musings. But, I haven’t read the book, so I can’t say for sure.

The wavefunction of an electron that is in the process of making a transition from an excited state to the ground state has a probability density that is an oscillatory function of time.

Since its charge distribution is proportional to its probability density this creates an oscillating electric dipole of frequency (E2-E1)/h.
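
Spelled out, the claim is that a superposition of the two stationary states, with amplitudes c1 and c2, has a probability density whose cross term oscillates at the transition frequency:

```latex
\Psi(\mathbf{r},t) \;=\; c_1\,\psi_1(\mathbf{r})\,e^{-iE_1 t/\hbar} \;+\; c_2\,\psi_2(\mathbf{r})\,e^{-iE_2 t/\hbar},
\qquad
|\Psi|^2 \;=\; |c_1\psi_1|^2 + |c_2\psi_2|^2
\;+\; 2\,\mathrm{Re}\!\left[c_1 c_2^{*}\,\psi_1\psi_2^{*}\,e^{-i(E_1-E_2)t/\hbar}\right],
```

i.e. the interference term beats at frequency (E2-E1)/h, which is where the oscillating-dipole picture comes from.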

Schrödinger, who despised Bohr’s quantum jumps, said, “It is hardly necessary to point out how much more gratifying it would be to conceive of a quantum transition as an energy change from one vibrational mode to another than regard it as a jumping of electrons.”

My question is: how can an oscillating electric dipole that persists for some period of time create a discrete photon of the above frequency?

And thanks for your previous answer. My very limited imagination was incapable of envisioning two electrons in an electron trap having the exact same spatial wavefunction.

If I could have imagined the above, I might have seen the impossibility of one of them spin flipping.

When I say “particle model” I’m not talking about some sort of notion that particles are tiny billiard balls. I think I get the current paradigm that they are neither solid bits nor waves…in my simplified world I think of a particle as a discrete behaviour rather than a “bit” of something. And I think I recognize that in current modeling, the more local that behaviour is, the closer a billiard ball metaphor holds; the less local the behaviour, the closer a wave metaphor holds.

Two parts bug me:

  1. The discrete part. That is to say, there is a concept of an “individual” particle (even if the behaviour of that “particle” is not localized to the tiniest conceivable region of space). And all current modeling seems to be based on a system that is essentially an accounting paradigm where one of the main goals is to find an indivisible–discrete–smallest component. It seems to me to be a bit circular to simply assert that current paradigms require entanglement and non-locality for particle behaviour as if that demand somehow means the modeling which promotes the demand is correct. While that may be true, there isn’t any gut-level beauty to non-locality and it remains spooky–i.e. more than just counter-intuitive. While you can say a discrete behaviour collapses all at once everywhere, it’s not really an explanation for what that particle is, exactly. If, for instance (and I’m playing fast and loose with language to make a point and not advance an alternate theory) a particle is space itself propagating a behaviour, as opposed to particles propagating in or through space, then trying to find out how the universe works by looking for indivisible behaviours won’t ever get us to a Theory of Everything.
  2. What is space? Right now, even there our paradigms want to make it some sort of soup composed of individual, but really tiny behaviours all squished together–conceptually an extension of the “particle” model. That’s the same paradigm writ even tinier, and that’s what I find dissatisfying: one more field of individual components so tiny they appear to be a single entity. What if space is absolutely smooth at a tiny enough level and what we call mass and energy are simply descriptions of how it behaves–i.e. neat accounting mechanisms to describe and predict in the same way taking ever smaller discrete slices of time for a pitched baseball describes and predicts its behaviour but doesn’t really get at what motion is?

There isn’t really an oscillating dipole that sits and resonates for some time. If there were, then your intuition would be correct that the system would emit radiation continuously at the transition frequency. The transition, though, is instantaneous, at least on the relevant time scale of h/(E2-E1). (And on any time scale we’ve ever probed. Atomic transitions are just a type of particle decay, and all particle decays we’ve probed are consistent with being instantaneous… You’re in state 1. Then bam!, you’re in state 2.)

What we have is a rather clean description that matches reality. In this description, particles take on a simple mathematical form, and this form also happens to imply a few spooky properties, which is good since these spooky properties are indeed observed. However, IMHO, these properties are spooky only because we have been trained since birth to have a “local” intuition. But, nature didn’t ask us. This isn’t to say that particles are what our description says. Indeed, we can never say what anything is; we can only construct descriptions that match whatever data we have and that are sufficiently useful for human use.

Ya’ lost me here, sorry! Space is currently taken to be smooth (except by some theorists exploring unified models), but your point seems to start with the opposite position…