Two Quantum Electrodynamics Questions

So, I’m trudging through a book called The Beat of a Different Drum about the life and science of Richard Feynman. While I don’t get all the math, I’m reasonably sure I understand enough to know what the problems of quantum electrodynamics were and what kind of new ideas were used to sort of fix those problems. However, they keep referring to the electron as a “point particle” in the sense (I’m assuming) that it has zero size. And because of this there were all these infinities popping up in the equations for the self-energy of the electron.

Now, my first question is: Why was this an assumption at all? The uncertainty principle was well established by this time, and obviously that disqualifies a particle that is confined to a space of zero size. Also, just intuitively, since an electron has mass, it seems obvious to me that it would have infinite density, and I don’t know whether that was a problem or not. It just seems (to ignorant me) that a lot of the difficulties could have been solved just by assuming that electrons had some finite size.

My second question is about “renormalization”. My understanding is that the equations were flawed in some way, and that to make the answers correspond to experimental reality, various tricks were used to “fix” the equations. This question comes in two parts, I guess.

  1. Isn’t this pretty arbitrary? I mean if I use F=ma and I get the wrong answer, I can’t just say, “well, in this instance, force is proportional to velocity instead.”
  2. That said, does renormalization correspond to something physical? Can we say, “By subtracting this term we are in essence saying such-and-such field can’t interact with this one”? Or something to that effect?

Sorry, by the way, I seem to write very long posts when I’m interested in something.

You aren’t the first to ask about renormalization. But it does correspond to mathematical reality:

From Scientific American, August 2006
The Geometer of Particle Physics, by Alexander Hellemans

The quote from Scientific American can give the impression that 't Hooft and Veltman were the developers of renormalization. Although I don’t understand QED, my impression is that the first work on renormalization was done by Feynman, Tomonaga, Schwinger, and, I’m sure, others. This work would have started in the late 1940’s IIRC.

In particle physics a “point particle” is one with no internal structure. This distinguishes the electron (a fundamental particle in the Standard Model and according to the current experimental evidence) from particles like the proton and pion, which can be thought of as composite particles made of quarks and gluons and held together by the strong force.

Properties of particles can be measured by scattering: that is, measuring the distribution of particles resulting from collisions between some probe particle and the particles of interest. If you imagine that the two “colliding” particles are simple point particles (say, for electron-muon scattering) you get a relatively simple result. If one of the particles has internal constituents, you get a more complicated result, from which you can derive “form factors” that can be used to determine properties of these constituent particles.

The distance scale probed by a particle in a scattering process is basically proportional to the inverse of its energy; by using a higher-energy scattering beam you learn more about the short-distance structure of the particle. (The “form factors” all tend to look the same at low energies.) So particles which are empirically classified as “point particles” may be reclassified later if higher-energy scattering experiments find structure where the earlier experiments saw none.
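To put a rough number on that energy-to-distance tradeoff, here’s a quick back-of-the-envelope sketch in Python (my own figures, purely illustrative; the only physics input is the standard value ħc ≈ 197 MeV·fm):

[code]
# Rough rule of thumb: the distance scale a probe can resolve is about hbar*c / E.
HBAR_C_MEV_FM = 197.327  # hbar * c in MeV * femtometers

def probe_distance_fm(energy_mev):
    """Approximate distance scale (in fm) resolved by a probe of the given energy (in MeV)."""
    return HBAR_C_MEV_FM / energy_mev

for e in (1.0, 1e3, 1e6):  # 1 MeV, 1 GeV, 1 TeV
    print(f"{e:>12.0f} MeV  ->  roughly {probe_distance_fm(e):.2e} fm")
[/code]

A 1 TeV beam probes distances about a million times shorter than a 1 MeV one, which is why “no structure seen so far” is always a statement about the best energy reached to date.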

As for renormalization: It’s not really arbitrary, and it does really have justification, though it’s often presented as somewhat magical. Historically it was some time before the heuristic tricks being used were put on a sound theoretical footing, so this is a somewhat reasonable presentation in a biography of Feynman.

The basic problem that renormalization tries to solve is that many of the quantities you try to predict using a naive quantum field theory end up being divergent integrals. (That they actually become infinite, rather than just finite but wrong, is probably fortunate. It’s easier to start questioning some of your fundamental assumptions when the predicted result is infinite than if it’s just finite-but-wrong.) In QED, the problem is sometimes called an “ultraviolet divergence”, in analogy to the blackbody-radiation problem. In an ultraviolet divergence, the integral of what should be a finite, measurable quantity diverges at the high-energy limit.

The basis for the resolution of such problems is hinted at above. Remember that higher energies correspond to shorter distances. So writing an integral that extends all the way to infinite energy is rather presumptuous: we’re assuming that our integrand is exact, all the way down to zero distance. Empirically we have no basis for this assumption, so quantities that we tend to think of as constants may start changing. Once you start expressing everything more carefully in terms of what you can actually measure, the infinities cancel themselves out.
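Here’s a toy illustration of that last point in Python (my own cartoon, not actual QED): a logarithmically divergent integral regulated by an energy cutoff. Each regulated piece blows up as the cutoff is removed, but the difference between the same quantity at two measurable scales doesn’t care about the cutoff at all.

[code]
import math

# Toy "ultraviolet divergence": the integral of dk/k from a scale m up to a
# cutoff Lambda is log(Lambda/m), which diverges as the cutoff is removed.
# But the difference between two such quantities is cutoff-independent.
def divergent_piece(m, cutoff):
    return math.log(cutoff / m)  # integral of dk/k from m to cutoff

for cutoff in (1e3, 1e6, 1e12):
    a = divergent_piece(0.5, cutoff)  # quantity at scale m = 0.5
    b = divergent_piece(2.0, cutoff)  # same quantity at scale m = 2.0
    print(f"cutoff = {cutoff:.0e}: pieces ~ {a:.1f} and {b:.1f}, difference = {a - b:.6f}")

# Each piece grows without bound, but the difference stays ln(2.0/0.5) = ln 4.
[/code]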

The mathematicians in the audience are still waiting.

Could you address the Scientific American article? What about the “mathematically rigorous underpinning” cited there do you find not convincing?

I can’t address the article because my complaints aren’t with what’s in there. Connes does great work.

However the article isn’t a work of mathematics or of physics. I know the mathematics and the physics behind the ideas. They’re great ideas, but they still leave a lot of weird things lying around, like analytic continuations picking a branch essentially by intuition, or some really hideous “Euler-style” sums like 1 - 1 + 1 - 1 + … = 1/2.

Basically, NCG adds to the Standard Model, which brings along all the mathematical insanities of the Model as it stands. I don’t deny that it’s managed to make fantastically accurate predictions, but so did the calculus of Newton and Leibniz for hundreds of years before being put on a properly stable mathematical bedrock by Weierstrass and others.

Probably one of the renormalizations you’ve seen is the renormalization of the mass of the electron. In classical electromagnetism, if you have a lump of charge, the electric field around it will have some amount of energy, which depends on the amount of charge and the size of the lump. The exact formula will depend on how the charge is arranged in the lump, but it goes something like q[sup]2[/sup]/r, where q is the total charge and r is the radius of the lump. This means that, if you have a finite amount of charge in a region of zero size, the energy in the electric field will be infinite. But any energy in a particle’s electric field must be part of the mass of the particle, which suggests that electrons would have infinite mass, which they clearly don’t. The solution in QED is renormalization, where the total mass consists of a “bare mass” (the mass the particle would have if it weren’t for its charge) and the electrostatic mass. Effectively, the bare mass must not only be infinite, but negative, such that it cancels out almost all of the infinite positive electrostatic mass.

But, of course, we don’t know that electrons have truly zero size. It’s very difficult in physics to prove that something is truly exactly zero. Usually, all you can do is put very small upper bounds on things. So it’s possible that the electrostatic mass, and therefore also the bare mass, is finite (though perhaps very large). But given the bounds we have for the size of the electron (effectively, we’ve seen things get that close to the electron without going “inside” it), we can put a lower bound on the electrostatic mass… Which is still much higher than the observed mass. So even if the bare mass is finite, you can’t get around the fact that it’s still negative.
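To put some rough numbers behind that (my own estimate, done in Python; the 10[sup]-18[/sup] m figure below is just an illustrative stand-in for “smaller than anything yet resolved”, not a precise experimental bound):

[code]
# Classical electrostatic self-energy ~ k * e^2 / r, compared with the
# electron's observed rest energy of 0.511 MeV.
K = 8.988e9            # Coulomb constant, N*m^2/C^2
E_CHARGE = 1.602e-19   # elementary charge, C
EV_PER_JOULE = 1.0 / 1.602e-19

def self_energy_mev(radius_m):
    """Field energy (in MeV) of a charge e confined to roughly the given radius."""
    return K * E_CHARGE**2 / radius_m * EV_PER_JOULE / 1e6

for r in (1e-15, 1e-18):  # roughly proton-sized, and a much smaller illustrative bound
    print(f"r = {r:.0e} m  ->  self-energy ~ {self_energy_mev(r):.3g} MeV "
          f"(observed rest energy: 0.511 MeV)")
[/code]

Even at a proton-sized radius the field energy already exceeds the electron’s whole rest energy, and it only gets worse as the radius shrinks, which is where the negative bare mass comes from.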

You also asked about infinite density. According to classical (i.e., non-quantum) general relativity, infinite density would be a problem, since the particle would become a black hole before it got to that point. But first of all, one wouldn’t expect classical GR to have any relevance to an electron; you’d need a (not yet developed) theory of quantum gravity in such a situation. And secondly, while the bounds we have on electron size are tight enough that there’s definitely some weirdness with the electric field, they’re not nearly tight enough to be sure that there’d have to be any gravitational weirdness.
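For what it’s worth, a crude Python estimate (my own numbers) shows just how far away any gravitational weirdness is:

[code]
# Schwarzschild radius for the electron's mass, versus the distance scales
# experiments have actually probed.
G = 6.674e-11              # gravitational constant, m^3 kg^-1 s^-2
C_LIGHT = 2.998e8          # speed of light, m/s
ELECTRON_MASS = 9.109e-31  # kg

r_schwarzschild = 2 * G * ELECTRON_MASS / C_LIGHT**2
print(f"Schwarzschild radius for the electron's mass: ~{r_schwarzschild:.1e} m")
# Comes out around 1e-57 m, vastly smaller than any distance scale probed so far,
# so nothing yet forces gravity into the picture.
[/code]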

I’m sorry I haven’t been back to my own thread in a while; I’ve been busy. Thanks for the responses, these things are getting clearer, but I suppose there is a reason physicists go through all those years of school; if you could figure it all out from a messageboard, what would MIT do to make money?

So, the electron point-mass thing is understandable, I just was getting the wrong idea from the phrase “point-particle”. But does anybody have any examples of the actual math involved in renormalization? I won’t understand all of it, but I could get a gist by seeing the math involved. I mean, I’m in a differential equations class right now; so I just might understand some of the concepts without understanding how to solve the equations.
I gotta go now though.

P.S. I don’t know if I should do this, or if I should open a new thread, but does anybody know of any sites that could help me figure out Dirac’s bra-ket notation? I’ve been curious but the only thing I could find was a convoluted wikipedia article, and about a thousand other sites that just cut and pasted the wikipedia entry.

Basically it’s all about picking constants of integration. Here’s an example that may or may not have anything to do with a real physics model:

Consider f(x) = 1/x, and take its integral from 0 to 1. Technically, this diverges, but let’s just consider first taking the antiderivative: ln(x) + C

Now, the rough idea is that if we pick an infinite value for C (no, this doesn’t make sense) we can make the integral have a finite value. Basically, we say that C is whatever we have to add to the infinite integral to get the finite value we observe in nature. Whenever we see the same C show up elsewhere in the model, we use the same infinite value. Somehow this sort of thing actually gives answers that accurately predict the results of experiments. Mathematically it’s nuts, but physically it works out right.
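Here’s that idea as a little Python sketch (my own toy, with a made-up “measured” value): cut the integral off at a small epsilon so it’s finite, and let C soak up the part that blows up as epsilon shrinks.

[code]
import math

MEASURED = 3.7  # pretend experiment tells us the answer is 3.7

def regulated_integral(eps):
    return -math.log(eps)  # integral of 1/x from eps to 1

def counterterm(eps):
    # C is defined as whatever we must add to the regulated integral
    # to reproduce the measured value.
    return MEASURED - regulated_integral(eps)

for eps in (1e-3, 1e-9, 1e-15):
    total = regulated_integral(eps) + counterterm(eps)
    print(f"eps = {eps:.0e}: integral = {regulated_integral(eps):7.2f}, "
          f"C = {counterterm(eps):8.2f}, integral + C = {total:.2f}")

# The integral and C each run off to +/- infinity as eps -> 0,
# but their sum stays pinned at the measured value.
[/code]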

It must be, because now I’m even more confused. How do you have a constant of integration with a definite integral? I mean, the way I’m used to doing integrals, if you have limits the constant goes away. I also don’t understand how you can pick a finite constant that cancels the infinity.

You’re used to limits of integration where the integrand doesn’t have a pole. And as I said, you don’t pick a finite constant. C is infinite, but just infinite enough to cancel the infinity from the integral. The thing is that the same C will show up in other calculations you do later, and you use the infinite value you got from your “test case”.

Basically, we’ve got a bunch of quantities we can experimentally measure. For instance, the charge on the electron. Now what we measure is not really the electron’s charge, but the charge “shaded” by various interactions all around it. For instance, while we’re measuring, a virtual particle-antiparticle pair could zip into existence, with the positively charged partner closer to the electron, and then zip back out. That gives us a little tweak to the observed charge.

So we add up all the possible tweaks weighted by their probabilities. Effectively we end up doing an integral, and it turns out that this integral diverges. So whatever finite value we pick for the “bare” electron charge, we’ll get an infinite value for the predicted observed charge.

“But who said we needed a finite bare electron charge?” ask the physicists. We’ll assume the bare charge is infinite too, just infinite enough to cancel out that integral and give us the value we observe in nature, since that’s what the integral is supposed to represent. The mathematicians say, “we’re not scientists, but isn’t it a little backwards to define the result of your calculations to be what’s observed rather than making a real prediction?” and are promptly bound and gagged for impertinence.

The physicists then take this infinite bare electron charge and stick it in wherever they need a bare electron charge elsewhere, and it somehow manages to give finite answers in all those calculations as well, and remarkably accurate ones. The mathematicians are left indignant that the whole thing is so unrigorous, but have some small solace in the fact that the physicists had no used gym socks among them to use for gags.
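The whole procedure can be cartooned in a few lines of Python (again my own toy model, not real QED): fit the bare charge from one measurement, then reuse that same cutoff-dependent bare charge in a different calculation and watch the prediction come out finite.

[code]
import math

ALPHA = 0.1           # made-up coupling strength
MEASURED_AT_1 = 0.30  # pretend measurement of the charge at scale 1.0

def correction(scale, cutoff):
    # Stand-in for the divergent "sum of tweaks": diverges as cutoff -> infinity.
    return ALPHA * math.log(cutoff / scale)

for cutoff in (1e4, 1e8, 1e16):
    bare = MEASURED_AT_1 - correction(1.0, cutoff)      # fix the bare charge from experiment
    prediction_at_10 = bare + correction(10.0, cutoff)  # reuse it in another calculation
    print(f"cutoff = {cutoff:.0e}: bare = {bare:8.3f}, prediction at scale 10 = {prediction_at_10:.6f}")

# The bare charge wanders off to -infinity with the cutoff,
# but the prediction settles down to a definite, cutoff-independent number.
[/code]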

I don’t understand the physics enough to know if what I am about to say is relevant, but I can give a mathematical example of infinities cancelling. To a mathematician it is all nonsense, of course, but it appears to work.

We know that, say, the integral from a to b of 1/x is ln(b) - ln(a) when a and b are both positive. If a and b are both negative, then the integral is ln(-b) - ln(-a). If you look up the function 1/x in an old-fashioned table of integrals you will probably find the formula ln|x| + C. That formula is correct only when a and b are both non-zero and have the same sign, in which case the integral is ln|b| - ln|a|.

Now what happens if ab < 0? That is, a and b are both non-zero and of opposite sign. For definiteness, let us see what meaning, if any, we can assign to the integral from -1 to 1. A simple graph will show that what you have is two congruent regions of infinite extent, one above the x-axis and one below. They look like they should cancel. And in fact if you choose some small epsilon and calculate the integral from -1 to -epsilon and the one from epsilon to 1 and add them, you always get 0, no matter how tiny epsilon is. Just like ln|1| - ln|-1| = 0. So can you conclude that the integral is 0? Well, no and yes. The correct answer is no, but if you are in a speculative mood, you could try yes and see where it leads. This is a simple example of how infinities could be cancelled.
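A quick numerical companion to that (my own illustration in Python): cut a symmetric hole around the pole and the two infinite areas cancel exactly; cut an asymmetric hole and you can get any answer you like, which is exactly why a mathematician refuses to call the integral 0.

[code]
import math

def integral_excluding(left_eps, right_eps):
    """Integral of 1/x over [-1, -left_eps] plus over [right_eps, 1]."""
    return math.log(left_eps) - math.log(right_eps)  # = ln(left_eps / right_eps)

for eps in (1e-2, 1e-8, 1e-15):
    symmetric = integral_excluding(eps, eps)
    asymmetric = integral_excluding(2 * eps, eps)
    print(f"eps = {eps:.0e}: symmetric hole -> {symmetric:+.3f}, "
          f"asymmetric hole (2*eps vs eps) -> {asymmetric:+.3f}")

# Symmetric cutoffs always give 0; asymmetric ones give ln 2 (or whatever you arrange).
[/code]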

This is the spirit in which Euler concluded that
1 - 1 + 1 - 1 + … = 1/2 (at least it Cesàro-sums to 1/2) and that 1 + 2 + 4 + 8 + … = -1 (which is valid in the 2-adic norm, characterized by considering powers of 2 to be “small”). He also summed 0! + 1! + 2! + 3! + … and got a finite sum (I have forgotten what it was; something involving pi or e, no doubt).
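Both of those can be checked numerically in a few lines of Python (my own quick demonstration): Cesàro summation averages the partial sums of 1 - 1 + 1 - 1 + …, and in the 2-adic sense the partial sums of 1 + 2 + 4 + 8 + … really do close in on -1.

[code]
def cesaro_average(n_terms):
    """Average of the first n partial sums of 1 - 1 + 1 - 1 + ..."""
    partial, total = 0, 0
    for n in range(n_terms):
        partial += (-1) ** n
        total += partial
    return total / n_terms

def two_adic_norm(k):
    """|k|_2 = 2**(-v), where 2**v is the largest power of 2 dividing k."""
    if k == 0:
        return 0.0
    v = 0
    while k % 2 == 0:
        k //= 2
        v += 1
    return 2.0 ** -v

for n in (5, 20, 60):
    partial = sum(2 ** j for j in range(n))  # 1 + 2 + 4 + ... + 2**(n-1) = 2**n - 1
    print(f"n = {n:2d}: Cesaro average ~ {cesaro_average(n):.3f}, "
          f"2-adic distance from the partial sum to -1 = {two_adic_norm(partial + 1):.2e}")
[/code]

The averages drift toward 1/2, and the 2-adic distance to -1 shrinks as fast as you add terms.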

As I said, I have no idea if this has anything to do with the physics, but it does involve cancelling infinities. Oh yes, then there are asymptotic series, which diverge but whose first few terms give the right answers. I taught them once; it all seemed like mumbo-jumbo, I never understood it at all, and I cannot now recall anything about them.

Ah! That reminds me of a simpler variant that I’d forgotten to mention.

So, consider integrating 1/x from -1 to 1. We take an antiderivative ln|x| + C, right?

But where does the C come from? It’s the fact that we can shift the antiderivative by a constant and still get the same derivative. When we look closely, though, ln|x| has two parts, and we could shift them independently. That is, the most general antiderivative is ln|x| + C[sub]1[/sub] + C[sub]2[/sub]sign(x), where sign(x) is 1 when x is positive and -1 when x is negative. We shift the right part of ln|x| up by C[sub]1[/sub]+C[sub]2[/sub], and the left by C[sub]1[/sub]-C[sub]2[/sub]. By choosing those two constants right we can get out whatever finite answer we want to match to an experiment, and then use those choices to make predictions about other experiments.

Of course, this involves invoking the fundamental theorem of calculus (integral is difference of antiderivative evaluations) where the interval in question doesn’t satisfy the hypothesis of the theorem, but that sort of petty niggling never bothered a real physicist.
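In Python, that two-constant trick looks something like this (my own sketch, with a made-up “measured” value): C[sub]1[/sub] drops out of the endpoint difference, C[sub]2[/sub] does not, so the “integral” across the pole can be made equal to anything.

[code]
import math

def sign(x):
    return 1.0 if x > 0 else -1.0

def F(x, c1, c2):
    """Generalized antiderivative of 1/x on the punctured line: ln|x| + c1 + c2*sign(x)."""
    return math.log(abs(x)) + c1 + c2 * sign(x)

desired = 42.0      # pretend this is what experiment demands
c2 = desired / 2.0  # blindly applying F(1) - F(-1) gives 2*c2

print(F(1.0, c1=7.0, c2=c2) - F(-1.0, c1=7.0, c2=c2))  # -> 42.0, no matter what c1 is
[/code]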

Actually, in 2-adics wouldn’t you get 1? You get -1 by noting that

1 + 2 + 4 + 8 + … =
1 + 2 + 2[sup]2[/sup] + 2[sup]3[/sup] + … =
1 + x + x[sup]2[/sup] + x[sup]3[/sup] + … (evaluated at x = 2) =
(1 - x)[sup]-1[/sup] (an identity valid only within the series’ radius of convergence, which is less than 2) =
(1 - 2)[sup]-1[/sup] = -1

So we treat the formal series as a power series which doesn’t converge at 2, take the formula it converges to where it does converge as “what we were really meaning”, and evaluate that formula at 2. QFT does this sort of lunacy all the time.
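A tiny Python illustration of that “evaluate the formula where the series doesn’t converge” move (my own sketch):

[code]
def partial_sum(x, n_terms):
    """Partial sums of the geometric series 1 + x + x^2 + ..."""
    return sum(x ** k for k in range(n_terms))

def continued_formula(x):
    """The closed form the series converges to where it does converge."""
    return 1.0 / (1.0 - x)

for x in (0.5, 2.0):
    sums = [partial_sum(x, n) for n in (5, 10, 20)]
    print(f"x = {x}: partial sums {sums}, formula gives {continued_formula(x)}")

# At x = 0.5 the partial sums approach the formula's value, 2.
# At x = 2 the partial sums blow up, but the formula still returns -1,
# and that -1 is what gets taken as the "meaning" of 1 + 2 + 4 + 8 + ...
[/code]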