The most important question is: how will this affect the new NOVA special Brian Greene is doing? Are they going to make annotations like in YouTube videos? Will they change the name of the special to Inelegant Universe?
Methinks the creationists are going to have a field day with all this ‘doubt.’ Might we try to preempt their rabble-rousing now by saying: all physics can be thrown out the window tomorrow and it still wouldn’t prove the existence of God or supernatural phenomena.
I think this kind of speculation (though I am sure facetious) is seriously premature. First you need an independent confirmation of the results. I very much doubt that will happen.
I’ve looked through that, and I can’t tell which values of c should be c[sub]L[/sub] and which should be c[sub]i[/sub]. Are you saying they are all c[sub]i[/sub], and none of them c[sub]L[/sub]?
It’s also not clear whether some constants, e.g. the fine structure constant, would stay the same or change slightly if c[sub]i[/sub] > c[sub]L[/sub]. If c[sub]i[/sub] = c[sub]L[/sub] is implicitly assumed in QED calculations, I’d be wary of using those calculations in support of c[sub]i[/sub] = c[sub]L[/sub].
Why’s that? I didn’t think the general result – that the speed of light c[sub]L[/sub] depends on vacuum polarizability – was controversial at all; if you cut off some of these modes, for instance with Casimir plates, light should propagate faster than in ordinary vacuum due to two-loop QED corrections. (I’ve since found a good overview of this and related results; the general conclusion seems to be that any negative vacuum energy density causes light to propagate faster, while a positive one slows it down. Oh, and here’s a nice general-audience explanation of the basic Scharnhorst effect.)
The question is whether one can reinterpret that in such a way that the ‘true’ speed of light, i.e. c[sub]i[/sub], is the one obtained when all the virtual modes are ‘cut off’, however one might physically interpret such a situation. The more common interpretation seems to be that in a Casimir vacuum, light genuinely travels faster than the invariant speed – i.e. c[sub]L[/sub] > c[sub]i[/sub] – which doesn’t really seem sensible to me; so this reinterpretation, if it holds up, would both eliminate that awkwardness and at least open up the possibility for neutrinos to travel at speeds c[sub]i[/sub] > v > c[sub]L[/sub].
I’m happy to be corrected here, but should one really be able to use those to determine the ‘true value’ of the speed of light? It seems to me that one should always be able to adopt a Lorentzian interpretation of special relativity (as advocated by Bell in ‘How to Teach Special Relativity’) and say that the experiments agree with the idea that nothing moves faster than c only because the probes we use don’t. This is how special relativity is sometimes viewed in condensed matter systems: there is an underlying, unique frame of reference (the rest frame of the system you’re looking at), but an ‘internal’ observer has no way of detecting his motion with respect to this frame. It may even be the case that the speed of light is different in two different directions without this being noticeable from the inside – for anybody who has access only to lattice excitations to measure the speed of light, it will seem uniform in all directions and independent of the reference frame. So from this point of view, if we make measurements using electromagnetic interactions, we should expect them to come out in agreement with the relativistic invariant speed being equal to the propagation speed of electromagnetic waves, even if that were not actually the case.
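Just to illustrate the ‘not noticeable from the inside’ point, here’s a toy calculation in Python. It uses a Reichenbach-style parametrization of the one-way speeds, with κ an anisotropy parameter I’m making up for the example; the point is only that a round-trip, one-clock measurement of the speed of light comes out the same no matter how lopsided the one-way speeds are.

[code]
# Toy check: a round-trip light-speed measurement can't see a one-way anisotropy.
# Parametrize the one-way speeds as c/(1 - kappa) outbound and c/(1 + kappa) inbound
# (kappa = 0 is the usual isotropic convention; kappa here is just an assumed parameter).
c = 299_792_458.0   # m/s, two-way speed of light
L = 1000.0          # m, length of the out-and-back path

for kappa in (0.0, 0.1, 0.5):
    t_out  = L / (c / (1 - kappa))   # = L * (1 - kappa) / c
    t_back = L / (c / (1 + kappa))   # = L * (1 + kappa) / c
    c_measured = 2 * L / (t_out + t_back)
    print(kappa, c_measured)         # comes out as c (up to float rounding) for every kappa
[/code]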
In any case, this is of course a rather fanciful idea, but if the OPERA results should be corroborated and not turn out to be due to an experimental error – which is still the most likely possibility, IMO – this could offer a way to reconcile them with known physics without breaking too many nice things. Of course, apart from the theoretical dodginess, there are some other obstacles to overcome: is it possible to assign the neutrinos a mass such that their propagation speed is consistent with both the OPERA and the 1987A supernova results? I.e., can you find values for c[sub]i[/sub] and m[sub]ν[/sub] such that neutrinos at energies in the tens of MeV (supernova neutrinos) propagate to an extremely good approximation at c[sub]L[/sub], while neutrinos at the energies used in the OPERA experiment travel at the observed speed – all while remaining consistent with the known constraints on neutrino mass and without needing c[sub]i[/sub] to be too far above c[sub]L[/sub] (i.e. within an O(α[sup]2[/sup]) correction, which is the typical size of these polarization effects – incidentally, α[sup]2[/sup] ≈ 5 × 10[sup]-5[/sup], which is just the right range for the observed effect)?
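To make that concrete, here’s the back-of-the-envelope version in Python. Every number in it is an assumption for the sake of argument: an OPERA-sized excess of (v − c[sub]L[/sub])/c[sub]L[/sub] ≈ 2.5 × 10[sup]-5[/sup], ~10 MeV for the supernova neutrinos, ~17 GeV for the CNGS beam, and the usual ultrarelativistic expansion 1 − v/c[sub]i[/sub] ≈ (mc[sub]i[/sub][sup]2[/sup]/E)[sup]2[/sup]/2.

[code]
import math

# Assumed numbers, just for the sake of argument:
delta   = 2.5e-5    # OPERA-sized fractional excess of c_i over c_L
E_sn    = 10e6      # eV, typical SN 1987A neutrino energy
E_opera = 17e9      # eV, rough CNGS beam energy
alpha   = 1 / 137.036

# Mass needed so that a 10 MeV neutrino is slowed (relative to c_i) by the full
# delta, i.e. so it arrives at ~c_L alongside the supernova photons:
m = E_sn * math.sqrt(2 * delta)             # ~7e4 eV, i.e. ~70 keV

# Slowdown that same mass produces at OPERA energies (essentially none,
# so those neutrinos would travel at ~c_i, i.e. the 'observed' speed):
slowdown_opera = 0.5 * (m / E_opera) ** 2   # ~9e-12

print(m, slowdown_opera, alpha ** 2)        # alpha^2 ~ 5.3e-5, same ballpark as delta
[/code]

So the kinematics alone can be tuned, but only with a neutrino mass of order 70 keV, which is hard to square with the known constraints on neutrino masses.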
This seems like a bit of an unlikely tightrope walk…
I’m saying that most of these calculations involve c[sub]i[/sub], although some may also involve c[sub]L[/sub]. The fact that experiment and theory agree suggests that both quantities are well-known. QED is a relativistic quantum field theory, so c[sub]i[/sub] shows up in fundamental, relativistic ways. Notice, for example, that the Dirac equation has c[sub]i[/sub]'s in it, before any particular potential (electromagnetic or otherwise) is introduced. I agree, though, that a quick glance does not reveal which c’s are which.
First of all, the bit quoted has a typo: “vacuum polarization effects which slow down massive particles” should say “vacuum polarization effects which slow down massless particles”. But I think you figured that out.
The basic problem is that once you have two different fundamental speeds (i.e., speeds which don’t depend on any medium or frame-dependent quantities like energy), you’ve broken frame invariance. If, say, you had some reference frame where the speed of light were 298 million m/s, then you could translate to a different frame where it was faster than that in one direction, but slower than that in another. You’d have one and only one reference frame where the speed of light is the same in all directions, and that one reference frame would therefore be favored.
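To put a rough number on that, here’s a quick Python illustration using the standard relativistic velocity-addition formula: a hypothetical second ‘fundamental’ speed of 298 million m/s (the figure from the example above) looks different in different directions once you boost to another frame, whereas c itself doesn’t. The 0.1c boost is just an arbitrary choice for the sketch.

[code]
c = 299_792_458.0     # m/s, invariant speed
w = 298.0e6           # m/s, hypothetical second 'fundamental' speed
v = 0.1 * c           # arbitrary boost between frames

def boost(u, v, c):
    """Speed u in frame S as seen from frame S' moving at +v relative to S."""
    return (u - v) / (1 - u * v / c**2)

print(abs(boost(+w, v, c)))   # ~2.976e8 m/s in the forward direction
print(abs(boost(-w, v, c)))   # ~2.983e8 m/s in the backward direction -> not invariant
print(abs(boost(+c, v, c)))   # exactly c, in either direction
[/code]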
Now, maybe frame invariance really isn’t absolute, and the Universe really does have one or more preferred reference frames. There are a lot of models which are taken seriously by physicists which hypothesize exactly that (most models with compactified extra dimensions, for instance, can get a preferred frame from the compactification). But like I said, I wouldn’t bet on it.
It seems to me if interacting tachyons can exist, they have to have already made all sorts of shit happen. Or perhaps if they ever can exist they will have already been about to make shit happen, I am not sure.
I completely missed that there’s already a paper out analyzing the energy dependence of neutrino velocity in light of OPERA’s results. It seems that there’s almost no way to square the supernova observations with OPERA’s in a way that has the effect grow smoothly with energy; you pretty much need to appeal to something that’s either flavour-dependent (i.e. happens only for certain kinds of neutrinos) or has an onset energy, such as travel through a higher-dimensional bulk.
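To get a rough feel for why a smooth energy dependence is so awkward, here’s a small Python check assuming a simple power law δ(E) ∝ E[sup]n[/sup], an SN 1987A bound of roughly 2 × 10[sup]-9[/sup] at ~10 MeV, and OPERA’s ~2.5 × 10[sup]-5[/sup] at ~17 GeV (all of these figures are approximate and my own choices, not numbers from the paper):

[code]
import math

# Assumed, approximate inputs:
delta_sn,    E_sn    = 2e-9,   10e6    # SN 1987A bound at ~10 MeV
delta_opera, E_opera = 2.5e-5, 17e9    # OPERA's claimed excess at ~17 GeV

# Minimum power-law index n for delta(E) ~ E^n to satisfy both at once:
n_min = math.log(delta_opera / delta_sn) / math.log(E_opera / E_sn)
print(n_min)    # ~1.3
[/code]

So the effect would have to grow at least roughly as E[sup]1.3[/sup], which, as I understand the paper’s point, sits badly with OPERA apparently seeing much the same time advance across its own energy range.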
Something that occurs to me (and which has probably occurred to people a lot smarter than me already, if only to be dismissed for sensible reasons I haven’t thought of): is it possible for CERN to reorient the apparatus that creates the neutrinos to point it at another neutrino detector at a different distance?
I’ve seen talk about building another detector right along the path of the beam to eliminate possible errors in timing of the neutrino creation, but would sending the beam to another already-extant detector (and then repeating the surveying to find the distance to the new site as ridiculously accurately as the distance to the original target site) possibly get a similar check for cheaper?
Aiming at another detector would be a good idea for a number of reasons, one of which would be that it would at least partly address some of the same objections that would be addressed by another detector on the same line. The in-line detector would address those objections better, though, so ideally you’d want to do both. For that matter, you’d also want to do it with different sources, too, including at least one source-detector pair that’s completely independent of this one. I imagine that the folks at Fermilab are already planning such an experiment, especially since Fermilab’s equipment is actually better suited to this problem than CERN’s is.
Well, I’d prefer that there be not just “the independent experiment”, but a variety of experiments, independent or interdependent to various degrees. If Fermilab re-does it with completely separate equipment and techniques, and finds nothing remarkable, that would probably rule out superluminal neutrinos, but it wouldn’t give much insight as to what actually is happening in the European experiment. If, on the other hand, there are a variety of experiments, and (say) all of the ones using CERN’s source all show this effect, but none of the ones using other sources, then that would tell us that the cause of the discrepancy lies somehow with the CERN source.
This generally lost me, but I think I may be picking up some bits and pieces of your post. Please let me know if the following logic matches what you are arguing.
Maxwell’s equations indicated a fixed speed for electromagnetic waves in all reference frames, provided the wave doesn’t interact with anything else (i.e. in a vacuum).
Based on this invariance of speed across reference frames, Einstein developed the theory of special relativity.
A side effect of this was that, in order to preserve causality, this electromagnetic wave speed (i.e. the speed that was invariant) had to be the maximum possible speed at which information can travel.
We experiment and accurately measure the speed of light in a vacuum to be 2.998…x10^8 m/s.
But what if this vacuum wasn’t vacuum enough to actually allow the light to completely follow Maxwell’s equations? What if there were vacuum effects that slowed light down below its theoretical EM top speed? Then we could still have Maxwell, we could still have Einstein, and we could still have a maximum invariant speed that is faster than our measurements of light in a vacuum.
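To put a rough number on that picture, here’s a quick Python sketch using ballpark figures (roughly a 60 ns early arrival over the ~730 km CERN–Gran Sasso baseline; both are approximate values I’m plugging in, not exact experimental numbers):

[code]
c_L      = 299_792_458.0   # m/s, measured speed of light in ordinary vacuum
baseline = 730e3           # m, rough CERN-Gran Sasso distance
early    = 60e-9           # s, rough reported early arrival

t_light = baseline / c_L          # light travel time, ~2.4 ms
delta   = early / t_light         # fractional excess, ~2.5e-5
c_i     = c_L * (1 + delta)       # the 'true' top speed this picture would need
print(delta, c_i)                 # ~2.5e-5, i.e. roughly 7 km/s above c_L
[/code]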
Am I on the right track?
I imagine there could be a middle path in which tachyons exist, they interact with matter, but they can’t convey useful information. Say, for example, that a tachyon transmits the polarization information between two entangled photons.
But think of how many calculations in the recent past would be off if we had used the wrong value for the speed of light. Unless the difference was insignificant until now?
Does a speed of light found to be faster “in the next decimal place” change anything deeper in our understanding than our expectations for how fast particles can move?