If you put it like that, nothing. You detected the neutrinos, later you decide not to generate any. What did you detect then? Who cares, causality isn’t working anyway.
It is interesting, though. Do we have a way of describing what a universe without causality would be like?
I don’t see how they could. The source and detection are 730 km apart, so the neutrinos are passing through the Earth. Any other particles wouldn’t make it.
Actually, Kamiokande-II, one of the three detectors that observed the Supernova 1987A neutrinos, was turned on two years prior; it detected 11 of the supernova neutrinos. The other two, IMB and BNO, which detected 8 and 5 respectively, should also have been running then. IMB was looking for proton decay, not neutrinos. Without a supernova turning up three hours later, maybe those that were running wouldn’t have realized a few extra detections were more than a random occurrence. If the data is available, it would be interesting to re-examine the period three to five years prior to the supernova.
I would have thought the protons were already traveling at nearly c, in the right direction and not very far, so any timing error from detecting the protons at the source, rather than the neutrinos themselves, would be small.
I look at that 730 km distance that they estimate with only a 20 cm error, and the correspondingly long travel time with its small estimated error, and the mind kind of boggles. But I’m not in the physics community. Maybe that sort of accuracy across long distances and over a long time is commonplace.
They could do tests at shorter distances and see if the results correspond. They could use the same methodology to test the speed of other particles which have other means of measurement. But as Pasta mentioned, these projects can be very expensive. They probably did do all they could with the available funding.
I still don’t see how they could test this with any other particle, even at a shorter distance. For their setup, they’re finding a time difference of 60.7 ns, with statistical uncertainties of 6.9 ns and systematic uncertainties of 7.4 ns. I don’t think those uncertainties will change much with distance. If you cut the distance by a factor of 10 to only 73 km, the effect you’re measuring has now fallen within the uncertainties, and I think you’re still too far to use anything but neutrinos.
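To make that concrete, here’s the arithmetic (the 60.7/6.9/7.4 ns figures are from the paper; combining the two uncertainties in quadrature is my own simplification):

[code]
import math

# OPERA's reported numbers (ns)
dt_ns = 60.7       # early arrival over the full 730 km baseline
stat_ns = 6.9      # statistical uncertainty
sys_ns = 7.4       # systematic uncertainty

# Combine the uncertainties in quadrature (my simplification)
sigma_ns = math.sqrt(stat_ns**2 + sys_ns**2)   # ~10.1 ns

# The effect scales with baseline; the timing uncertainties, being mostly
# instrumental, roughly don't. See where the effect sinks into the noise:
for factor in (1, 2, 5, 10):
    effect = dt_ns / factor
    flag = "detectable" if effect > sigma_ns else "lost in the uncertainty"
    print(f"{730 // factor:>4} km: effect {effect:5.1f} ns vs ~{sigma_ns:.1f} ns -> {flag}")
[/code]

At a tenth of the baseline the 6.1 ns effect is well inside the ~10.1 ns combined uncertainty, which is exactly the problem.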
Ok, I have no idea if other measurements are feasible. Just in terms of general methodology I’d ask if there were other ways to validate the measurements. And maybe there aren’t. In that case it’s going to take repeated tests to even get a sense of whether they’ve found something.
Put it this way: I would consider invisible trickster gremlins who enjoy confusing physicists to be a more plausible explanation for these results than that the neutrinos were actually traveling at speeds greater than c. And of course there are a lot of explanations more plausible than gremlins, too. You’d have to rule out all of those, very definitively, before I’d even consider superluminal neutrinos.
So again, assuming this is real, what are the practical applications? Lower latency communication/data transmission? Sending information into the past to prevent cataclysmic events? Will philosophy and physics finally converge?
Really? I mean, why do you consider the possibility of tachyons to be so incomprehensible?
I can understand the position that an extraordinary claim requires an extraordinary level of evidence, but the concept of tachyons is well within the bounds of current theory. And potential solutions to the paradoxes involved have been bandied about for decades.
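For what it’s worth, here is the standard textbook version (nothing OPERA-specific, just the usual mass-shell relation with an imaginary rest mass, m = iμ):

[code]
E^2 = p^2 c^2 - \mu^2 c^4, \qquad
v = \frac{p c^2}{E} > c, \qquad
E = \frac{\mu c^2}{\sqrt{v^2/c^2 - 1}}
[/code]

Note the last relation: a tachyon loses energy as it speeds up, so the kinematics are self-consistent; the paradoxes that get bandied about are causality paradoxes, not contradictions in the math.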
Question. Imagine that superluminal particles are confirmed. From the POV of the superluminal particle, are we travelling backward in time?
It’s not the travel of the protons that introduces the large offsets but rather the details of the proton-beam-specific instrumentation. In a neutrino-to-neutrino version, the instrumentation would be the same on both ends of the trip.
It isn’t necessarily commonplace, but that’s because experiments aren’t commonly 730 km in size. But, the spatial precision on its own isn’t crazy. They are using commodity precision GPS receivers operating in “common-view” mode (i.e., operating with an identical satellite constellation). In fact, the 20 cm comes almost entirely from the subsequent transfer of the surface GPS measurement down to the underground detectors, which involves surveying down a long tunnel and into the lab.
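A toy illustration of why common-view works (all the numbers here are made up for the example; real GPS time transfer has many more terms):

[code]
import random

C_M_PER_NS = 0.299792458   # light travel: metres per nanosecond

# Both labs timestamp the same signal from the same satellite.
# Errors common to both sites (satellite clock, broadcast ephemeris)
# appear identically in both measurements and cancel in the difference.
common_err = random.gauss(0, 30.0)   # ns, shared satellite-side error
site_a_err = random.gauss(0, 1.0)    # ns, CERN-side local error
site_b_err = random.gauss(0, 1.0)    # ns, Gran Sasso-side local error

diff = (common_err + site_a_err) - (common_err + site_b_err)
print(f"shared 30-ns-scale error cancels; residual = {diff:+.2f} ns")

# And that 20 cm geodesy error is small change in time units:
print(f"20 cm corresponds to {0.20 / C_M_PER_NS:.2f} ns")
[/code]

So the 20 cm only costs about 0.7 ns against a 60.7 ns effect; the hard part, as you’d expect, is the tunnel survey, not the GPS.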
I don’t have a problem with the existence of tachyons per se. I have a problem with tachyons which are capable of interaction with normal matter. Once you’ve got those, everything goes out the window.
For that solution, the velocity of the neutrinos would vary over the course of a day, and over the course of a year. They did examine that a little bit in the paper.
I have no idea, though, if that level of variation (or lack of variation) is consistent with the velocity variation implied by the Earth’s motion relative to any absolute frame of reference.
It would be nice if they had also binned the data based on the Earth’s orientation relative to the stars, not just day vs. night. That would be more directly relevant; something like the sketch below.
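A minimal sketch with made-up event times (the sidereal-time formula is the standard USNO approximation; the real analysis would of course use the actual OPERA event records):

[code]
import math

def gmst_hours(jd_ut1):
    """Approximate Greenwich Mean Sidereal Time (USNO formula)."""
    d = jd_ut1 - 2451545.0  # days since J2000.0
    return (18.697374558 + 24.06570982441908 * d) % 24.0

# Hypothetical (julian_date, delta_t_ns) event measurements
events = [(2455500.123, 58.2), (2455500.623, 63.1), (2455501.389, 60.9)]

# Bin the early-arrival times by sidereal phase instead of local
# day/night, which tracks the Earth's orientation relative to the stars.
nbins = 4
bins = [[] for _ in range(nbins)]
for jd, dt in events:
    bins[int(gmst_hours(jd) / 24.0 * nbins)].append(dt)

for i, b in enumerate(bins):
    mean = sum(b) / len(b) if b else float("nan")
    print(f"sidereal bin {i}: n={len(b)}, mean dt = {mean:.1f} ns")
[/code]

An absolute-frame effect should show up as a sidereal-period modulation across those bins even if day vs. night averages out flat.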
Is it just me, or is anyone else imagining a Sheldon-like figure at CERN dreaming of Nobel prizes and scientific immortality when it is just his mates Leonard and Howard turning the electric can-opener on and off?
How about the Scharnhorst effect as a possible explanation? It would essentially amount to photons travelling slower than c in vacuum because of a small vacuum refractive index due to vacuum polarization, which the neutrinos wouldn’t see. So even with a small rest mass, they could conceivably travel faster than photons in vacuum; this would also be in line with the 1987A neutrinos being slower, i.e. closer to the photon speed, due to their lower energy…
A quick calculation based on Scharnhorst’s paper (eq. 10), done on the basis of the gedankenexperiment of making the distance between two Casimir plates smaller and smaller, gives a correction of about 7 * 10[sup]-7[/sup], which is smaller than the violation observed by OPERA (2.45 * 10[sup]-5[/sup]). But I only calculated the effect for two plates roughly the electron’s Compton wavelength apart, which is where Scharnhorst’s approximation breaks down; the ‘true’ vacuum c would be at zero plate separation, where one would expect a higher correction. So there might be something to this, just perhaps.
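For anyone who wants to check the arithmetic: this assumes the leading-order form Δc/c = (11π²/8100) · α² · (λ[sub]C[/sub]/a)[sup]4[/sup], evaluated at plate separation a = λ[sub]C[/sub] (worth checking against eq. 10 in the paper itself):

[code]
import math

alpha = 1 / 137.036   # fine-structure constant

# Leading-order Scharnhorst correction between Casimir plates, evaluated
# at plate separation a equal to the reduced electron Compton wavelength
# (lambda_C / a = 1), where the approximation is already breaking down:
delta = (11 * math.pi**2 / 8100) * alpha**2
opera = 2.45e-5       # OPERA's reported (v - c)/c

print(f"Scharnhorst correction: {delta:.1e}")          # ~7.1e-07
print(f"OPERA excess:           {opera:.1e}")
print(f"shortfall: factor of ~{opera / delta:.0f}")    # ~34
[/code]

So at the Compton-wavelength separation it falls short of the OPERA number by a factor of about 34; the question is whether extrapolating toward zero separation could plausibly make that up.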
If they are, in fact, superluminal, which as I said, I strongly doubt. They’re certainly interacting with normal matter, or we would never have been able to do the experiment.