Can neutrinos travel faster than light?

There is a distinction between the accuracy of the GPS data, and the accuracy of the USAGE of that data in an experiment.

You are arguing that the GPS data is already very accurate so it couldn’t be a problem.

His paper is arguing that the way in which the GPS data was used contains an error.

I thought that the GPS satellites just put out a regular signal, and the calculations, including the relativistic corrections, were all done on the receiving end. It doesn’t make sense to me to do the calculations on the satellite itself.

From WebVision, a redoubtable and rich site on the retina and vision: There will be more data in a subsequent message. I lost, but will find, some PDFs on the minimum number of rods needed to register an observer’s sense of an event (5 to 14, according to an article segment below), as well as the temporal resolution of the conduction, which I think is important. There is an article in WebVision on that, but I can barely understand it. :frowning:

What can be gleaned from the data, aside from the fact that they’re cool? An honest question.

Rods peak in density 18° (about 5 mm) out from the center of the fovea, in a ring around the fovea, at 160,000 rods/mm² (Fig. 5).
No rods in the central 200 µm.
Average 80,000–100,000 rods/mm².
Rod acuity peaks at 5.2° (1.5 mm) from the foveal center, where there are 100,000 rods/mm² (Mariani et al., 1984).
Fig. 5. Density plot of rods and cones on the horizontal meridian of the human retina.
There is a peak of the rod photoreceptors in a ring around the fovea at about 4.5 mm or 18 degrees from the foveal pit. I believe that the number and separation of rods is important in the projections of light distribution.
Rods convey the ability to see at night, under conditions of very dim illumination. Animals with high densities of rods tend to be nocturnal, whereas those with mainly cones tend to be diurnal. The nature of dim light is important both to physicists and to biologists. In 1905 Einstein proposed that light propagated only in discrete irreducible packets or quanta. This explained the non-classical features of the ‘photoelectric effect’, a process by which light releases electrons from metal surfaces, described by Heinrich Hertz in 1887. Rods are so sensitive that they actually detect single quanta of light, much as do the most sensitive of physical instruments. In 1942 Selig Hecht argued that human rods must be capable of detecting individual light quanta, because light flashes so dim that only 1 in 100 rods was likely to absorb a quantum were nevertheless reliably seen by careful observers. A century after the original discovery of the photoelectric effect it has become possible to record directly the minute electrical voltages in rods induced by absorption of individual light quanta. An excellent example is shown in the suction electrode recordings of monkey rods by Schneeweis and Schnapf (1995) (Fig. 22). Each dot in the figure below represents delivery of a very dim pulse of light containing only a few quanta. Voltage responses appear to come in three sizes: none, small, and large, representing the detection of 0, 1, or 2 quanta in each flash. The granularity of response to dim light stimuli is evident.

Fig. 22. Photovoltages recorded in monkey rods

Rod sensitivity appears to be bought at a price, however, since rods are much slower to respond to light stimulation than cones. This is one reason why sporting events such as baseball become progressively more difficult as daylight fails. Both electrical recordings and human observations suggest that signals from rods may arrive as much as 1/10 second later than those from cones under lighting conditions where both can be simultaneously activated (MacLeod, 1972).

  1. Can we see a single photon?

The minimum number of photons required to produce a visual effect was first successfully determined by Hecht, Shlaer and Pirenne in a landmark experiment (Hecht et al., 1942). Human subjects were allowed to stay in the dark for 30 minutes to reach optimal visual sensitivity. The stimulus was presented 20 degrees to the left of the point of fixation so that the light would fall on the region of the retina with the highest concentration of rods. The stimulus was a circle of light with a diameter of 10 minutes of arc (1 minute = 1/60th of a degree). The subjects were asked whether they had seen a flash, and the light was gradually reduced in intensity until the subjects could only guess the answer. It was found that between 54 and 148 photons at the cornea were required to elicit a visual experience. After corrections for corneal reflection (4%), ocular media absorption (50%), and photons passing through the retina unabsorbed (80%), only 5 to 14 photons were actually absorbed by the retinal rods. The small number of photons in comparison with the large number of rods involved (about 500) makes it very unlikely that any rod will take up more than one photon. Therefore, one photon must be absorbed by each of 5 to 14 rods in the retina to produce a visual effect.
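The correction chain in that paragraph is simple enough to check yourself. A quick sketch using the percentages quoted above (variable names are mine):

```python
# Fraction of photons arriving at the cornea that end up absorbed by rods:
#   96% pass the cornea (4% is reflected),
#   50% survive absorption in the ocular media,
#   and only 20% of the remainder are absorbed by the retina (80% pass through).
absorbed_fraction = 0.96 * 0.5 * 0.2   # ≈ 0.096, i.e. roughly 1 in 10

low, high = 54, 148                    # photons at the cornea (Hecht et al., 1942)
print(low * absorbed_fraction)         # ≈ 5.2
print(high * absorbed_fraction)        # ≈ 14.2
```

Rounding gives exactly the 5-to-14-photon range quoted in the article.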

In the same publication (Hecht et al., 1942), Hecht and co-workers determined the visual threshold of human vision with the famous “frequency-of-seeing-curves” experiment. The theory is that photon absorptions vary according to a Poisson probability distribution: if a is the average number of photons absorbed per flash, the probability Pn that exactly n will be absorbed is Pn = a^n e^(-a) / n!. Some of these Poisson curves (n from 1 to 9) are shown in Figure 4A. By measuring the frequency of seeing against the logarithm of the brightness experimentally, one can fit the experimental curve with one of the probability distributions in Figure 4A to reveal the value of n, which lies between 5 and 8 (Figure 4B). This agrees well with the 5 to 14 photons absorbed by rods!

Figure 4. Frequency-of-seeing curve experiment. (A) Poisson probability distributions. For any average number of quanta (hν) per flash, the ordinates give the probability that the flash will deliver n or more photons to the retina, depending on the value assumed for n. (B) Relation between the average energy content of a flash of light (in number of hν) and the frequency with which it is seen by three observers. From Figs. 6 & 7 of Hecht et al. (1942).
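Those frequency-of-seeing curves are easy to reproduce: if seeing requires a threshold of n absorbed quanta, the probability of seeing a flash that delivers a quanta on average is the Poisson tail probability P(n or more). A minimal sketch (the function name is mine):

```python
import math

def p_seen(n, a):
    """Probability that a flash with mean quantum count a delivers n or
    more quanta (Poisson tail), i.e. is seen if n is the threshold."""
    return 1.0 - sum(a**k * math.exp(-a) / math.factorial(k) for k in range(n))

# One frequency-of-seeing curve per assumed threshold n, as in Fig. 4A:
for n in (1, 5, 9):
    curve = [(a, round(p_seen(n, a), 3)) for a in (0.5, 1, 2, 4, 8, 16)]
    print(n, curve)
```

Plotted against log brightness, each n gives a curve of a different steepness; matching the shape of the measured curve is what picks out n between 5 and 8.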


Pasta, perhaps this is in your long post upthread, but two basic questions:

  1. Given your calculations of the overall eye-liquid in the world, why would ca. 100 photons appear in one particular cross-section (one human eye)? In these gigantic neutrino-detector tanks, such events are rare to begin with, and surely they are scattered around the pool?

  2. The details of the size and density of the rods compared to the paths of the photons make me wonder: couldn’t a photon simply zip right through the picket fence of rods and manage to miss them all?

One neutrino event produces many photons (indirectly, via a charged-lepton intermediate).

Observation from the peanut gallery:

Wooooowee. This stuff is a definite mind-stretcher. My panting attempt to catch up leaves the following set of arguments bobbing up and down.

The possible sources of error include the precision of the locations of, and distance between, the emission and detection sites themselves. And the actual time that the emission took place, in the frame of reference of the detector. And the difficulty of measuring any time-dilation differential experienced by the clock at the detection site with respect to the clock at the emission site. (Couldn’t we just drive a pair of them back and forth three or four times and check?) And a variable caused by the GPS signal source moving in a third reference frame. (I don’t understand how that continues to matter once the clocks and locations at both ends are established, though.)

So far it seems that the data counters and software have been discounted as the likely source of the sixty nanosecond variance, although I am pretty much taking that one on authority due to absolutely not getting the original premise, or the refutation.

Then there is the data, or absence thereof, from astronomical observations supporting the same type of event in the one case we have where the location and time of emission of neutrinos at a significant distance should have been notable. (This, by the way, seems to me a really good place to start looking for confirmation, although that means doing a lot more gamma-ray-burst surveys, with multiple rapid-response observation sources for precise verification.) And now there seems to be an absence of something we should expect, if the thing we never expected to happen actually was happening. (I am very vague on this one.)

All this makes me rather gloomy. Every single step of this will be expensive, and the news on the economy doesn’t seem supportive for massive investment in theoretical physics and astronomy research.

Oh, and my only question is: how far behind am I, layman-wise, in keeping up with this?

Tris

Here’s a blog describing an effort underway at OPERA to eliminate some possible causes of error. Lots of pictures, so accessible even if you don’t know the physics. He says they should have results in a few weeks.

They did that (except they called them “mobile time transfer devices”, or some such). And had both the Swiss and German standards institutes double-check them.

Great link, thank you very much. Very smart follow up experiment, too.

Couple of weeks, huh? I am hanging on every word.

Tris

That’s certainly a worthwhile experiment, given the implications of all this, but with how well the pulse shapes matched, and for a pulse shape that’s very close to square, I really can’t see how it could be either of the problems that test would rule out. Personally I suspect it’s a systematic error in either the distance or in the clock synchronization, but that (especially the distance one) is very difficult to test for.

If it turns out to be another “measured with yards instead of meters” thing, I’m gonna be really pissed!

:eek:

c is approximately 2.998E8 m/s. But c is really 299,792,458 m/s. The difference is 7,542 m/s.

The relative error is 7542 / c = 2.5E-5. The reported relative “error” is about 2.5E-5.
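The coincidence is easy to check in a few lines. The ~730 km CERN-to-Gran Sasso baseline and the 60 ns early arrival are the figures reported for OPERA; treat them here as round numbers:

```python
c_true = 299_792_458.0   # m/s, exact by definition of the metre
c_rounded = 2.998e8      # the common four-significant-digit approximation

# Relative error introduced by rounding c:
rel_rounding = (c_rounded - c_true) / c_true          # ≈ 2.52e-5

baseline = 730e3         # CERN-to-Gran Sasso baseline, roughly 730 km
early = 60e-9            # reported early arrival, ~60 ns

# Reported anomaly as a relative effect, (v - c)/c ≈ Δt / (L/c):
rel_anomaly = early / (baseline / c_true)             # ≈ 2.46e-5

print(rel_rounding, rel_anomaly)   # both of order 2.5e-5
```

So the rounding error in c and the reported anomaly really are the same order of magnitude, which is what makes the (presumably unfounded) suspicion so eerie.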

Again, I say :eek:

Boy, that would be some egg on their faces…

ETA: I don’t really think they did that…

It seems to me that if there were some sort of natural sorting which reliably favored the earlier edge of the pulse, there would have to be implications about neutrinos having previously unsuspected characteristic interactions when passing through “solid” regions. While that is way less exciting than going faster than light, it would certainly be worth investigating. Does seem unlikely, though. Why, it seems almost as unlikely as . . . uh, never mind.

Tris

Local news had another story about this today.

http://www.edmontonsun.com/2011/11/18/einsteins-theory-may-have-been-wrong and this time they even included a link to the new version of the study here http://arxiv.org/abs/1109.4897v2. Still not independent, but apparently eliminating another possible source of error.

Perhaps I’m about to play an utter fool, but why couldn’t a flaw in the distance calculation explain the anomaly? The anomaly is that
t < x / c
where x is the distance, through rock, from Geneva to Gran Sasso, right? But how is that distance measured? The two endpoints can be located precisely via satellite, and the distance calculated with simple trig, but that assumes space is Euclidean. Instead, I thought space was supposedly a complicated affair with twisty dimensions, perhaps warped by the atomic nuclei along the neutrino’s path from Geneva to Gran Sasso, affecting such a distance.

How is distance x calculated?
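For what it’s worth, the standard approach is to express both GPS-surveyed endpoints in Earth-centered, Earth-fixed (ECEF) Cartesian coordinates and take the straight-line (chord) distance between them; nothing along the path through the rock needs to be measured. A sketch with rough, illustrative coordinates for the two sites (the real survey is far more careful, and the heights here are guesses):

```python
import math

# WGS-84 ellipsoid constants
A = 6378137.0                 # semi-major axis (m)
F = 1 / 298.257223563         # flattening
E2 = F * (2 - F)              # first eccentricity squared

def geodetic_to_ecef(lat_deg, lon_deg, h=0.0):
    """Convert geodetic latitude/longitude/height to ECEF coordinates (m)."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    n = A / math.sqrt(1 - E2 * math.sin(lat) ** 2)  # prime-vertical radius
    x = (n + h) * math.cos(lat) * math.cos(lon)
    y = (n + h) * math.cos(lat) * math.sin(lon)
    z = (n * (1 - E2) + h) * math.sin(lat)
    return (x, y, z)

# Rough coordinates for the two sites (illustrative only, not survey-grade):
cern = geodetic_to_ecef(46.24, 6.05, 430.0)
gran_sasso = geodetic_to_ecef(42.45, 13.58, 960.0)

chord = math.dist(cern, gran_sasso)   # straight-line distance through rock
print(f"chord = {chord / 1e3:.0f} km")  # on the order of 730 km
```

The general-relativistic correction to this Euclidean chord from Earth’s mass is many orders of magnitude smaller than the 18 m that a 60 ns discrepancy corresponds to, which is why the straight-line calculation is considered safe here.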

If that were the case, and we weren’t able to accurately measure the distance, I assume that would affect far more than this experiment and we’d know we aren’t able to accurately measure the distance.

I agree. If we discovered we are unable to measure relatively short distances accurately, that would shake the foundation of… pretty much every scientific discipline there is. Even politics – imagine if national borders aren’t where we thought they were.

Distances are normally measured through air or vacuum. Aren’t distances through rock just deduced with trig? (And of course I assume it’s the rock’s concentration of mass which warps space.) The reported deviation in neutrino velocity (and thus, the distance, in my hypothesis) is extremely tiny; wouldn’t experimental errors dwarf the effect in anything not involving speed-of-light?

I dunno. I just can’t believe this is the only time we’ve ever needed to measure distances this way for either experimental purposes or even existing technology.

Maybe this is a first. I honestly don’t know. It just feels unlikely to me.