In the LIGO experiment by which gravitational waves were confirmed, they measured the length of each 2 1/2 mile leg with a precision of 1/10,000 of the size of a proton. I could understand measuring to a small percentage of one light wavelength, but this? How?
The instrument is set up so that the laser makes approximately 75 trips back and forth before being measured. Each leg is 4 km, so the total path is approximately 600 km. LIGO’s laser has a wavelength of 1064 nm, so to reach the desired strain sensitivity of 10[sup]-22[/sup], it needs to detect path-length variations of ~10[sup]-10[/sup] of a wavelength. The laser light is interfered with itself to compare the results of the two arms. The laser input power is 200 watts, which means that variations on the order of 20 nanowatts need to be detected. Although small, this is by no means an unmeasurable amount of light.
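The arithmetic above can be checked in a few lines. All inputs are the figures from this post (75 round trips, 4 km arms, 1064 nm, strain 10[sup]-22[/sup], 200 W), and the final step uses the same crude linear scaling from wavelength fraction to output power:

```python
# Back-of-the-envelope check of the numbers in the post above.
# All values are the post's assumptions, not official LIGO specs.
arm_length_m = 4_000          # one arm, 4 km
round_trips = 75              # light bounces ~75 times before readout
wavelength_m = 1064e-9        # Nd:YAG laser, 1064 nm
strain = 1e-22                # target strain sensitivity
power_in_w = 200.0            # laser input power (post's figure)

path_m = 2 * arm_length_m * round_trips   # total optical path, ~600 km
delta_path_m = strain * path_m            # length change to be detected
fraction_of_wavelength = delta_path_m / wavelength_m
delta_power_w = power_in_w * fraction_of_wavelength  # crude linear scaling

print(f"total path: {path_m / 1000:.0f} km")
print(f"length change: {delta_path_m:.1e} m")
print(f"fraction of a wavelength: {fraction_of_wavelength:.1e}")
print(f"power variation: {delta_power_w * 1e9:.0f} nW")
```

The exact numbers come out to ~6×10[sup]-11[/sup] of a wavelength and ~10 nW, the same order of magnitude as the rounded figures above.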
The real difficulty is in eliminating noise sources. The whole thing needs to be in a hard vacuum, since even the smallest amount of air would distort the beam. Thermal fluctuations need to be kept to a minimum. Vibrations from the ground can’t be transmitted to the mirror system, so they need a sophisticated suspension system. The laser itself needs to be incredibly stable. And even at that, there is a huge amount of noise, so they need to correlate the results from two geographically distinct sites so as to eliminate any local noise sources.
(I’m no expert on LIGO and made some fairly crude assumptions above, so I’d certainly appreciate corrections)
…And even with the two-site correlation, there’s still a huge amount of noise, so they have to integrate the data stream against templates to pick out the signal.
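That template trick can be sketched in a few lines: slide a known waveform along a noisy data stream and look for the offset where the correlation spikes. The chirp shape, noise level, and offset below are made-up illustration values, not anything from LIGO's actual pipeline:

```python
import numpy as np

# Toy matched-filter search: the signal is below the noise sample by
# sample, but correlating against the full template pulls it out.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 400)
template = np.sin(2 * np.pi * (5 + 15 * t) * t) * t   # toy "chirp", not a real waveform
data = rng.normal(0.0, 1.0, 4000)                     # noise RMS well above the signal's
true_offset = 1500
data[true_offset:true_offset + len(template)] += template  # bury the signal in the noise

# Correlate the template against every offset and pick the strongest match.
scores = np.array([template @ data[i:i + len(template)]
                   for i in range(len(data) - len(template))])
best = int(np.argmax(scores))
print(best)  # lands at, or within a sample or two of, 1500
```

A real search does this against thousands of templates at once and in the frequency domain, but the principle is the same.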
Oh, and don’t forget that the mirrors have to be essentially flawless, since any flaw in the mirror would be another noise source. They also have to be heavy enough that the light pressure from the beams themselves isn’t an overwhelming noise source. You end up with 40-kg fused-silica mirrors.
Indeed. It reminds me a bit of GPS. The signal from GPS satellites is so weak on the ground that it is almost completely buried in noise. To send one bit of information, they actually send a 1023-bit code. The receiver matches the incoming signal against one of the several possible 1023-bit patterns, and registers a match if the correlation is strong enough. Although any one bit has only barely over a 50% probability of matching, the probabilities are multiplicative and generate a high confidence over the full signal.
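The correlation trick described here is easy to demo. Below is a toy sketch under made-up assumptions (four candidate codes, a 60% per-chip accuracy so this short run is statistically safe; real GPS works even closer to the coin-flip limit, integrating over longer spans):

```python
import random

# Toy version of the GPS trick: each received chip is only modestly more
# likely than not to match, but correlating over all 1023 chips picks out
# the transmitted code with high confidence.
random.seed(1)
N = 1023
codes = [[random.choice((-1, 1)) for _ in range(N)] for _ in range(4)]
sent_index = 2
# Channel flips each chip with probability 0.4 -> 60% per-chip accuracy.
received = [c if random.random() > 0.4 else -c for c in codes[sent_index]]

# Correlate the received chips against each candidate code.
scores = [sum(r * c for r, c in zip(received, code)) for code in codes]
print(scores.index(max(scores)))
```

The correct code's correlation grows linearly with code length while the wrong codes' correlations only random-walk, which is why the probabilities multiply up to high confidence.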
It seems to be the same basic principle here, except that presumably, they generate their set of templates based on simulations of expected astronomical phenomena, allowing for variations in the mass of the inputs. They are likely to be most interested in black hole mergers of a few tens of solar masses, and can generate a set of templates within that range of possibilities.
Yes, but also varying spins and spin orientations, and orbital parameters, in addition to the masses. And there are also templates for mergers of neutron stars and white dwarfs.
They run both template-based and template-free searches. GW150914 shows up clearly in both styles of search.
Huh, that’s new, then. As of when I was still tangentially connected to gravitational wave research (as in, I wasn’t doing it myself, but attending seminars every week by those who were), the only template-free techniques available would only catch something ludicrously loud.
Each passing year, I find this thought coming to mind more and more often…
I (somewhat) followed the processing of the signal available here. It’s heavily “photoshopped” but wholly legit. Still amazing.
Thank you for the responses.
So nobody knows how they measured 2-1/2 miles to within 1/10,000 of a proton?
They don’t measure the total distance, they measure the change in distance.
Read up on interferometry, the “I” in LIGO. For light, you typically split a wave in two, send the halves off in different directions, and bounce them back. When the waves re-merge, you get an interference pattern. A tiny change in distance for one of them results in a shift in the interference pattern. By counting how many lines the pattern shifts, you get a key variable for calculating the distance shifted. The longer you make the distances traveled, the tinier a shift in distance you can measure.
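The fringe arithmetic in that description can be sketched directly. This is a toy two-beam model with an assumed ideal 50/50 split and lossless mirrors, not a model of LIGO:

```python
import math

# Two-beam interference: the recombined intensity depends on the
# path-length difference between the arms.
wavelength = 1064e-9  # metres (the 1064 nm laser mentioned upthread)

def output_intensity(path_difference, i_in=1.0):
    # The two returning beams recombine; intensity follows cos^2 of
    # half the phase difference between them.
    phase = 2 * math.pi * path_difference / wavelength
    return i_in * math.cos(phase / 2) ** 2

# Moving one mirror by a quarter wavelength adds half a wavelength of
# round-trip path, taking the output from a bright fringe to a dark one.
print(output_intensity(0.0))             # bright fringe: 1.0
print(output_intensity(wavelength / 2))  # dark fringe: ~0.0
```

Sub-wavelength shifts land between those extremes, which is what lets the instrument resolve far less than one full fringe.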
The basic concept isn’t rocket science. I did interferometry measurements in a Physics undergrad lab in college. OTOH, LIGO required many tricks to measure a tiny effect, e.g., to get long distances they had to bounce the light waves back and forth a lot. But the basic principle is easy to understand for a 2nd year Physics student.
Maybe they had two black holes that both went up to 11 solar masses.
Amid my rubbernecking amazement at the entire event, it strikes me that such a profound moment in experimental physics was determined by the same physics as the Michelson-Morley experiment and the goodbye kiss to the ether.
So that would mean the two legs don’t have to be the exact same distance, just some multiple of 1064 nm?
I think this would be correct if the laser were perfectly coherent and stable. But in reality, they don’t behave perfectly. And if the 2 legs of the interferometer were different lengths, any variation in the laser output will manifest as noise (or fake signal) - you would be comparing light waves emitted at slightly different times, so they won’t cancel out perfectly. So, the more equal you can make the lengths, the lower the noise. I don’t know how close is close enough for LIGO.
Exactly, but in the case of LIGO, you aren’t counting how many lines the pattern shifts - you’re looking for a change far smaller than one line (one wavelength). The interferometer is aligned so that the two beams cancel out almost perfectly. As Dr. Strangelove explained above, you inject a 200-watt beam into this interferometer, and the alignment is so good that an almost undetectable amount of light comes out. When the gravitational wave came through, this “almost undetectable” signal went up by about 20 nanowatts, because the two waves shifted by a tiny fraction of a wavelength.
I could be wrong, but I am fairly sure it doesn’t particularly matter how precisely equal the two legs are. There are probably some secondary effects where making them as close to equal as possible minimizes noise and/or optimizes the signal strength.
But, again, the legs themselves probably don’t have to be made to some absurdly high precision level of equalness.
For more run-of-the-mill interferometry, an inches-to-feet difference probably doesn’t matter.
For “run of the mill interferometry,” I think the length difference just needs to be short compared to the coherence length of the laser.
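To get a sense of scale for that coherence-length condition, here is a quick calculation assuming a round-number 1 kHz linewidth for a stabilized Nd:YAG laser (an illustrative figure, not LIGO's spec; their pre-stabilized laser is narrower still):

```python
import math

# Coherence length from laser linewidth: L_c ~ c / (pi * delta_nu) for a
# Lorentzian lineshape (the exact prefactor depends on the lineshape and
# the definition used). The 1 kHz linewidth is an assumed round number.
c = 299_792_458.0        # speed of light, m/s
linewidth_hz = 1e3       # assumed linewidth for a stabilized Nd:YAG laser

coherence_length_m = c / (math.pi * linewidth_hz)
print(f"coherence length: {coherence_length_m / 1000:.0f} km")  # ~95 km
```

At ~95 km, an arm-length mismatch of inches, feet, or even metres is utterly negligible by this criterion.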
I’m having a hard time finding any official documents on whether the two arms are set to the same number of wavelengths or not.
On one hand, scr4 is right that if the coherence length is long compared to the arm length difference, it doesn’t matter too much. And the laser used by LIGO has a very long coherence indeed.
On the other hand, though, it would be relatively trivial for them to lock in a specific length. They already have an active, servo-controlled suspension system. They could use time-of-flight or multi-frequency coherence to dial in a specific number of wavelengths.
This guy on Stack Exchange seems to think they don’t bother, since there are larger sources of measurement error around. Googling his name, it seems that he does work for LIGO (there are papers under his name).
I had the same thought in a different thread. It’s rather astonishing how such a simple instrument can tell us so much about the universe.
In other news, Special Relativity has now been proven to one part in 10[sup]21[/sup].