# Measuring gravitational waves

With the awarding of the Nobel Prize to the three LIGO scientists for spotting gravitational waves, I decided to read up on it a bit.

If I’m understanding it correctly, as gravitational waves wash through the earth they distort space by stretching and squashing it. Makes sense; it’s what you’d expect. And it’s measured by bouncing a laser off two mirrors 2 1/2 miles apart.

But apparently this “distortion is many times smaller than the width of a proton, one of the particles in an atom’s nucleus”.

Um, what!? Many times smaller than the width of a proton? Can they even make mirrors that smooth? It seems like normal geologic forces would move the mirrors more than that. Or the wind. Or the expansion from heat or cold.

That just seems like an unimaginably small distance to measure, even for a laser. Is there something I’m missing here, or is it really possible to measure the stretching of the earth to that degree?

Yup. The instruments are hypersensitive, and the scientists are well aware of it and have to account for all sorts of effects like the ones you mentioned (I recall one saying that trucks driving by on an interstate a mile or more away can affect the measurement).

This is one of the reasons there are two observatories. Each one will have different “noise” in its readout, so a signal they both agree on is probably real.

Along these lines, I asked this question on another board but didn’t get a good answer:

When a photon is reflected from the LIGO mirror (like any reflection), it’s due to interaction with electrons in the mirror. But the electrons are “moving” around in the atoms, and the atoms are much larger than the purported resolution of the detector (said to be smaller than the diameter of proton). How can this work? It seems to me that the effective position of the reflective surface can’t be determined to smaller than about the size of an atom, many orders of magnitude larger than the LIGO resolution.

LIGO doesn’t measure the distance between mirrors. It measures the change in the difference in the length between the two arms. So there’s no need to define the absolute position of the mirror surface.

Right, but if the effective reflective surfaces of all four mirrors are wiggling around within a certain distance, how can you measure the difference between the lengths of the arms to a precision 6 or 7 orders of magnitude smaller? The randomness in the position of the mirror’s electrons is not synchronized between the different mirrors, so one arm may appear hundreds of proton diameters longer than the other just due to the uncertainty in the position of the electrons. Someone hypothesized that the quantum uncertainty in the mirror’s electrons sort of cancels out after many trips between the mirrors, but that doesn’t seem convincing to me. Seems like driving across country several times and using the car’s odometer readings to measure the cross country distance to millimeter precision.

The position of each electron may be randomly distributed. But the average position of 10^18 or so electrons on the surface of the mirror is extremely stable.
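To put rough numbers on that averaging argument, here’s a quick sketch (in Python, using an assumed per-electron positional spread of about an atomic radius; these are illustrative figures, not LIGO specs) of how the uncertainty in the *average* position shrinks as 1/√N:

```python
import math

# Assumption: any one electron's position jitters by roughly an atomic
# radius, ~1e-10 m. The average of N independent positions jitters by
# only sigma / sqrt(N).
sigma_single = 1e-10  # per-electron spread in metres (assumed)

for n in [1.0, 1e6, 1e12, 1e18]:
    sigma_mean = sigma_single / math.sqrt(n)
    print(f"N = {n:.0e}: spread of average position ~ {sigma_mean:.1e} m")
```

With ~10^18 electrons on the mirror surface, the spread of the average comes out around 10^-19 m, comfortably below the displacement LIGO needs to resolve.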

To expand…

Each arm of the LIGO detectors contains a 4-km long Fabry–Pérot cavity. The laser light bounces back and forth hundreds of times, providing an averaging effect. This temporal averaging together with spatial averaging is (a part of) how you can get such small resolutions.

To give you an idea of the sensitivity of LIGO: when the first gravitational wave was measured, the scientists first had to remove the effect of vibrations from thunderstorms in Africa, for a sensor in Louisiana. The thunderstorm signal was so (relatively) strong that it was easy to remove from the data.
The sensitivity of LIGO is just crazy.

Building LIGO was as much a management achievement as a scientific one. Dr. Barish shared in the prize in part because of his skill at managing large projects. For example, in the early ’90s LIGO went to the NSF and asked for hundreds of millions of dollars for a device that they knew, and explained up front, would not work. But the only way to find out what needed to be fixed was to build a system, then promptly tear it down and fix the parts that didn’t measure up. Now THAT is an accomplishment!

And far be it from me to challenge Pasta on a physics topic, but it is my understanding that the sensitivity of the interferometer is directly proportional to its length. By bouncing the laser up and down each leg hundreds of times, they multiply the effective length of the beam from 4 km to hundreds of km. It isn’t to average out the signal (which I am sure is done, though far more carefully than simply calculating the mean).
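As a back-of-the-envelope check of what that folding buys you (the strain and bounce count below are order-of-magnitude assumptions, not official LIGO numbers):

```python
# Rough figures: gravitational-wave strain h ~ 1e-21, physical arm
# length 4 km, and an assumed ~300 effective round trips in the cavity.
h = 1e-21        # typical strain, order of magnitude (assumed)
L = 4e3          # physical arm length in metres
bounces = 300    # effective round trips (assumed)

L_eff = L * bounces      # effective optical path length
delta_L = h * L_eff      # length change the instrument must resolve
print(f"effective path: {L_eff / 1e3:.0f} km")
print(f"length change to detect: {delta_L:.1e} m")
```

That comes out around 10^-15 m, i.e. roughly a proton diameter, which is why the quoted sensitivity sounds so absurd.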

My favorite example of a noise source that they actually have to deal with is tumbleweeds. When a tumbleweed blows up against the Hanford building, they can actually detect it gravitationally.

And the machines are full of all sorts of noise sources, with trade-offs between all of them that would drive most folks, even most physicists, insane. For instance: The exact count of the number of photons in the laser beam will naturally randomly vary, with a standard deviation of approximately the square root of the number of photons (this is called “shot noise”). You can decrease this shot noise by increasing the power of the beam (and hence, increasing the number of photons). But there’s also noise from the force of the photons hitting the mirror, and this increases with beam power. So they have to set the power of the beam to a point where the shot noise and the light pressure noise are equally significant.
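The shot-noise/radiation-pressure trade-off can be sketched with a toy model (the coefficients below are arbitrary, purely for illustration): shot noise falls as 1/√P, radiation-pressure noise rises as √P, and the quadrature sum is minimized exactly where the two are equal.

```python
import math

# Toy model, not real LIGO numbers: a and b are arbitrary coefficients.
a, b = 4.0, 1.0

def total_noise(p):
    shot = math.sqrt(a / p)      # shot noise amplitude ~ 1/sqrt(P)
    pressure = math.sqrt(b * p)  # radiation-pressure noise ~ sqrt(P)
    return math.hypot(shot, pressure)  # quadrature sum

p_opt = math.sqrt(a / b)  # analytic optimum: the two terms are equal here
print(f"optimal power {p_opt}: total noise {total_noise(p_opt):.3f}")
print(f"half power:   {total_noise(p_opt / 2):.3f}")
print(f"double power: {total_noise(p_opt * 2):.3f}")
```

Moving the power in either direction away from the optimum raises the total, which is the trade-off described above.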

We wish. Coincidence matching is an essential first step, but there’s so much noise and so little signal that, even with a coincidence, it’s still vastly more likely that it’s just a random happenstance than that it’s a real signal. This is one of the reasons I keep saying that we need more detectors, spread out over more of the Earth.

It’s probably been posted before, but a 9-minute video for laymen takes an interesting look at the “absurdity” of measuring gravitational waves.

The distance discrepancy is “like measuring the distance from Earth to Alpha Centauri and finding a difference smaller than the width of a human hair.” Just to get that big a difference, they had to luck out and observe an event which “for a tenth-second generated more energy than all the rest of the observable Universe combined.” :eek:
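The arithmetic behind that analogy checks out, taking a strain of about 10^-21 (an assumed order-of-magnitude figure) over the ~4.37 light-years to Alpha Centauri:

```python
# Strain of ~1e-21 applied over the Earth-to-Alpha-Centauri distance.
h = 1e-21                  # typical strain (assumed order of magnitude)
ly = 9.461e15              # metres per light-year
d_alpha_cen = 4.37 * ly    # distance to Alpha Centauri in metres

delta = h * d_alpha_cen
print(f"length change over that distance: {delta * 1e6:.0f} micrometres")
```

That’s a few tens of micrometres, i.e. about the width of a fine human hair, just as the video says.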

Another example of exemplary management was the project timeline. Right from the start, they had not only the device’s current sensitivity curve, but what it would be a year later, and two years, and five years, and so on. And then they actually kept to that timeline. That’s basically unprecedented for a project of this scale: There are always unanticipated delays. Except there weren’t, here.

Thank you for posting that. It helped. I’m not really sure layman is the right descriptor, but it helped.

Chronos, are you saying that the detection of gravitational waves from black hole mergers is almost certainly not real?

I am sure Chronos will explain his point better than I can, but I interpret his comment to say that coincidence matching alone is insufficient to make a detection. Detection is only possible by matching possible signals to modeled signals. When both detectors detect a signal that matches the same model, that is a detection.

Right. The gravitational waves that we’ve detected are almost certainly real, but it took a lot more than just “both detectors heard something” to come to that conclusion. Both detectors hearing something at the same time happens all the time, and most of those coincidences are just random chance. The frustrating part is that some of the coincidences we end up throwing out almost certainly are real signals, but we don’t know which ones, or even how many of them.

Ah! Thank you, Chronos and rbroome. I should have been able to figure that out from what Chronos said in the first place. Makes perfect sense.

There are enough replies to replies that I won’t try to quote anything, but to be sure:

One does not need to have a model for particular astrophysical signals to find these signals. You can certainly look a bit deeper into the noise if you have a signal model to work from, but the LIGO events so far jump out even without template analysis.

Both types of searches are conducted on the data: template matching searches and model-independent searches for generic detector-correlated activity as seen in the frequency vs time domain. Of note, the first LIGO event (GW150914) was found by the model-independent approach first, i.e. with no input knowledge of what a source might look like. (The subsequent paper describes both searches.)
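The template-matching idea can be illustrated with a minimal sketch: slide a known waveform across noisy data and look for a correlation peak. Real searches whiten the data and use large template banks; the toy “chirp,” the noise level, and the signal strength below are all made up for illustration.

```python
import math
import random

random.seed(0)
n = 500

# Toy "chirp" template: a decaying sinusoid (purely illustrative).
template = [math.sin(0.3 * i) * math.exp(-0.02 * i) for i in range(60)]

# Gaussian noise with a scaled copy of the template buried in it.
signal_start = 200
data = [random.gauss(0, 0.5) for _ in range(n)]
for i, t in enumerate(template):
    data[signal_start + i] += 3 * t

def correlate_at(offset):
    # Correlation of the template with the data at a given offset.
    return sum(data[offset + i] * t for i, t in enumerate(template))

best = max(range(n - len(template)), key=correlate_at)
print(f"loudest template match at sample {best} (true start: {signal_start})")
```

The correlation peaks at (or within a sample or two of) where the signal was injected, which is the essence of a matched-filter search; the model-independent searches instead look for coherent excess power across detectors without assuming a waveform.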

Thanks! I hadn’t realized that the signals could be that strong. I guess it makes sense that the first detection would be a strong one, but it still seems amazing luck.