I think a more meaningful way of putting what I was trying to get across (and thanks to both of you for your more useful and specific contributions) would be the following statement:
The precision to which the two arms need to be made equal to each other is much (much, much?) greater than the size of the “distortion” (or change in the length of the arms) that the whole system is capable of measuring.
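For a sense of scale (rough, back-of-the-envelope numbers only, not anything quoted upthread): the arms are about 4 km long and the instrument is sensitive to strains of order 10^-21, so the length change it has to resolve works out to something like this:

```python
# Back-of-the-envelope scale of the length change LIGO can measure.
# These figures are rough/illustrative, not official specifications.
arm_length_m = 4_000.0        # each LIGO arm is roughly 4 km
strain_sensitivity = 1e-21    # order-of-magnitude strain sensitivity

delta_length_m = strain_sensitivity * arm_length_m
print(f"Resolvable arm-length change: ~{delta_length_m:.1e} m")
# ~4e-18 m, i.e. a few thousandths of a proton diameter
```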
One of the detectors also has (or had; they might have removed it with the latest set of upgrades) a second set of mirrors halfway down the arms. They can also do runs where one long leg is compared to one short one. It’s not as sensitive that way, but can be done for testing the equipment. So no, the two arms don’t have to be even close to equal.
I thought the key to this was that there are multiple experiment setups running simultaneously. One setup shows activity, OK, maybe we didn’t isolate it enough from the environment and we’re being fooled. But multiple setups showing the same result… winner.
Slightly off on a tangent, human body temperature in America is an example of “insane precision”. The normal body temperature was defined as a range of values that rounded off to 37° Celsius. Exactly 37° C converts to 98.6° Fahrenheit, which implies accuracy to within 1/20 of a degree Celsius, which was never intended nor implied by the rounded-off number. But American moms presume 98.6° to be a value of three significant digits, and consider a temperature of 98.7° to be “a little high” and indicative of a possible disorder, enough to keep their kids home from school swallowing aspirins… In fact, any Fahrenheit temperature from 97.7° to 99.5° will round off to 37° C, and is therefore medically defined as “normal”.
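Just to check that arithmetic (a trivial sketch, nothing more than the Celsius/Fahrenheit conversion from the post above):

```python
# Which Fahrenheit readings round off to 37 degrees Celsius?
def c_to_f(c):
    return c * 9 / 5 + 32

# Anything from 36.5 C up to 37.5 C rounds off to 37 C.
low_f = c_to_f(36.5)    # 97.7 F
high_f = c_to_f(37.5)   # 99.5 F
print(f"'Normal' in Fahrenheit: {low_f:.1f} to {high_f:.1f}")

# 98.6 F is just the exact conversion of 37.0 C, nothing more precise than that.
print(f"Exactly 37 C = {c_to_f(37.0):.1f} F")
```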
They also have another, completely different strategy to ensure they aren’t fooled. There is a small team that deliberately injects false signals into the system. They only reveal themselves after the signal has been detected and verified by their internal processes.
It can be tempting for scientists to keep looking into the noise until they find a signal. You’ll always find something if you look hard enough into randomness, but that something is meaningless. But on the other hand, the signal here is buried in the noise and they’ll never find anything if they don’t use some means of filtering out the noise. This trick ensures that they look hard, but not too hard, and that their processes are capable of giving the expected result with a known input.
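As a toy sketch of the idea (this is nothing like the real analysis pipeline, and the waveform, noise, and numbers here are all invented purely for illustration): you inject a known fake signal into noisy data and check that the search procedure actually pulls it back out.

```python
# Toy illustration of the "blind injection" concept: hide a known fake signal
# in noise that is louder than the signal, then check that the search recovers
# it. Every number and function name here is made up for the example.
import numpy as np

rng = np.random.default_rng(0)
n = 4096
t = np.arange(n)

# "Detector noise": white Gaussian noise louder than the signal amplitude.
noise = rng.normal(0.0, 1.0, n)

# The injection team's secret signal: a small oscillating burst at a known time.
template = np.exp(-0.5 * ((t - 2000) / 50.0) ** 2) * np.sin(2 * np.pi * t / 20.0)
data = noise + 0.8 * template   # per-sample amplitude below the noise level

# The "search": correlate the data against the template at every time shift
# and compare the peak to what pure noise typically produces.
def peak_snr(series, template):
    corr = np.correlate(series, template, mode="same")
    return np.max(np.abs(corr)) / (np.std(series) * np.sqrt(np.sum(template ** 2)))

print(f"peak SNR with injection: {peak_snr(data, template):.2f}")
print(f"peak SNR noise only:     {peak_snr(noise, template):.2f}")
# If the procedure can't recover a deliberately injected signal, you know it
# wouldn't find a real one either; if it "finds" signals in pure noise, you
# know you've been staring at randomness too hard.
```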
BwanaBob, that is in fact one key, and probably the biggest single one. But there are so many potential noise sources for an instrument like this, that even if you see something at both sites at once, the most likely explanation is that they both just happened to have some random noise at the same time.
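To put some purely illustrative numbers on that (the glitch rates below are invented, not real LIGO statistics): for rare, independent noise glitches at the two sites, the rate of purely accidental coincidences within a short time window is roughly the product of the two rates times the window, and it adds up over a long observing run.

```python
# Rough accidental-coincidence estimate. All rates here are made up purely to
# illustrate the point; they are not LIGO's actual glitch statistics.
glitch_rate_site1 = 1.0 / 60.0   # one noise glitch per minute, in events/second
glitch_rate_site2 = 1.0 / 60.0
coincidence_window = 0.01        # seconds within which triggers count as "the same"

# Chance coincidences per second ~ r1 * r2 * window (rare, independent glitches)
accidental_rate = glitch_rate_site1 * glitch_rate_site2 * coincidence_window
per_year = accidental_rate * 365 * 24 * 3600
print(f"Accidental coincidences: ~{accidental_rate:.2e} per second, ~{per_year:.0f} per year")
```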
You could do multiple passes this way with even more instruments, except that there are currently only two LIGO sites. There are a few similar instruments in other countries, and they do look at the data from those, too, but none of them are as good as LIGO, so it doesn’t really help much. There have been proposals to build a third LIGO in some other country (most likely Australia), and that would probably be the best bang for the buck for improving the data (a third one wouldn’t cost nearly as much as the first two, because all of the R&D has already been done), but the international aspect makes it politically difficult: The NSF is reluctant to spend money outside of the US, and while the host country could also contribute some share, that then means that you’ve got two different governments you need to beg from.
Isn’t the upgraded VIRGO interferometer similar to LIGO in performance? It’s supposed to start operations later this year, last I heard. And the Japanese KAGRA interferometer should surpass it in a couple more years.