Most statistics is based on a calculated figure known as the p-value. The p-value has a technical definition that is a little hard to wrap your head around, but it’s worth the effort.
There are two hypotheses. One, known as the null hypothesis (H-sub-0), says that nothing special is going on. The other, known as the alternative hypothesis (H-sub-1), says that something significant is happening.
**The p-value is the probability, assuming the null hypothesis is true, of seeing a result at least as extreme as the one you actually observed.**
Say it was .0001. That means, if the null hypothesis were correct, we would see something like this 1 time in 10,000 by chance alone. One in 10,000… it just seems a bit much to believe that this particular sample was the one-in-10,000, so we conclude it wasn’t just chance, and decide that the alternative hypothesis is correct.
Example: You flip a coin 200 times. You get heads every time.
H-sub-0: Coin is unbiased
H-sub-1: Coin is biased towards heads.
Observed sample is 200 heads. You do the math and get a p-value of (1/2)^200, which is roughly 10^-60. That tells you that if the coin were unbiased, you would virtually never see a result like this. Therefore, we conclude that the coin is biased.
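For the curious, here is a minimal Python sketch of that calculation (assuming you have scipy available; neither the column nor the math requires it, it just saves work). It computes the one-sided p-value for the coin example: the probability of a result at least as extreme as 200 heads in 200 flips if the coin were fair.

```python
from scipy.stats import binomtest

# H0: the coin is fair, so P(heads) = 0.5 on each flip.
# Observed: 200 heads out of 200 flips.
# One-sided p-value: probability of 200 or more heads under H0.
result = binomtest(k=200, n=200, p=0.5, alternative="greater")
print(result.pvalue)

# Equivalently, by hand: the only outcome at least as extreme as
# 200 heads is 200 heads itself, so p = (1/2)^200, about 6e-61.
print(0.5 ** 200)
```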
If you’re still with me, you may be wondering: at what point (what p-value) do we switch from H-sub-0 to H-sub-1? A 1-in-2 chance? 1-in-5? 1-in-100? There is no technical reason to prefer one over another. However, scientists generally use the .05, or 1-in-20, barrier. If the p-value ends up less than 0.05, they will decide that the experiment showed something of significance. (Note this means that even when the null hypothesis is actually true, a test at the 0.05 level will wrongly reject it about 1 time in 20, a “false positive.”)
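You can watch that 1-in-20 false-positive rate happen by simulation. A sketch, assuming numpy and scipy (the sample size and distribution here are just illustrative choices): generate many datasets where the null hypothesis really is true, test each at the 0.05 level, and count how often we wrongly reject.

```python
import numpy as np
from scipy.stats import ttest_1samp

rng = np.random.default_rng(0)
n_experiments = 10_000
false_positives = 0

for _ in range(n_experiments):
    # H0 is true by construction: the data really do come from a
    # distribution whose mean is 0.
    sample = rng.normal(loc=0.0, scale=1.0, size=30)
    if ttest_1samp(sample, popmean=0.0).pvalue < 0.05:
        false_positives += 1  # rejected H0 even though it was true

# Prints something close to 0.05 -- the 1-in-20 rate.
print(false_positives / n_experiments)
```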
Still with me? Probably not, but what the heck. The next thing to know is that the p-value is roughly derived from a formula that is something like the difference between (a) what you saw and (b) what you expected to see if the null hypothesis were true, divided by some measure of the variance of the sample. (Variance means how spread out the values are: 1, 2, 1, 2, 1 has a lower variance than 10, 300000, 10, -50000, 100000000.) So if the difference between what you saw and what you expected to see is very low, and the variance is reasonably high, you can say the difference is statistically insignificant. That is another way of saying that the difference will lead to a p-value that is high, and therefore we keep the original null hypothesis that nothing special is going on.
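One concrete instance of that formula is the one-sample t-test. Here is a sketch (the sample numbers are made up purely for illustration): the test statistic is (what you saw minus what you expected) divided by a variance-based measure called the standard error, and the p-value comes from looking that statistic up in the t distribution.

```python
import math
from scipy.stats import t, ttest_1samp

sample = [5.1, 4.8, 5.3, 5.0, 4.9, 5.2, 5.1, 4.7]  # hypothetical data
expected_mean = 5.0  # what H0 says the mean should be

n = len(sample)
mean = sum(sample) / n
# Sample variance: the "how spread out are the values" measure.
variance = sum((x - mean) ** 2 for x in sample) / (n - 1)
std_error = math.sqrt(variance / n)

# (what you saw - what you expected) / measure of spread
t_stat = (mean - expected_mean) / std_error

# Two-sided p-value from the t distribution, n-1 degrees of freedom.
p_value = 2 * t.sf(abs(t_stat), df=n - 1)
print(t_stat, p_value)  # tiny difference, so the p-value is large

# Sanity check against scipy's built-in version.
print(ttest_1samp(sample, popmean=expected_mean))
```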
Which brings us, finally, to Cecil’s statement. Since you didn’t link, I don’t know which column you mean, but we can figure out what he means. He knows what p-value is needed for us to accept the alternative hypothesis, that is, to say the phenomenon is statistically significant. He knows the formula. He knows that plugging in 0.2 as the difference, divided by the sample variance, is going to result in a p-value much larger than the standard threshold of 0.05. Therefore, he says the difference of 0.2 is statistically insignificant.
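To see that logic in action with entirely made-up numbers (we don’t have Cecil’s actual data, so this is just an illustration): a true difference of 0.2 buried in a sample with lots of spread produces a p-value far above 0.05, so we keep the null hypothesis.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(42)
# Two hypothetical groups whose true means differ by 0.2, with a
# spread (scale=5.0) that dwarfs that difference.
group_a = rng.normal(loc=10.0, scale=5.0, size=25)
group_b = rng.normal(loc=10.2, scale=5.0, size=25)

# Two-sample t-test: is the observed difference in means big
# relative to the spread of the samples?
print(ttest_ind(group_a, group_b).pvalue)  # almost surely >> 0.05
```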
Note: There are many kinds of statistical tests. The math differs for each, but conceptually they all work more or less like this.