spike404, you raise several interesting and important issues.
Regarding the difference between relative and absolute accuracy, suppose I have a thermometer, the kind where the scale is secured to the side with clips. Suppose I drop it, and the fall shifts that scale by some unknown amount. I don’t know where it was, so I can’t put it back.
For measuring what the temperature is outside, the thermometer is now useless. It has an unknown error, and thus has no absolute accuracy. I might be off by ten degrees.
For answering the question “how much did it warm from dawn until noon?”, however, it works perfectly. The unknown offset is the same at dawn and at noon, so it cancels out of the difference, and I can measure the change to the nearest degree. That’s relative accuracy.
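Here’s a toy sketch of that, with made-up numbers, just to show how the unknown offset drops out of the difference:

```python
# Hypothetical numbers: an unknown offset ruins absolute accuracy
# but cancels out of the dawn-to-noon difference.
true_dawn, true_noon = 10.0, 24.0   # actual temperatures, degrees C
offset = -7.3                       # unknown shift from dropping the thermometer

read_dawn = true_dawn + offset      # what the damaged thermometer reports
read_noon = true_noon + offset

print(read_dawn)                    # 2.7 ... off by 7.3 degrees, no absolute accuracy
print(read_noon - read_dawn)        # 14.0 ... exactly right, full relative accuracy
```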
In general one can trust more digits for relative accuracy than for absolute accuracy. Note the qualifier at the start.
On the question of how many digits we can trust, I agree with you wholeheartedly: many climate science statements express far more significant digits than can reasonably (or in some cases even unreasonably) be justified. I fall into the trap myself. Half a degree at best in most cases.
Now, it’s true that the law of large numbers offers us some help here. Adding a symmetrically distributed error to each datapoint in a large dataset won’t change the mean much. And if we have a record of the temperature in 150 Januaries, we know more about, say, the linear trend than if we only have records for 110 Januaries.
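A quick simulation makes both points. The temperatures and error sizes below are invented placeholders, not real data:

```python
import numpy as np

rng = np.random.default_rng(42)

# Point one: symmetric error barely moves the mean of a large dataset.
temps = rng.normal(15.0, 5.0, size=10_000)               # invented January temperatures
noisy = temps + rng.uniform(-2.0, 2.0, size=temps.size)  # symmetric measurement error
print(temps.mean(), noisy.mean())                        # nearly identical

# Point two: more Januaries means a tighter estimate of the linear trend.
def slope_spread(n_years, noise_sd=0.5, trials=2_000):
    """Spread of fitted OLS slopes over many trendless noisy records."""
    years = np.arange(n_years)
    slopes = [np.polyfit(years, rng.normal(0.0, noise_sd, n_years), 1)[0]
              for _ in range(trials)]
    return np.std(slopes)

print(slope_spread(110), slope_spread(150))  # the 150-year spread is smaller
```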
But nature is not symmetrical about the mean. Nature is not linear. Nature never heard of the bell curve. Nature runs by power laws, lots of small things and a few big things … but really big. Nature obeys what’s called the “Noah Effect”, where the biggest of something (Noah’s flood) is the same order of magnitude as the sum of all the floods the earth had known up to that time. Nature specializes in jumps and disconnects. Natural datasets show Hurst persistence at a variety of temporal and spatial scales. Nature runs right at the edge of turbulence, all the time, and the truth is there’s not much telling which way that frog is gonna jump …
The result is that in general, nature is more unpredictable than the raw statistics would indicate.
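You can see the Noah Effect in a few lines. In a bell-curve world the biggest draw is a rounding error on the total; in a power-law world (here a Pareto distribution with a heavy tail, chosen purely for illustration) a single draw can rival the whole sum:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Bell-curve world: the biggest value is a tiny fraction of the total.
normal_draws = np.abs(rng.normal(0.0, 1.0, n))
print(normal_draws.max() / normal_draws.sum())  # on the order of 1e-4

# Power-law world: tail exponent below 1, the Noah Effect regime.
pareto_draws = 1.0 + rng.pareto(0.8, n)         # classic Pareto samples
print(pareto_draws.max() / pareto_draws.sum())  # often 0.1 or more ... one flood dominates
```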
The result also is that measurements taken in a bell-curve model world tell us much more about the whole than measurements taken, say, in the atmosphere of the real world. Nature is blotchy and patchy, with sharply defined boundaries. I’ve seen tropical rain deluging on one side of a street with the other side bone dry. Fifty measurements in the real world of spots and patches, a piebald world where there is sunshine and shade, clouds and clear air, wet decades followed by decade-long droughts, mean much, much less about that world than fifty measurements taken in, say, the GCM model of that world.
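Here’s a rough illustration of that, again with invented numbers. Fifty independent draws pin down the mean far better than fifty draws from a “patchy” world where neighbouring points share most of their value (modelled here, for simplicity, as an AR(1) process):

```python
import numpy as np

rng = np.random.default_rng(1)

def spread_of_means(patchy, n=50, rho=0.9, trials=5_000):
    """Std dev of the sample mean over many 50-point samples."""
    means = []
    for _ in range(trials):
        if patchy:
            # Patchy world: each point mostly repeats its neighbour (AR(1)).
            x = np.empty(n)
            x[0] = rng.normal()
            for i in range(1, n):
                x[i] = rho * x[i - 1] + np.sqrt(1.0 - rho**2) * rng.normal()
        else:
            # Idealized bell-curve world: fifty independent draws.
            x = rng.normal(size=n)
        means.append(x.mean())
    return np.std(means)

print(spread_of_means(False))  # about 0.14, the textbook 1/sqrt(50)
print(spread_of_means(True))   # several times larger ... far fewer "effective" samples
```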
So you are right in your question about the number of digits: often too many are carried, more than can be justified on a variety of grounds. I’ll speak in half degrees in the future … trouble is, I can’t write numbers in half degrees, I can only write “1.5” … which implies tenths.
The problem is made more acute by a couple of things. One is an oddity, which is that climate science is unusual among the physical sciences because its subject matter is not a thing. Most sciences study things. Rocks. Fluids. Planetary systems. Machines. Oceans. The list of things studied by the physical sciences goes on and on.
But climate is not a thing. It is an average. It is defined as the average of weather over a long period of time, with a vaguely accepted minimum length of thirty years. The going gets muddy very fast when discussing averages as compared to discussing things. With things, you can examine and measure and test them, you can pick them up or stand on them or see them through telescopes.
With averages, on the other hand, all you can do is examine and measure and test the methods and data used to obtain the averages. You can’t pick them up, you can’t go outside and look at them.
This makes the size of the error bars on these averages and trends (and their deplorable absence from many climate science studies) central to the question. In many cases the size of the error bars is, in my view, greatly underestimated.
I usually give results (within reason) to a tenth of a degree. I know, I know, unwarranted, but here’s the second thing that makes the problem more acute. The signal we are looking for is unbelievably tiny. We’re looking for a couple of hundredths of a degree per year, say two tenths of a degree per decade. So it is difficult to talk about the signal we are searching for without using tenths of a degree.
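A two-line arithmetic check shows the bind. Round an invented thirty-year record with a 0.02 degree-per-year trend to the nearest half degree and the signal is barely resolvable:

```python
# Invented record: 30 years warming at 0.02 degrees per year.
trend = [15.0 + 0.02 * yr for yr in range(30)]
halves = [round(t * 2) / 2 for t in trend]          # rounded to nearest half degree

print(round(trend[-1] - trend[0], 2))   # 0.58 ... visible when speaking in tenths
print(halves[-1] - halves[0])           # 0.5  ... one coarse step, the trend nearly vanishes
```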
In any case, point taken, spike404. Half degrees it is, less as appropriate. I need to review the rules for significant digits, which are actually more important than the number of decimal places.