I’m a science (specifically chemistry) educator, currently working on a chemistry textbook… and I ran into a question that I don’t know the answer to. How many sig figs are in 0.0 degrees C? I ran into it while introducing Gay-Lussac’s Law and the Kelvin temperature scale; before the students know Kelvin, the law has to be written as P1/(t1 + C) = P2/(t2 + C) (in Celsius), but by the “normal” rules 0.0 degrees C would have 0 sig figs… but if you measure 0.0 deg C, that seems like 2 sig figs to me.
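To spell out my shorthand (I’m writing C for the 273.15 Celsius-to-Kelvin offset):

```latex
% Gay-Lussac's Law with Celsius temperatures; C = 273.15 is the offset to kelvins
\frac{P_1}{t_1 + C} = \frac{P_2}{t_2 + C}, \qquad C = 273.15
```

which is just P1/T1 = P2/T2 once T = t + 273.15 (the kelvin temperature) is introduced.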
Any help that includes formal references (ACS or IUPAC come to mind) would be greatly appreciated. I don’t want to reinvent the wheel here, but I want to be consistent.
What if the temperature were 0.1 deg C? That’s clearly one significant figure. Ditto for any other temperature between -0.9 and 0.9 degrees. By interpolation it seems 0.0 deg should be treated as if it has 1 sig fig as well. I wonder if there’s a more formal way to rationalize that answer.
If there’s no multiplication involved, it would be enough to say “accuracy is +/- 0.05[sup]o[/sup]C” and avoid the messy issue of significant digits.
If there’s any multiplication involved, you’d use absolute temperatures. 0.0[sup]o[/sup]C is really 273.15 +/- 0.05 K, so it should be treated as 4 significant digits.
If 0.0 deg C is two significant figures, then converting to absolute temperature (the one that matters) and keeping two significant figures would yield: 0.0 + 273.15 => 270 K. That’s not right.
The measurement is 0.0 deg C with an implicit +/-0.05 deg C. So in kelvins, we actually have 273.15 +/- 0.05 K => 273.2 K or four significant digits.
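As a quick sketch of that bookkeeping (plain Python, using exact decimals so the rounding behaves; the +/- 0.05 half-interval is the implicit one mentioned above):

```python
from decimal import Decimal

# 0.0 deg C read to the nearest 0.1 deg C: an implicit half-interval of +/- 0.05 deg C.
t_celsius = Decimal("0.0")
half_interval = Decimal("0.05")

# The Celsius-to-Kelvin offset is exact by definition.
T_kelvin = t_celsius + Decimal("273.15")
print(T_kelvin, "+/-", half_interval, "K")       # 273.15 +/- 0.05 K

# Reported to the last reliable decimal place (the tenths), that is 273.2 K,
# which shows four significant digits.
print(T_kelvin.quantize(Decimal("0.1")), "K")    # 273.2 K
```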
(On preview, in complete agreement with what scr4 said.)
Just wanted to drop in and say that no matter how many times I read the thread title, I always think it says “Six Flags in 0.0 deg C”, and I always wonder for a split second who in their right mind would go riding on roller coasters in such cold temperatures.
Thanks for the replies! I think I have to work a more accurate definition of sig figs into chapter 1 now. As far as I can see, 0.1 deg C should be thought of as having 2 significant digits in most cases, in that 1.1 and -1.1 have 2 sig figs, and 0.1 is measured to just as much precision. In the particular case that I’m using (about 2 milliseconds before I introduce the Kelvin temperature scale), it won’t be clear to students why we break sig fig conventions in this one place to keep the extra digits unless I lay it out explicitly.
So… I’ll put sig figs in the “precision vs. accuracy” discussion, treat them right there (defining what they mean, not just the rules), and hopefully go from there.
That’s flat-out incorrect. Significant digits are a crude way of keeping track of error/uncertainty relative to the value itself. Two significant digits means the error is on the order of 1%. Three digits means on the order of 0.1%.
But Celsius is an offset scale, so it makes absolutely no sense to express something as a fraction of temperature in Celsius. If the temperature is given as 2.0[sup]o[/sup]C, does that mean error is on the order of 1%? But what does “1% of 2.0[sup]o[/sup]C” mean?
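To put numbers on that (values picked arbitrarily, just for illustration):

```python
# The same physical measurement expressed on an offset scale (Celsius)
# and an absolute scale (Kelvin). The absolute uncertainty is identical,
# but the *relative* uncertainty depends entirely on where the zero sits.
t_c, u = 2.0, 0.05             # 2.0 deg C, +/- 0.05 deg C
t_k = t_c + 273.15

print(f"Celsius: {u / t_c:.1%} relative uncertainty")   # 2.5%
print(f"Kelvin:  {u / t_k:.3%} relative uncertainty")   # 0.018%
```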
Why can’t you reverse the teaching order? Surely you can’t justify teaching incorrect information just for your convenience?
This is one of the major problems I have about science education in general: teachers think it’s simpler to ignore “exceptions” like this, even though the over-simplified “laws” and “theories” they teach have many exceptions when applied to the real world. Students get the impression that science is just a made-up self-referential system.
0.1 deg C has 1 significant figure (sig fig), while 1.1 deg C has 2 sig figs. It doesn’t matter where the decimal place is, only how many digits there are from the first nonzero digit onward. The best way to visualize this is to convert both numbers to scientific notation. For example, 0.1 deg C is 1 x 10^-1 deg C, so it has only one sig fig, whereas 1.1 deg C is 1.1 x 10^0 deg C, so it has two sig figs. Similarly, 1100 deg C only has two sig figs (because it’s 1.1 x 10^3 deg C), whereas 1100.0 has 5 sig figs (because it’s 1.1000 x 10^3 deg C).
To answer your original question, 0.0 deg C has 1 sig fig because in scientific notation it is 0 x 10^0 deg C.
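A small Python aside on the same idea; note that you have to choose the digit count yourself, because the bare number can’t tell you how precisely it was measured:

```python
# Scientific notation with an explicit digit count makes the intended
# significant figures unambiguous (the count in the format spec is the choice).
print(f"{0.1:.0e}")     # 1e-01       -> 1 sig fig
print(f"{1.1:.1e}")     # 1.1e+00     -> 2 sig figs
print(f"{1100:.1e}")    # 1.1e+03     -> 2 sig figs
print(f"{1100.0:.4e}")  # 1.1000e+03  -> 5 sig figs
```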
No, it does not. “Significant figures” is a concept only applicable to multiplication and division. And you can’t do either with temperature in Celsius.
Well OK, I suppose there are a few exceptions where you would do multiplication - e.g. when plotting the data on a graph at a certain scale. But that isn’t an actual science calculation.
But 0.0 has no first nonzero number, so there are zero digits to the right of it. Wouldn’t that mean it has no significant digits?
Another example would be a measuring device that had an accuracy of 1 mm: if you use it to measure the thickness of an ordinary piece of paper, you might get a reading of 0.000 metres. Again, there are no significant digits there, even though you have a known accuracy.
By “the first nonzero number” I was referring to the fact that zeros are often simply placeholders (for example, in 0.0001 or in 1000). The second zero in 0.0 is a significant figure because it’s not a placeholding zero, whereas the first is. Perhaps I should have phrased it “the first nonplaceholding digit” instead of “nonzero number,” but it only matters in cases where the value is zero.
Unfortunately, you’re incorrect on the second part. If I had a measuring device with an accuracy of 1 mm, then whatever value I measure is accurate to 1 mm (provided I used the device correctly, of course). The fact that the value I obtained is 0 mm doesn’t mean that that is exactly the correct value; it only means that the correct value is 0 mm plus or minus 1 mm.
You’re right. What I should say is “0.0 deg C is precise to 0.05 deg C”. Sig figs isn’t the right term to use here. Thanks.
First, I never said I intended to teach anything incorrect. What I was trying to do is to teach things absolutely correctly. In most textbooks, when they discuss sig figs, they set up hard-and-fast rules and leave it at that. I want to discuss what they mean, and why 0.0 deg C should not be thought of as the same thing as 0 deg C or 0.00 deg C.
I said I wanted to explicitly address this exception. How is that even similar to ignoring it?
Quick review of significant digits (same thing as significant figures):
0: 1 sig dig
1: 1 sig dig
0.0: 1 sig dig
0.00000000: 1 sig dig
0.00000001: 1 sig dig
1.00000000: 9 sig digs
2000: Between 1 and 4 sig digs (undetermined)
2000.: 4 sig digs
Rules:
You always have at least 1 sig dig.
Leading zeros do not count.
Trailing zeros after the decimal do count.
Trailing zeros before the decimal may count (usually scientific notation is used to show sig digs in this case).
A decimal after the number means all the previous zeros are significant (so long as there is at least one nonzero digit before the decimal).
A bar can be added above the last significant digit if you don’t want to use scientific notation, but I have never seen this done.
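If it helps to see those rules mechanically, here’s a rough Python sketch of them (my own throwaway code, not from any reference; it returns the minimum count for the “undetermined” cases and ignores the overbar convention):

```python
def sig_digs(s: str) -> int:
    """Count significant digits in a decimal string, per the rules above.
    Trailing zeros with no decimal point are treated as not significant,
    i.e. the minimum of the 'undetermined' range is reported."""
    s = s.lstrip("+-")
    digits = s.replace(".", "").lstrip("0")   # leading zeros never count
    if not digits:
        return 1                              # e.g. "0", "0.0": at least 1 sig dig
    if "." not in s:
        digits = digits.rstrip("0") or "0"    # trailing zeros, no decimal point
    return len(digits)

for example in ["0", "1", "0.0", "0.00000000", "0.00000001",
                "1.00000000", "2000", "2000."]:
    print(example, sig_digs(example))
```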
OK, now to my own conjecture. I had always thought it was just as reasonable to express significant digits for Celsius as for Kelvin. You will need to convert to Kelvin for multiplication, but you simply follow the rules for addition to get there. They are: keep the worst (fewest) number of decimal places for addition, and keep the worst (fewest) number of sig digs for multiplication.
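A quick made-up example of both rules side by side (the 1.00 atm and 298.2 K figures are invented purely for illustration):

```python
# Addition rule: keep the worst (fewest) decimal places.
# 0.0 deg C (one decimal place) + 273.15 (exact offset) -> report one decimal place.
T1 = 0.0 + 273.15              # 273.15 -> reported as 273.2 K (4 sig digs)

# Multiplication rule: keep the worst (fewest) sig digs of the factors.
# P1 = 1.00 atm (3 sig digs), T2 = 298.2 K (4 sig digs); keep the unrounded T1 as a guard digit.
P1, T2 = 1.00, 298.2
P2 = P1 * T2 / T1              # 1.0917... -> reported as 1.09 atm (3 sig digs)
print(round(P2, 2), "atm")     # 1.09 atm
```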
One other significant-figures note which I hope fits in. When I was a student, I argued that formulas–such as for the circumference of a circle–had a built-in limiting accuracy because of significant digits, e.g. C = 2(pi)r is good to only one significant digit when used in the real world because of the 2. The teacher rightly corrected this notion, stating that in a mathematical formula such as this the “2” is short for “2.0000000…”, and so has an infinite number of significant digits. He further stated this applied to theoretical physics/chemistry formulas: the “1/2” in the formula for kinetic energy “K = (1/2)mv^2” is actually “0.5000000…”.
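For example (with a made-up radius):

```python
import math

# The "2" in C = 2*pi*r is exact; only the measured radius limits the precision.
r = 3.1                  # measured radius in metres, 2 sig digs (example value)
C = 2 * math.pi * r      # 19.477...
print(f"{C:.0f} m")      # reported as 19 m: 2 sig digs, set by r alone
```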
I don’t like that system; it seems intuitively obvious to me that two measurements of the same value with different accuracies should have different sig figs.