How do you round numbers?

I disagree. Suppose you measure something at 3.95. If you had one more digit of accuracy, you could have:

3.945
3.946
3.947
3.948
3.949
3.950
3.951
3.952
3.953
3.954
3.955

Five of those measurements are closer to 3.9, five are closer to 4.0. One is exactly halfway. They’re all equally likely, so if you always round to even (or always round to odd), you won’t have any bias in your rounding.

No. Here’s an example. Consider the complete range of different possible roundings:

3.140 -> 3.14
3.141 -> 3.14
3.142 -> 3.14
3.143 -> 3.14
3.144 -> 3.14
3.145 -> 3.15
3.146 -> 3.15
3.147 -> 3.15
3.148 -> 3.15
3.149 -> 3.15

Now calculate the error terms for each:

0.000
-0.001
-0.002
-0.003
-0.004
0.005
0.004
0.003
0.002
0.001

If we sum all of these and divide by 10, we get the average bias introduced per measurement (assuming all trailing digits have an equal chance of occurring):

Average bias = 0.0005

If you always round 5’s up, you introduce a small upward bias to your data. On the other hand, if you round 5’s up half the time, and down half the time, the errors cancel out and the bias is eliminated.
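Here is a minimal sketch of that arithmetic in Python (the average_bias helper is just for illustration), using the decimal module so the half-way values are exact, and spanning two decades, 3.140 through 3.159, so that the round-to-even rule sends one of the two half-way cases down and the other up:

```python
from decimal import Decimal, ROUND_HALF_UP, ROUND_HALF_EVEN

# 3.140, 3.141, ..., 3.159 -- assumed equally likely, as above
values = [Decimal("3.140") + Decimal("0.001") * i for i in range(20)]

def average_bias(mode):
    errors = [v.quantize(Decimal("0.01"), rounding=mode) - v for v in values]
    return sum(errors) / len(errors)

print(average_bias(ROUND_HALF_UP))    # 0.0005 -- always rounding 5 up biases the data upward
print(average_bias(ROUND_HALF_EVEN))  # zero   -- 3.145 goes down, 3.155 goes up, and they cancel
```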

Note that the ‘round to even’ rule is only applied when the remainder is exactly 5, not when it’s even infinitesimally larger. E.g., 10,625,000.00001 rounded to four significant figures should be 10,630,000.
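For instance, with Python’s decimal module (the round_sig helper below is just a sketch for this thread, not a standard function), four significant figures come out as:

```python
from decimal import Decimal, ROUND_HALF_EVEN

def round_sig(x, sig):
    """Round a Decimal to `sig` significant figures, ties going to even."""
    last_kept = x.adjusted() - (sig - 1)   # exponent of the last digit we keep
    return x.quantize(Decimal(1).scaleb(last_kept), rounding=ROUND_HALF_EVEN)

print(round_sig(Decimal("10625000.00001"), 4))  # 1.063E+7 -> 10,630,000 (not an exact tie, so it goes up)
print(round_sig(Decimal("10625000"), 4))        # 1.062E+7 -> 10,620,000 (exact tie, so it goes to even)
```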

I was always taught to round 5 down. Every American of my acquaintance rounds 5 up. I wonder if it is a US/UK/Other thing like billions and trillions.

(I am aware that there are more sophisticated and even better ways. I am talking about plain, vanilla, unqualified rounding).

I have made similar discoveries about such things as the definition of a line (versus a curve) and a surprising number of other ideas in maths.

This is fine for elementary school. When I taught college-level chemistry, though, I taught the round-to-even rule for fives.

Examples (if numbers are being rounded to three significant figures):
3.135 → 3.14
3.145 → 3.14

However, if the number being rounded is skewed slightly higher (or lower), normal rounding rules apply. More examples (if numbers are being rounded to three significant figures):
3.13500001 → 3.14
3.14500001 → 3.15
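Those cases are easy to check with Python’s decimal module (values are given as strings so the ties are exact decimals rather than binary floats):

```python
from decimal import Decimal, ROUND_HALF_EVEN

hundredths = Decimal("0.01")
for s in ("3.135", "3.145", "3.13500001", "3.14500001"):
    print(s, "->", Decimal(s).quantize(hundredths, rounding=ROUND_HALF_EVEN))

# 3.135      -> 3.14  (exact tie, goes to the even digit 4)
# 3.145      -> 3.14  (exact tie, goes to the even digit 4)
# 3.13500001 -> 3.14  (not a tie; ordinary rounding)
# 3.14500001 -> 3.15  (not a tie; ordinary rounding)
```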

Wrong. All you are doing is consistently rounding down. This skews your data downward.

Again, wrong. Accuracy is unchanged. Rounding reduces the indicated precision of the data. There is a difference between these two terms (accuracy vs. precision).

Like the term “maths,” I would assume. :wink: This is not a word in American English.

We have the words “mathematics” and “math,” but not “maths.”

There are other places in the world where English is spoken, as I understand it. :wink:

I initially had something in my post about “maths” being a Britishism, but then realized I didn’t know if the term was specific to Britain, or if it was also common in other English-speaking countries, so I omitted the reference.

Unfortunately, this had the unintended effect of making it appear that I was unaware that this is a common term outside of the U.S.

ElvisL1ves said:

No. 0 does not get rounded. 0 is zero - it is already even. Rounding is limiting the number of digits that you are paying attention to, while accounting for the digits beyond. When the digits beyond are 0, there is nothing to account for.

0 is even. Every other digit is some partial amount that must be accounted for. That leaves 1, 2, 3, and 4 going down, 5, 6, 7, 8, and 9 going up. That is 4 down, 5 up.

That’s why round to even (which I hadn’t heard of before) is better.

And yes, it only applies to exactly 5 in the last place.
**toadspittle** said:

3.39
3.39
3.38
3.38

What’s the confusion? Round to even only applies to a 5 in the last place.

3.385 would be 3.38
3.395 would be 3.40
pan1 said:

No, truncating is less accurate. To take BobLibDem’s example, average the following:



meas   trunc    round
3.945 = 3.94    3.94
3.946 = 3.94    3.95
3.947 = 3.94    3.95
3.948 = 3.94    3.95
3.949 = 3.94    3.95
3.950 = 3.95    3.95
3.951 = 3.95    3.95
3.952 = 3.95    3.95
3.953 = 3.95    3.95
3.954 = 3.95    3.95
3.955 = 3.95    3.96
--------------------
Average
3.950 = 3.945   3.950


As you can see, truncating skews the average downward; rounding (to even) does not.
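For anyone who wants to check, a short sketch with Python’s decimal module reproduces those averages; ROUND_DOWN stands in for truncation here:

```python
from decimal import Decimal, ROUND_DOWN, ROUND_HALF_EVEN

measurements = [Decimal("3.945") + Decimal("0.001") * i for i in range(11)]  # 3.945 .. 3.955
step = Decimal("0.01")

truncated = [m.quantize(step, rounding=ROUND_DOWN) for m in measurements]
rounded   = [m.quantize(step, rounding=ROUND_HALF_EVEN) for m in measurements]

print(sum(measurements) / 11)  # 3.95
print(sum(truncated) / 11)     # 3.9454... -- truncation skews low
print(sum(rounded) / 11)       # 3.95      -- round-to-even does not
```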

toadspittle said:

As ElvisL1ves said:

I would keep all the digits all the way through the process, and round once at the end to however many significant digits suit your use.

One more thing. Even if you only really believe it’s correct to two decimal digits of precision, you might want to leave a three digit number as-is. Suppose you get a reading of 9.95, so you figure that the actual value is in the range of 9.9 to 10. By the rules of rounding, above, you would round to 10. But by your own initial estimate, the actual value could very well be 9.9. You’ve just doubled the potential error in this value from 0.05 to 0.1.

One argument for rounding fives up (as opposed to rounding them down, at least) is that you don’t know what might have been done to the numbers before you got them. If someone else tells you that a measurement is 13.5 cm, it’s possible that they’ve already lost some digits. In particular, it’s possible that your source has (incorrectly) truncated instead of rounding, in which case rounding up will get you closer to the true value than rounding down will.

I’m not sure you get the point there. Half of all digits go to the ten on the low side, the other half go to the ten on the high side. One of the ten digits doesn’t have far to go, though.

Hamster King, do schools no longer teach about significant figures? If you’re rounding off insignificant ones, then they’re noise. There is no “error” of the sort you’ve analyzed; the numbers you’re eliminating have to be considered random garbage. Rounding 5’s up eliminates long-term error in the way that random garbage is handled.

If the numbers you’re eliminating are significant, then keep them until the end and save any rounding for the final result. But then round a 5 up.

Wait, don’t you mean it reduces accuracy (correspondence with ‘truth’) but preserves precision (repeatability / lack of randomness)?

In only four of the five cases you are describing as ‘down’ does the value of the number change, but in all five of the ‘up’ cases it changes; you are incorrect in saying the method eliminates bias.

No. Irishman’s right. In the case where there is exactly one terminal digit, always rounding 0.5 up will introduce a bias. E.g., when it’s the tenths digit: (0.0 - 0.1 - 0.2 - 0.3 - 0.4 + 0.5 + 0.4 + 0.3 + 0.2 + 0.1) / 10 = 0.05. By rounding 0.5 up half the time and down half the time you get (0.0 - 0.1 - 0.2 - 0.3 - 0.4 - 0.5 * 0.5 + 0.5 * 0.5 + 0.4 + 0.3 + 0.2 + 0.1) / 10 = 0.0, i.e., no bias, assuming that the cases where you round 5 up versus down aren’t themselves correlated with particular parts of your calculations.

Now, extending that to more than 10 cases, i.e. rounding with two digits to be removed and always rounding an exact 50 up, you get an average bias of (0.00 - 0.01 - 0.02 - … - 0.49 + 0.50 + 0.49 + 0.48 + … + 0.01) / 100 = 0.005. One-tenth as large, but still there. If instead you round an exact 50 up half the time and down half the time, you are back to zero bias.
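The same arithmetic in Python, spanning 0.00 through 1.99 so the round-to-even ties split evenly between an even target (0.50 -> 0) and an odd one (1.50 -> 2):

```python
from decimal import Decimal, ROUND_HALF_UP, ROUND_HALF_EVEN

# Two discarded digits: 0.00, 0.01, ..., 1.99, each rounded to a whole number.
fractions = [Decimal(i).scaleb(-2) for i in range(200)]

for mode in (ROUND_HALF_UP, ROUND_HALF_EVEN):
    bias = sum(f.quantize(Decimal(1), rounding=mode) - f for f in fractions) / len(fractions)
    print(mode, bias)   # ROUND_HALF_UP: 0.005, ROUND_HALF_EVEN: zero
```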

If they’re truly “random garbage” then you might as well just truncate them. They don’t contain any information at all.

But with most instruments the final digit contains some information, even though it might not be exactly correct. For example, if I look at my digital thermometer and it says 98.1, I can reasonably assume that my actual body temperature is closer to 98 than 99, even though I know it’s probably not exactly 98.1. Similarly, if it says 98.9 I can assume the converse. The final digit is not “random garbage”.

If the thermometer reads 98.2 or 98.3 or 98.4 then, again, it’s more likely that my actual temperature is closer to 98 than 99. And if it reads 98.8, 98.7, or 98.6 then it’s more likely that my actual temperature is closer to 99 than 98.

But what to do about 98.5? It’s equidistant from 98 and 99! With the other digits I could have confidence that by rounding high or low I was minimizing my error. But with 98.5 my expected error is the same either way.

If you’re just taking one measurement, it doesn’t really matter. You can round up or down on 98.5 – they’re both equally good approximations of my actual temperature. But if you consistently round decimals that end in 5 in only one direction, and you take lots of measurements, and do something like add them all up and average them, then you’ll be introducing a slight bias into the data. You’re better off if you round “.5 cases” up half the time, and down half the time. And the “round .5 to even” rule is a decent way of approximating that.

(Although it’s not absolute. For example, if the next digit doesn’t have a 50/50 distribution between even and odd values, the “round .5 to even” rule might introduce a slightly different bias. You need to look closely at the expected values of the data you’re measuring.)
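As a contrived illustration of that caveat (the 90/10 split below is purely an assumption for the example), if the ties in your data mostly sit on an even kept digit, round-half-to-even will send nearly all of them down:

```python
import random
from decimal import Decimal, ROUND_HALF_EVEN

random.seed(0)

# Skewed data: 90% of the ties have an even hundredths digit (3.145 -> 3.14),
# only 10% an odd one (3.155 -> 3.16), so "round half to even" usually rounds down.
ties = [Decimal("3.145") if random.random() < 0.9 else Decimal("3.155") for _ in range(10_000)]

rounded = [t.quantize(Decimal("0.01"), rounding=ROUND_HALF_EVEN) for t in ties]
bias = (sum(rounded) - sum(ties)) / len(ties)
print(bias)   # about -0.004, rather than the zero you would get from a 50/50 split
```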

And yes, it’s better to save rounding to the end. However, that’s not always possible. If you’re performing digital calculations you often have to round at intermediate stages because of precision limitations. Or data that was intended for one purpose may be reanalyzed later using different analytical tools.

No, if you round properly, the data should not be skewed upward or downward. This means that accuracy is preserved. However, when you round, you lose significant figures, which means that you lose precision.

When you buy an expensive measuring instrument, you are paying for precision. You lose this precision when you round.

For example, if I use a sophisticated mass balance and measure a sample to have a mass of 3.8563 g, but then round the measurement to 3.86 g, I have lost precision. I could have used a cheaper instrument and gotten the same answer (i.e. 3.86 g).

Accuracy is completely separate, and is dependent on calibration of the instrument in question.

I don’t understand this “round to even” thing.

3.5 is as close to 4.0 as to 3.0 - are you saying that it should be rounded to 4.0 because 4 is even?

Ever since it came up, I’ve been taught to round 5 up. In Spain, accountants (that includes banks) are required to round 5 up by law. And in complex calculations, the “ghost numbers” are dragged along, and the rounding takes place only once the calculations are finished.

Note that “precision” isn’t the same as “having a ton of figures.” If your machine has an error of ±0.1, giving two decimals doesn’t contain any more real information than giving one, and it’s misleading. There’s a difference between “we’re very precise” and “we’re very carefully tracking the noise.”

A good reason to round 5 up is that, if there are any more (nonzero) digits after the 5, it should definitely be rounded up: 36.503 is closer to 37 than it is to 36.

This is roughly the same reason noon is generally considered 12:00 p.m. Even though 12:00 exactly is neither ante- nor post-meridiem, 12:01 p.m. is.

Oh absolutely. If there are decimals after the 5, round up. That’s the rule I was taught.

But if it’s exactly 5, the choice is arbitrary. I proposed that it’s also cultural: Americans round up, Englishmen round down.

Can any other Englishmen confirm? Or am I misremembering?