If I have a value of 15.97, which I want to round to a precision of one position after the decimal point, will it become 15.9 or 16.0 (because the .97 gets rounded up to the next value, which happens to be .0 and makes the entire number go up to 16.0)?
I guess 16.0 is what I get following strict rounding rules, but it looks strange to me because the original number is definitely below 16 and only takes on the look of 16 because the rounded digit carries over. Which is accurate?
Yes, you would round to 16.0, because that’s the closest number with one digit after the decimal point. Similarly, 9.7 would be rounded to 10 if you round to the nearest integer.
You are correct that it should be 16.0. You round up if the digit immediately to the right of where you are truncating is greater than 5, and leave it alone if that digit is less than 5. If it IS exactly 5 (followed only by zeros, or by nothing), you round up if the digit to the left of the truncation point is odd, and leave it alone if it's even. This last part compensates for the bias that creeps in from always rounding up on 5.
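For what it's worth, Python's built-in round() happens to use exactly this round-half-to-even rule, so you can check the 15.97 case directly (with the caveat that most decimal fractions aren't exactly representable as binary floats, so "exact halves" only behave predictably for values like 0.5, 1.5, 2.5 that binary can represent exactly):

```python
# Python's built-in round() uses round-half-to-even ("banker's rounding").
print(round(15.97, 1))  # .97 is well above the halfway point -> 16.0

# Exact halves (representable exactly in binary) go to the even neighbor:
print(round(0.5))  # -> 0 (0 is even, so stay)
print(round(1.5))  # -> 2 (1 is odd, so round up to the even 2)
print(round(2.5))  # -> 2 (2 is even, so stay)
```

Note that 0.5, 1.5, and 2.5 are all "5 followed by nothing" cases, so the odd/even tie-break kicks in; 15.97 is an ordinary greater-than-5 case and simply rounds up.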
It’s worth pointing out that the .0 is important - although I think you know that. 16.0 means you’ve rounded to the nearest tenth - simply putting 16 implies that you’ve rounded to the nearest integer, which you haven’t.
It’s common in modern computers that use IEEE 754 floating-point arithmetic, where it’s the default rounding mode. It’s usually called “round half to even” (also “round to nearest, ties to even” or “banker’s rounding”). A programming language may require other behavior.
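As one illustration of a language letting you pick the behavior, Python's decimal module exposes the rounding mode explicitly (other languages have similar controls), so you can compare the schoolbook rule against ties-to-even on the same exact decimal value:

```python
from decimal import Decimal, ROUND_HALF_UP, ROUND_HALF_EVEN

x = Decimal("15.85")  # an exact tie between 15.8 and 15.9

# Schoolbook "always round a 5 up":
print(x.quantize(Decimal("0.1"), rounding=ROUND_HALF_UP))    # -> 15.9

# Ties-to-even (banker's rounding): 8 is even, so stay at 15.8
print(x.quantize(Decimal("0.1"), rounding=ROUND_HALF_EVEN))  # -> 15.8
```

Using Decimal rather than a float matters here: 15.85 as a binary float isn't exactly halfway, so the tie-breaking rule would never actually fire.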
It’s not all that new. I’m not sure how long it’s been around, but I do know that NIST (IIRC) requires that digital scales used to weigh consumer goods, among other applications which require rounding, must use this rule.
Note that these are mostly conventions, not rules. There is no person or body that requires these roundings in most everyday applications, merely conventional practice and logic. There are particular, if rare, applications that always drop to the lower number rather than round up.
It’s the same thing as styles in language as opposed to rules of grammar.
I remember a weird thing we were supposed to do in Chemistry class. Now, the class was taught by a total idiot who was stuck in the Stone Age, and while he may have been hired to teach Alchemy back in the day, he was kept on as the Chemistry teacher for his ability to coach baseball. Anyway:
His rule was that if you were rounding, for example, 7.53 to the nearest integer, and the digit being dropped was a 5, you would look at the next digit (the “3” in the .03), and round down if that digit was odd, and up if it was even. So 7.53 – even though 8 is clearly closer than 7, since .53 is more than half – would be rounded down to 7. Other leading digits followed the usual rules: 7.63 would be rounded up because of the “6”, and 7.43 would be rounded down because of the “4”.
He claimed that this kept “random” data from accruing a rounding bias, but it seems to me he was full of crap. Anyone heard of this? Am I misconstruing his instructions?
I was taught the same, but I think it only applies when rounding a lot of numbers at a time. If you always round 5s up, it does tend to skew the totals up a notch. So instead, sometimes round them up, and sometimes down.
But assuming a random distribution of numbers, wouldn’t you expect half of them to end in 0,1,2,3, and 4 and the other half to end in 5,6,7,8, and 9? That is: of the ten digits we use, if you’re just looking at one digit, and assuming its value is random, shouldn’t cutting the set precisely in half remove the bias to begin with?
Yes, but look at how much you’re rounding off in those two groups. When you round down on 0,1,2,3,4, you’re dropping amounts of 0,1,2,3,4 – an average of 2. When you round up on 5,6,7,8,9, you’re adding amounts of 5,4,3,2,1 – an average of 3. That asymmetry is the bias.
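The skew from always rounding 5s up is easy to check numerically. A quick Python sketch (using the decimal module so the halves are exact, since binary floats can't represent most decimal fractions) sums every exact tie from 0.5 to 9.5 under both rules:

```python
from decimal import Decimal, ROUND_HALF_UP, ROUND_HALF_EVEN

# All the exact ties from 0.5 to 9.5 -- the only inputs where the
# tie-breaking rule makes any difference.
values = [Decimal(n) + Decimal("0.5") for n in range(10)]

true_sum = sum(values)
half_up = sum(v.quantize(Decimal("1"), rounding=ROUND_HALF_UP) for v in values)
half_even = sum(v.quantize(Decimal("1"), rounding=ROUND_HALF_EVEN) for v in values)

print(true_sum)   # 50.0
print(half_up)    # 55 -- always rounding 5s up inflates the total
print(half_even)  # 50 -- alternating by parity cancels out exactly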
Eh… I would quote the relevant rules about rounding a mark in sailboat racing, but I’m too lazy to look it up. Something about overlaps and hailing for “room at the mark!”
This makes sense to me. Perhaps this was the approach he was recommending? It is possible – however unlikely – that my chemistry teacher was not a total idiot. And that’s as close to an apology as he’s ever going to get.