Of course, depending on your application, round-to-even could actually increase your bias.
Suppose you are interested in calculating how many hours people stay at work, on average, at your workplace. Many people clock their hours by coming in at 8:30 or leaving at 5:30. So rounding to the nearest hour with round-to-even would round such entry times down and such leaving times up, resulting in a biased answer.
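To make that concrete, here's a quick sketch (Python's built-in round() happens to use round-half-to-even, and 5:30 pm is 17.5 on a 24-hour clock):

```python
# Round-to-even applied to a 9-hour day that starts and ends on the half hour.
arrive = 8.5   # 8:30 am
leave = 17.5   # 5:30 pm, on a 24-hour clock

print(round(arrive))                  # 8  -- the entry time rounds down (8 is even)
print(round(leave))                   # 18 -- the leaving time rounds up (18 is even)
print(round(leave) - round(arrive))   # 10 hours "at work" instead of the true 9
```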
The fancy way to take care of this would be to round 5’s at random. Basically flip a coin to decide whether to round up or down.
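Something like this, as a rough sketch in Python (round_half_random is just a made-up name for illustration, and the exact-half comparison assumes the tie value is exactly representable, as 4.5 is):

```python
import math
import random

def round_half_random(x):
    """Round to the nearest integer, flipping a coin when x is exactly halfway."""
    lower = math.floor(x)
    frac = x - lower
    if frac < 0.5:
        return lower
    if frac > 0.5:
        return lower + 1
    return lower + random.choice((0, 1))   # exact .5: coin flip, round up or down

# Half the time 4.5 becomes 4 and half the time 5, so there's no bias on average.
print([round_half_random(4.5) for _ in range(10)])
```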
There are ten possible options: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9. For five of the ten (0, 1, 2, 3, 4) we don't change the preceding digit. For the other five (5, 6, 7, 8, 9) we add one to the preceding digit.
As explained above, almost any time you have a 5 after the digit you are rounding to, there are more non-zero digits after it. So is 4.500000001 closer to 4 or 5? 5, right? So we round up to 5.
But in elementary school, the teachers don't teach THAT! Everything is boiled down to simplistic rules that teachers can parrot. So this entirely reasonable process has all of the concepts extracted out of it and is reduced to a husk of a rule to be memorized, "5 or higher," so when a perfectly valid question like "What about 4.5?" gets raised, the teacher responds, "Use the flippin' rule you son of a bitch."
Yes, but only in the case of 0 is the rounded number exactly the same as the unrounded one.
So, as septimus noted, if your rounding convention slightly decreases the value of the number when the final digit is {1,2,3,4}, slightly increases the value when the final digit is {5,6,7,8,9}, and leaves it unchanged when the final digit is zero, your rounded numbers will end up on average being slightly greater than their unrounded form, because you've got a 5-in-10 chance of an increase compared to only a 4-in-10 chance of a decrease. (Assuming that your final digits are randomly distributed, which isn't always the case.)
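To put numbers on it: take the ten one-decimal values 4.0 through 4.9. Their true average is 4.45, but under the 5-rounds-up rule five of them round to 4 and five round to 5, so the rounded values average 4.5 and you've picked up a bias of +0.05.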
Path dependency: The standards for different processes came about differently. It doesn’t really matter which you use, but it really matters that you use the same thing as other people in your field. So you end up with different standards, neither of which is better than another.
Analogy: drive on left vs drive on right. Some countries went one way, and some went another. There’s no real benefit to one over another, but switching is a major hassle, and it’s very important that everyone around you does the same thing.
There are subtle differences to each, so which you choose requires weighing the pros and cons of each, and different applications went with different standards based on their needs.
Analogy: different screw heads. You can attach things with flathead or philips or torx, but each is slightly different in what it’s best suited for, so not all screws use the same heads.
With real world measurements there's no such thing as a "final digit". Numbers that aren't measurements don't need to be rounded, except for show (reporting on our $20 trillion debt instead of the more exact $20,454,070,952.73 debt, for example), in which case bias doesn't matter.
This is how I was taught through all my classes in the 80s and early 90s. (And I had a math and science heavy curriculum up until my sophomore year of college.) “Banker’s rounding” and all the other assorted forms here I didn’t learn until probably well after college. Maybe even from this board.
More or less what I was going to say. “5 rounds up” does the right thing the majority of the time regardless of your rounding rules. It’s too hard to explain that 1.50000001 rounds up while 1.5 exactly may or may not depending on esoteric rules that aren’t even consistent. This is one of the least objectionable “lies to children” that comes up in math. They can learn the more advanced stuff when they start designing IEEE 754-compliant floating-point hardware units (and then there’s not even a 5 involved…).
I have a document that explains the different types of resize (rounding and saturation) functions we use in designing our digital signal processing hardware. It takes 4 pages. (A rough sketch of a few of them in code appears after the list.) The options are:
Truncation
Symmetric Truncation
Rounding (this is ".5 rounds up"; really ".1 rounds up", because it is all binary)
Odd Dither Rounding
Symmetric Rounding
Then there are the saturation options:
Asymmetric saturation
Symmetric saturation
Then there is
Sign Extension
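For a loose illustration only (these are not the actual definitions from that document), a few of those resize options applied to a signed value in Python might look something like this:

```python
def truncate(x, drop_bits):
    """Truncation: just throw away the low bits (rounds toward -infinity for two's complement)."""
    return x >> drop_bits

def round_half_up(x, drop_bits):
    """'.5 rounds up' (really '.1' in binary): add half an output LSB, then drop the low bits."""
    return (x + (1 << (drop_bits - 1))) >> drop_bits

def asymmetric_saturate(x, out_bits):
    """Clamp a signed value into the full two's-complement range of out_bits.
    A symmetric flavor would clamp to [-(2**(out_bits-1) - 1), 2**(out_bits-1) - 1] instead."""
    hi = (1 << (out_bits - 1)) - 1
    lo = -(1 << (out_bits - 1))
    return min(hi, max(lo, x))

x = 88                                # 5.5 in Q4 fixed point (88 / 16)
print(truncate(x, 4))                 # 5
print(round_half_up(x, 4))            # 6
print(asymmetric_saturate(200, 8))    # 127
```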
Last time this topic came up, I Googled and concluded that the Even Digit Convention might be an old, nearly forgotten idea. I learned it in the early 1960’s.
I’m curious: Are those of you who are familiar with this Rule (or Convention) old-timers like me?
That it is old and seldom taught does not make the Rule wrong, of course.
If we don’t want to waste students’ time learning a silly, seldom-applied rule of minimal value, then Yes! The Even-Digit Rule is the first thing to scratch from the curriculum!
But the beauty, and pedagogic purpose, of this Rule is NOT its actual utility. The Rule introduces the notion of bias, and presents a simple and elegant way to avoid that bias. It’s the thinking behind the Rule that might appeal to a teacher or student.
The reason I favor rounding 5s up is that you don’t always know where that number has been, and it’s possible that some idiot before you was truncating instead of rounding.
Example: I’m told that a certain number is 5.5, but I want to round it to an integer. Maybe the person who told me 5.5 was actually given a number to two places past the decimal, and converted it already himself. If he was rounding, too, then I still don’t know which way to go: Maybe he turned 5.48 into 5.5 (in which case I should round down), or maybe he turned 5.52 into 5.5 (in which case I should round up). But it’s also possible that he was truncating. In that case, the number he had might have been 5.51 or 5.53 or even 5.59, but it was not 5.49 or 5.57 or whatever, and so I should round up.
While that approach may have practical merit … it seems sad to base an arithmetic method on the inelegant assumption that some unnamed predecessor might have screwed up your inputs! :eek:
Not forgotten in the computer world. It’s the default rounding behavior for IEEE 754 floating point. I only know of the principle via that, though when I learned of the idea it was obvious that it would apply to other even bases.
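For instance, Python 3 floats are IEEE 754 doubles on essentially every platform, and its built-in round() uses the same ties-to-even convention, so the behavior is easy to poke at:

```python
# Ties go to the even neighbor...
print(round(0.5), round(1.5), round(2.5), round(3.5))   # 0 2 2 4

# ...but only for values that really are exact halves. 2.675 can't be stored
# exactly in binary, so by decimal intuition it rounds the "wrong" way:
print(round(2.675, 2))   # 2.67
```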
Intel solved this problem years ago. Simply run the calculation on the right sort of Pentium processor, and you’ll find that 4.4999999999992304 rounds down to 4, while 4.500000000000239545 rounds up to 5. Even if you thought you entered 4.5!
Benford’s law affects all digits in the sorts of natural samples it applies to for the same reason. Just like there are more 1s than 9s out there in the real measurable world, there are more 1.1s than 1.9s, and more 2.01s than 2.09s, and so on.
And while rounding affects the least-significant digit, it references the second-least-significant. So if you round toward even, you’re going to end up rounding a lot more 1.5s up to 2 than 2.5s down to 2, and a lot more 3.5s up to 4 than 4.5s down to 4, and so on.
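One rough way to see it, as a sketch: log-uniform samples have Benford-distributed leading digits, so you can generate some, keep one decimal so exact .5 ties actually occur, and count which way round-to-even sends them:

```python
import random

# Log-uniform samples in [1, 10) follow Benford's law for the leading digit.
samples = [round(10 ** random.uniform(0, 1), 1) for _ in range(100_000)]
ties = [x for x in samples if x % 1 == 0.5]            # the exact .5 cases

rounded_up = sum(1 for x in ties if round(x) > x)      # built-in round(): ties to even
rounded_down = sum(1 for x in ties if round(x) < x)

# The 1.5s outnumber the 2.5s, the 3.5s outnumber the 4.5s, and so on, so
# "up" wins noticeably more often even under round-to-even.
print(rounded_up, rounded_down)
```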
To clarify my previous statement, I didn’t mean that rounding to even introduces the same amount of bias that rounding 5s up does, but it will introduce bias.
Doesn’t a lot depend on just what it is we are rounding?
If it’s meds I’m giving a patient, 0.24 or 0.26 mg can be very different - to the point of life or death - from 0.2 mg (rounded down) or 0.3 mg (rounded up).
If it’s keeping track of how much is in my cart at Costco so I don’t blow my budget, it’s both easier and safer to round up, no matter what the “cents” are, even if .49 or below.
I’d probably feel better if the pilot is rounding up on the fuel needed to fly the plane from A to B, but my horses’ fuel (hay) only needs to be approximate.
Seems to me that tolerances, and thus rounding, depend on how critical the application is, and that no one rule fits all.
There is much in life that is serious. But sometimes we take life too seriously. (I just made this up.)