All my life, I was taught to round up if the next digit is a 5. But lately I’ve been hearing that, if it’s exactly 5 (e.g. 0.5000, not 0.5001), I should round to the closest even number instead, to avoid bias.
But I can’t see how this doesn’t make things worse. Here’s my logic: no matter what precision the starting values have, each rounding target should get the same number of them, namely everything in the half-open chunk [-0.5, +0.5) around it.
Let me show an example, rounding to the nearest 10, with 25 as the halfway case. If I stick with integers, I have 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, which would normally be rounded to 20, and 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, which would normally be rounded to 30. That’s 10 each. If I use tenths (15, 15.1, 15.2, 15.3, etc.), I wind up with 100 each; with hundredths (15.00, 15.01, …), 1,000 each; and so on.
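Here’s a quick Python 3 check of that count (the round_half_up helper is just something I threw together for this post, not a standard function):

```python
# Count which integers from 15 to 34 round to 20 vs. 30 under
# "normal" rounding, where a trailing 5 always rounds up.
def round_half_up(n, base=10):
    # Shift by half the base, then truncate down to a multiple of the base.
    return ((n + base // 2) // base) * base

goes_to_20 = [n for n in range(15, 35) if round_half_up(n) == 20]
goes_to_30 = [n for n in range(15, 35) if round_half_up(n) == 30]
print(len(goes_to_20), len(goes_to_30))  # 10 10
```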
But, if you follow the “round 5 to evens” rule, then you get 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, which round to 20, and 26, 27, 28, 29, 30, 31, 32, 33, 34, which round to 30. That’s 11 and 9. If I use the tenths place, I get 15, 15.1, 15.2, …, 24.9, 25.0, which is 101 values that round to 20, and 25.1, 25.2, …, 34.8, 34.9, which gives me 99.
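And the same check with the “round 5 to evens” rule; as far as I can tell, Python 3’s built-in round() already rounds halfway cases to even, so round(25, -1) gives 20:

```python
# Count which integers from 15 to 34 round to 20 vs. 30 when halfway
# cases go to the even multiple of 10 (Python 3's default behaviour).
goes_to_20 = [n for n in range(15, 35) if round(n, -1) == 20]
goes_to_30 = [n for n in range(15, 35) if round(n, -1) == 30]
print(len(goes_to_20), len(goes_to_30))  # 11 9
```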
So, for any fixed starting precision, it would seem that this “round 5 to evens” rule introduces bias rather than removing it. Yet I’m told exactly the opposite.
Can anyone, using the level of math I’m used to if possible, show why the normal method of always rounding up on 5 is biased?
And, probably more importantly, can someone explain why my logic here is wrong?