Y’all are looking at it far too granularly. Let’s expand this to something else you’re used to - dollars and cents. Now, instead of 1, 1.1, 1.2, etc., you get 1000 numbers:
Sum all of those up, and you get 999.495; if you round them off, you get 999.50.
Now, let’s go another decimal place - 1.0001 etc.
Now, let’s go another decimal place - 1.00001 etc.
Now, let’s go another decimal place - 1.000001 etc.
This goes on forever. If you are looking at real numbers (theoretically), the chance of hitting EXACTLY 1.0 or EXACTLY 1.5 is infinitesimally small, so there is no “bias” in rounding .5 up to the next higher integer. It looks bad when you’re looking at 1 decimal place, but the universe of real numbers does not end at 1 decimal place.
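You can put a number on that with a quick simulation - a Python sketch, using random values rather than any real data set. It rounds half-up to a whole number and measures the average drift per value, once for values with 1 decimal place (where exact .5 ties are common) and once for values with 6 decimal places (where they essentially never happen):

```python
import random
from decimal import Decimal, ROUND_HALF_UP

def half_up(x: Decimal) -> Decimal:
    """Ordinary 'schoolbook' rounding to a whole number: ties go up."""
    return x.quantize(Decimal("1"), rounding=ROUND_HALF_UP)

random.seed(1)
N = 100_000
biases = {}

# 1-decimal values in [0, 10): every x.5 is an exact tie, so half-up drifts high.
coarse = [Decimal(random.randrange(0, 100)) / 10 for _ in range(N)]
# 6-decimal values in [0, 10): landing EXACTLY on x.500000 almost never happens.
fine = [Decimal(random.randrange(0, 10_000_000)) / 1_000_000 for _ in range(N)]

for name, xs in [("1 decimal", coarse), ("6 decimals", fine)]:
    bias = (sum(half_up(x) for x in xs) - sum(xs)) / N
    biases[name] = bias
    print(f"{name}: average rounding error per value = {bias}")
```

With 1-decimal values the average drift comes out around +0.05 per value (one tie in every ten values, each pushed up); with 6-decimal values it is indistinguishable from zero - exactly the point above.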
OK, let’s look at reality. When you’re doing calculations on real data, you’ll quite often get “nice” values like 1.0 or 1.5. If you’re rounding to a whole number at that point, YOU HAD BETTER BE DEALING WITH MUCH LARGER NUMBERS - anyone running a calculation on actual dollars isn’t going to be concerned about the difference between $1,000,001 and $1,000,002 caused by rounding pennies. But if you’re at the cash register at the local Kroger, you’re going to notice the difference between $1.00 and $2.00.
The problem of “rounding bias” shows up when you round to too few decimal places. I can’t count how often I’ve seen spreadsheets (or programs!) where people round intermediate results to the penny. Let’s say you are calculating future manufacturing costs. Right now, you can create 100 widgets at a cost of $149.50, and you’re thinking of franchising out to 10,000 people. Your cost per widget is $1.495. If you do something extremely silly and round that off to the nearest dollar, you’re going to estimate that your new manufacturing costs will be $1 * 100 * 10,000 = $1,000,000, when your real costs should be $1,495,000.
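That arithmetic is easy to check in a few lines - a Python sketch using the figures from the paragraph, with decimal.Decimal so the cents stay exact:

```python
from decimal import Decimal, ROUND_HALF_UP

batch_cost = Decimal("149.50")   # today's cost to make 100 widgets
widgets_per_batch = 100
franchisees = 10_000             # each franchisee makes one 100-widget batch

per_widget = batch_cost / widgets_per_batch   # exactly 1.495 - keep all the digits

# The silly way: round the intermediate per-widget cost to the nearest dollar.
rounded_early = per_widget.quantize(Decimal("1"), rounding=ROUND_HALF_UP)
estimate = rounded_early * widgets_per_batch * franchisees   # $1 * 100 * 10,000

# The right way: carry the exact intermediate value; round only at the very end.
actual = per_widget * widgets_per_batch * franchisees

print(f"estimate: ${estimate}, actual: ${actual}")   # off by $495,000
```

The fix isn’t a fancier rounding rule - it’s rounding once, at the end, instead of at every intermediate step.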