If you are trying to round “68.458335” to the nearest whole number, then the 8 (in 68) is what math.com calls the ‘rounding digit’, and you look only at the digit immediately to its right. (In math.com’s example, the goal is to ‘round to the nearest thousand.’)
So “68.458335” with the goal ‘round to the nearest whole number’ is 68, because you never look beyond the digit directly to the right of the rounding digit.
The fact that rounding this number to the nearest tenth produces 68.5 is of no consequence when a whole number is to be derived from it directly.
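To make that concrete, here is a minimal sketch in Python (using the decimal module with explicit half-up rounding, which is my choice of tool, not anything from math.com) of rounding once versus rounding in stages:

```python
from decimal import Decimal, ROUND_HALF_UP

x = Decimal("68.458335")

# Round once, straight to a whole number: .458335 is below .5, so it goes down.
print(x.quantize(Decimal("1"), rounding=ROUND_HALF_UP))       # 68

# Round in stages (to the tenth first, then to a whole number): the
# intermediate 68.5 then gets pushed up to 69, which is the mistake
# being described above.
tenth = x.quantize(Decimal("0.1"), rounding=ROUND_HALF_UP)    # 68.5
print(tenth.quantize(Decimal("1"), rounding=ROUND_HALF_UP))   # 69
```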
To illustrate Indistinguishable’s point, and using a process similar to your example, try this, for instance:
Add up and average all numbers from 10-29. What’s the average? 19.5
Add up and average all numbers in that range after rounding each to the nearest ten with the always-round-up method: 20
Add up and average all numbers in that range after rounding each with the round-to-even method: 19.5
As you can see, the always-round-up method introduces upward bias. The round-to-even method offsets this bias, assuming a fairly even distribution of numbers. (For instance, if all your data points are in the range 20-29 and you’re rounding to the nearest ten for whatever reason, your data will have a downward bias.)
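A quick way to check those three averages, as a sketch with my own helper functions for rounding an integer to the nearest ten (nothing here comes from the thread itself):

```python
def round_half_up_to_ten(n):
    """Round to the nearest ten, always pushing a tie (ones digit 5) upward."""
    return ((n + 5) // 10) * 10

def round_half_even_to_ten(n):
    """Round to the nearest ten, sending a tie to the nearest even multiple of ten."""
    tens, ones = divmod(n, 10)
    if ones > 5 or (ones == 5 and tens % 2 == 1):
        tens += 1
    return tens * 10

nums = range(10, 30)   # 10 through 29
print(sum(nums) / len(nums))                                    # 19.5
print(sum(round_half_up_to_ten(n) for n in nums) / len(nums))   # 20.0
print(sum(round_half_even_to_ten(n) for n in nums) / len(nums)) # 19.5
```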
Didn’t Julia Roberts once say to Richard Gere, “Well, sometimes there are expenses”? Anyway, don’t blame me. Of all the example numbers in all the gin joints in all the world, OP chose this one.
It depends on what sort of integer you’re rounding to. If you’re rounding to the nearest integer, then as others have said, 68 is nearer than 69. But if you’re rounding to, say, the nearest multiple of three, then you’d round to 69.
So if I was to ask you to round Pi to three decimal places, what would you do?
By the way, even if your method were correct, your very last line is not. By saying 69.0 you are implying that the first decimal place has been rounded to zero. The answer using your method is 69, not 69.0; there is a difference, and it does matter.
I guess I don’t see why that wouldn’t naturally happen on its own. Maybe it’s just the small sample size?
ETA: I would also think that, if there’s going to be a bias correction, it should extend further than .50. Why wouldn’t the two numbers ending in .49 introduce the same level of bias? I mean, we’re back to 2 + 2 = 5.
IOW, has this bias actually been found in experimental form?
To answer your specific question, you would round to 68, since 68 is the nearest integer.
In general engineering practice, one keeps five significant digits during calculations, rounding to three significant digits for the final answer (unless the leading digit is a 1, in which case you keep four sig digs). So if 68.458335 is an intermediate number, you would use 68.458 for your calculations; if 68.458335 is the final result of all your work, you would round it to 68.5.
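As a rough sketch of that rule of thumb in Python (the function names and the leading-digit check are mine, not a standard library feature or anything from the post):

```python
import math

def round_sig(x, sig):
    """Round x to `sig` significant figures."""
    if x == 0:
        return 0.0
    exponent = math.floor(math.log10(abs(x)))
    return round(x, sig - 1 - exponent)

def final_answer(x):
    """Three significant figures, or four when the leading digit is a 1."""
    if x == 0:
        return 0.0
    leading = int(abs(x) / 10 ** math.floor(math.log10(abs(x))))
    return round_sig(x, 4 if leading == 1 else 3)

print(round_sig(68.458335, 5))   # 68.458   (keep five digits for intermediate work)
print(final_answer(68.458335))   # 68.5     (three digits for the final answer)
print(final_answer(11234.0))     # 11230.0  (leading 1, so four digits are kept)
```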
The main points:
Round only once, at the end, when you have to show an answer that’s meaningful. Today’s calculators handle 8 or more digits. Each rounding creates an error; this example is a perfect construction to show why multiple roundings can create bigger errors. 68.45 is obviously closer to 68 than to 69. 68.5 is exactly halfway, hence the two different rules for what to do.
The answer should have the same accuracy (number of digits) as the inputs. If we start with “4 or 5 cups of salt in a couple of gallons,” the saltiness of the resulting water solution cannot be expressed as a 10.937% solution (to pull a totally bogus number out of the air) using those inputs. Unless your input measurements and procedures are incredibly precise, the above poster’s rule of 3 or 4 digits of accuracy max is best. If your input numbers are less precise, your accuracy is only as good as your worst input.
I remember a giant pile of gravel once; the bookkeeper had a fancy spreadsheet that told him how many pounds were in there, to the nearest 0.1 pound. The engineer laughed about it, saying “the weightometer on the belt is only accurate to 5%.” Even though outgoing material went over a much more precise weigh scale, a truck could pick up enough mud or wet material to change the actual weight of product going out by quite a few pounds, depending on the rain that day. Anything more than a 2-significant-digit number (or “estimate”) for the contents was meaningless.
Pick the digit you are rounding to. If the digit to the right is 0-4, leave it alone; if it’s 5-9, round up. Only round once, for the final result.
If you take physics or chemistry, there’s a whole set of calculations to be done to take the inputs’ error/uncertainty and calculate a resulting uncertainty or error range for the result. That’s why most scientific calculations in experiments are expressed as “142.3+/-0.4 units”.
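The post doesn’t spell out which calculations those are, but as one common example (an assumption on my part, not something stated in the thread), independent uncertainties on quantities being added or subtracted are often combined in quadrature:

```python
import math

def combined_uncertainty(*uncertainties):
    """Combine independent absolute uncertainties for a sum or a difference:
    take the square root of the sum of the squares."""
    return math.sqrt(sum(u ** 2 for u in uncertainties))

# Hypothetical example: adding (142.3 +/- 0.4) to a second measurement of
# (57.1 +/- 0.3) would give 199.4 +/- 0.5 units.
print(combined_uncertainty(0.4, 0.3))   # 0.5
```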
We just did this three months ago. Here’s a thread that discusses “round to even” in detail and why it’s better than “always round .5 up”.
The best explanation from that thread was this:
If you always round up on .5 you’re not actually rounding up half the time and rounding down half the time. 10% of the time you’re NOT ROUNDING AT ALL. So 50% of the time you’re rounding up, and 40% of the time you’re rounding down. That asymmetry introduces bias.
With the round to even method the breakdown is different:
10% of the time –> Not rounding at all
45% of the time –> Rounding up
45% of the time –> Rounding down
(This assumes you’re only dropping one digit in the rounding. As you drop more digits, the “not rounding at all” percentage gets smaller.)
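One way to see those percentages for yourself is with Python’s decimal module; the tally below over the one-decimal values 0.0 through 19.9 is my own sketch, not code from the linked thread:

```python
from decimal import Decimal, ROUND_HALF_UP, ROUND_HALF_EVEN
from collections import Counter

def tally(mode):
    """Count how often rounding one-decimal values to integers moves them
    up, down, or not at all."""
    counts = Counter()
    for k in range(200):                        # 0.0, 0.1, ..., 19.9
        x = Decimal(k) / 10
        rounded = x.quantize(Decimal("1"), rounding=mode)
        counts["up" if rounded > x else "down" if rounded < x else "unchanged"] += 1
    return counts

print(tally(ROUND_HALF_UP))    # up: 100 (50%), down: 80 (40%), unchanged: 20 (10%)
print(tally(ROUND_HALF_EVEN))  # up: 90 (45%),  down: 90 (45%), unchanged: 20 (10%)
```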
That’s right.
See, this is why it’s better to use real scientific error estimates (“142.3+/-0.4 units”), like md2000 said. Nearly all weird rounding issues come from taking shortcuts, like using significant figures.
Anyway, the reason to add another significant digit when the answer begins with “1” is that, as a percentage, the error in 11200 is much greater than in 31200. Remember, 31200 means something like 31200 +/- 50, so the percent error is 50/31200 = 0.2%. And in 11200 +/- 50, the percentage error is 0.4%. If you’re in a situation where you need 0.3% accuracy, 31200 is good enough, but 11200 isn’t. So (again, this is a rule of thumb because you’re using the shortcut of significant figures) you add another digit when the first digit is a one.
The reason for the difference is that when the first digit is a 1, the fourth significant digit represents a larger percentage of the total, and so it’s worth keeping that fourth digit.
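Checking that arithmetic with the exact values (plain Python, just the two numbers already used above):

```python
# Relative size of the same +/- 50 rounding uncertainty for the two examples:
for value in (31200, 11200):
    print(value, f"{50 / value:.2%}")
# 31200 -> 0.16%
# 11200 -> 0.45%
```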
Reading from my old statics/dynamics text:
Note that if numbers starting with a “1” are rounded to three digits, their rounding error can end up being larger than 0.2 percent of the value, which is why you go with four digits in those cases.
Another vote for “don’t round at all.” Never round your intermediate calculations if you can help it, only your final result. I’ve seen students work math problems where they give the final answer as something like 14.732, even though they’ve rounded one or more of the numbers they used to get that answer to only one decimal place, which is ridiculous.
For example, suppose you want to plug your number into the formula 1,000,000/x.
If you use 68.458335, you’ll get a result of 14,607.4… (about 14,607).
If you use 68, you’ll get 14,705.8… (about 14,706, a difference of around 100).
(And if you use 69, you get about 14,493.)
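A quick check of those three results (plain Python, rounded to one decimal place only to show the difference):

```python
for x in (68.458335, 68, 69):
    print(x, round(1_000_000 / x, 1))
# 68.458335 -> 14607.4
# 68        -> 14705.9
# 69        -> 14492.8
```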