Rules for rounding numbers

That’s only if your measurement is perfectly accurate to one more digit. If the next digit is not the final digit, rounding up on a 5 is fair. If following digits are considered, the table looks like this:



if the digit is        the rounding changes the value by:
--------------------------------------------------------
0                      -0   to -.099999...
1                      -.1  to -.199999...
2                      -.2  to -.299999...
3                      -.3  to -.399999...
4                      -.4  to -.499999...
5                      +.4  to +.499999...
6                      +.3  to +.399999...
7                      +.2  to +.299999...
8                      +.1  to +.199999...
9                      +0   to +.099999...


So, when you round down, you decrease the value by 0 to .4999…; when you round up, you increase the value by 0 to .4999…

It’s not lopsided at all.

(I just hope we all agree that 0.4999… = 0.5)
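A quick simulation makes the symmetry visible. This is just a sketch, assuming the digits past the rounding point are uniformly distributed: the average change introduced by round-half-up comes out essentially zero.

```python
# Sketch: average change introduced by round-half-up when the digits
# after the rounding point are uniformly distributed (an assumption).
import random

random.seed(0)
changes = []
for _ in range(100_000):
    x = random.random()                   # uniform in [0, 1)
    rounded = 1.0 if x >= 0.5 else 0.0    # round-half-up at the units place
    changes.append(rounded - x)

mean_change = sum(changes) / len(changes)
print(round(mean_change, 3))  # very close to 0: the rule is not lopsided
```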

As an aside, I fixed a computer program where the previous programmer rounded twice. So, 0.47 would be rounded to 0.5 and then up to 1.
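The double-rounding bug is easy to sketch. Here `round_half_up` is a hypothetical helper standing in for plain "round half up"; it is not from the original program:

```python
# Sketch of the double-rounding bug described above: round 0.47 to one
# decimal first (giving 0.5), then to an integer, and you get 1 instead
# of the correct 0.
import math

def round_half_up(x, ndigits=0):
    """Plain 'round half up' to ndigits decimal places (hypothetical helper)."""
    factor = 10 ** ndigits
    return math.floor(x * factor + 0.5) / factor

once = round_half_up(0.47)                      # 0.0 -- correct
twice = round_half_up(round_half_up(0.47, 1))   # 0.47 -> 0.5 -> 1.0 -- wrong
print(once, twice)
```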

That’s why you round after you have done the calculation, and then you supply an error range.

I suppose you mean the two values have been given to you by someone else who has done the rounding.

This reminds me of a question I have always wanted to ask … new thread.

If I were foolish enough to round before doing the math, I would round to 4 + 5 = 9. Where does the 4 + 6 come from?

The Grand old Duke of York
He had ten thousand men
He marched them up to the top of the hill
Then he marched them down again
And when they were up, they were up
And when they were down they were down
And when they were only halfway up
They were neither up nor down.

The Grand old Duke of York
He had five thousand men
He marched them halfway up the hill
Then he marched them halfway down again
And when they were halfway up, they were halfway up
And when they were halfway down they were halfway down
And when they were only a quarter of the way up
They were neither halfway up nor halfway down.

The Grand old Duke of York
He had two thousand five hundred men
He marched them a quarter of the way up the hill
Then he marched them a quarter of the way down again
And when they were a quarter of the way up, they were a quarter of the way up
And when they were a quarter of the way down they were a quarter of the way down
And when they were only an eighth of the way up
They were neither a quarter of the way up nor a quarter of the way down.

etc ad infinitum (infinitessum?)

I believe that “always round up on .5” is called Swedish rounding; it’s the only way I’ve ever heard of rounding.

I was taught early to “round to the even on .5”. In addition, part of my career was spent as a programmer in a financial institution, where a large part of the calculations involved adding a lot of numbers together. Rounding to even was the most accurate (even according to the government auditors).

This method seems intuitively obvious for money calculations to two digits (pennies in the US). It seems to me to be just as valid for whatever number of significant digits you need to round to.
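Python’s built-in `round()` happens to use round-half-to-even already, so the bias difference on a long column of .5-ending figures is easy to sketch:

```python
# Compare round-half-up against round-half-to-even ("banker's rounding",
# which Python's built-in round() uses) on a long sum of .5-ending amounts.
import math

amounts = [x + 0.5 for x in range(1000)]   # 0.5, 1.5, 2.5, ...
exact = sum(amounts)

half_up = sum(math.floor(a + 0.5) for a in amounts)   # always rounds .5 up
half_even = sum(round(a) for a in amounts)            # ties go to the even digit

print(half_up - exact)    # every item rounded up: bias of +0.5 each
print(half_even - exact)  # ups and downs cancel: no net bias
```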

I’ve never heard it called Swedish rounding.

In my experience, “rounding” means adding half of the next digit then truncating, so that you get to the closest digit. If you don’t add in the .5 then it’s simply called “truncating,” not rounding.

In other words, 1.326 rounded to the nearest hundredth is 1.33.
1.326 truncated to the nearest hundredth is 1.32.

I was also taught that whether 1.325 rounds to 1.32 or 1.33 is a matter of convention, but rounding up is more common than rounding down.

Rounding/truncation is a huge deal in computers. Newbie programmers are sometimes very surprised when something like (43/12)*12 = 36, not 43 (43/12 is 3.583333… which the computer truncates to 3, and 3*12 is 36).

Most computers simply truncate. If you want rounding you have to program it yourself.
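A sketch of the same surprise in Python, where `//` is the truncating (strictly, flooring) integer division and `/` is true division:

```python
# Integer division truncates; floating-point division keeps the fraction.
int_result = (43 // 12) * 12      # 43 // 12 is 3, so this is 36
float_result = (43 / 12) * 12     # 3.583333... * 12, very close to 43
                                  # (not guaranteed *exactly* 43.0 -- see
                                  # the floating-point discussion below)
print(int_result, float_result)
```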

I have heard this called “IEEE rounding” - Google doesn’t provide much help though, even with site:ieee.org

To be clear, the statement that computer programs truncate unless you explicitly tell them to round is correct, but at what digit it truncates depends upon the definition of the fields in the calculation. In the example above, the answer will only be an incorrect 36 if integer fields are being used. Floating point fields will provide a very accurate answer.

A mistake. You would only round before doing the math if you wanted to do a rough estimate. In almost all cases, banker’s rounding will get you a closer estimate than traditional rounding.

Floating point fields can spring surprises though because certain numbers cannot be precisely represented in floating point form - more details here:

[quote]
There are many situations in which precision, rounding, and accuracy in floating-point calculations can work to generate results that are surprising to the programmer. There are four general rules that should be followed:[ol]
[li]In a calculation involving both single and double precision, the result will not usually be any more accurate than single precision. If double precision is required, be certain all terms in the calculation, including constants, are specified in double precision. [/li][li]Never assume that a simple numeric value is accurately represented in the computer. Most floating-point values can’t be precisely represented as a finite binary value. For example .1 is .0001100110011… in binary (it repeats forever), so it can’t be represented with complete accuracy on a computer using binary arithmetic, which includes all PCs. [/li][li]Never assume that the result is accurate to the last decimal place. There are always small differences between the “true” answer and what can be calculated with the finite precision of any floating point processing unit. [/li][li]Never compare two floating-point values to see if they are equal or not- equal. This is a corollary to rule 3. There are almost always going to be small differences between numbers that “should” be equal. Instead, always check to see if the numbers are nearly equal. In other words, check to see if the difference between them is very small or insignificant.[/ol][/li][/quote]

  • You might be thinking that this is completely irrelevant, since it typically only causes a very small error in the stored numeric value, which you might think would affect only the last decimal place. But consider that a number you are treating as 3.5 may actually be stored as 3.499999999 (and may appear in your form fields as 3.5, even without explicit formatting). Banker’s rounding should round it up to 4 (the even choice), but it will actually round down to 3.
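Rule 4 from the quote above has standard-library support in Python: `math.isclose` checks near-equality instead of exact equality.

```python
# Sketch of rule 4: compare floats with a tolerance instead of ==.
import math

a = 0.1 + 0.2
b = 0.3
print(a == b)                 # False -- neither side is exactly 0.3
print(math.isclose(a, b))     # True  -- equal within a relative tolerance
```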

Can you explain this more? I don’t understand it. I thought each item, the ‘.’ and the ‘1’, would have a binary representation of some combination of eight 0s and 1s. Where does the infinite binary value come from?

I can’t personally explain it, but I do know that floating point numbers are not as intuitive as I once thought (I used to imagine that they would just be scaled integers, though not recently) - perhaps someone else can give us a primer on floating point storage.

In binary, 1/10 has a non-terminating representation. Just like 1/7 in base 10.
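You can see this directly in Python: the standard-library `decimal` and `fractions` modules will display the value that actually gets stored when you write `0.1`.

```python
# 1/10 has no finite binary expansion, so the stored double is only the
# nearest representable value. Decimal and Fraction can display it exactly.
from decimal import Decimal
from fractions import Fraction

print(Decimal(0.1))   # 0.1000000000000000055511151231257827021181583404541015625
print(Fraction(0.1))  # 3602879701896397/36028797018963968 (denominator is 2^55)
```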

Floating point arithmetic is a special case, and needs to be treated as such.

Perhaps I never should have introduced the concept of floating point fields in this thread? ~rueful smile~

My point was that the example given was misleading. On a computer, (43/12)*12 will not necessarily lead to an answer of 36.

Sheesh, I tried to make one lousy example and look what happens. Next time I’ll do a 3 page essay on how computers do math.

Just out of curiosity, I tried (43/12)*12 using Borland Turbo C++ for dos, and it gave me the answer 43.299999 using a float data type and 36 for an integer type.

I’ve heard of IEEE floats (which I have the spec for around here somewhere…) but I’ve never heard of IEEE rounding. If I get ambitious I’ll dig up the float spec and see what it says about rounding in it.

And since Mangetout asked, floating point numbers are stored in a variety of different ways in a computer. Typically, you have a value in binary which is 1.xxxx * 2^(yyyyyy). Quite often they drop the leading 1, which is always 1 unless the value is 0, and in this case they arbitrarily assign all zero bits as the number 0. This means that you only have to store the xxxx portion (the mantissa aka significand) and the yyyyy portion (the exponent).

The floating point value is assumed to be “normalized.” What this means is that for example in base 10, the number 10 can be represented as 10*10^1 or 1*10^2, etc. The mantissa has to be shifted so that the first digit is a 1 (and the exponent adjusted accordingly) for the value to be normalized.

Erm… that should say “the number 100 can be represented…”
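The layout described above can be poked at directly. This sketch assumes the IEEE 754 single (32-bit) format: 1 sign bit, 8 exponent bits (biased by 127), and 23 stored mantissa bits, with the leading 1 of the normalized mantissa left implicit.

```python
# Unpack a 32-bit IEEE 754 float into its sign / exponent / mantissa fields.
import struct

def float32_fields(x):
    (bits,) = struct.unpack('>I', struct.pack('>f', x))
    sign = bits >> 31
    exponent = (bits >> 23) & 0xFF   # biased by 127
    mantissa = bits & 0x7FFFFF       # the xxxx part, leading 1 dropped
    return sign, exponent, mantissa

print(float32_fields(3.5))   # (0, 128, 6291456): 3.5 == 1.75 * 2^(128-127)
```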

Actually, 3.5 is represented exactly as 3.5 in floating-point. Floating-point format, in fact, is why using banker’s rounding is very important. Quite often in floating-point calculations, the result has one more significant bit than what can be represented. So, the result must be rounded. But that bit can be either 0 or 1. So, 50% of the time, the result will not change, and 50% of the time, the result will be rounded up. Repeated calculations where the final bit must be rounded off will add a bias to the result unless banker’s rounding is used.

Take, for example, the value 33554432 (2^25). Only the 23 most-significant bits can be stored, so the “ones” column (basically, whether the number would be odd or even) can’t be stored. If you repeatedly added odd integers to your number using the round-up 0.5 method, you would be rounding up each time, creating a positive bias in your result. Using the banker-rounding method, you would round up 50% of the time, and round down 50% of the time.
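The 2^25 example can be sketched by squeezing a value through a 32-bit float (Python’s own floats are doubles, so `struct` is used here to simulate single precision):

```python
# Round-trip a value through IEEE 754 single precision. At 2^25, the
# spacing between representable floats is 2, so the ones bit is lost.
import struct

def to_float32(x):
    return struct.unpack('>f', struct.pack('>f', x))[0]

print(to_float32(33554432.0))  # 33554432.0 -- 2^25 fits exactly
print(to_float32(33554433.0))  # 33554432.0 -- the +1 is rounded away (to even)
```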