“Only the 24 most significant bits…” is what the above should say.
I just thought of something. Could it be that the reason for the OP is that Whack-a-Mole is trying to concoct some sort of Office Space scheme? ~mischievous grin~
I remember being taught about rounding in college (yes, my memory is that good). The “round to an even number” rule was called unbiased rounding. I never thought it had much of an application outside of math. It turns out, though, that machinists round to the even digit when working in thousandths of an inch.
Who’d a thunk it…
For the record, I distinctly remember being told a completely erroneous method at school; the maths teacher said we should round one digit at a time, starting from the least significant, so rounding 1.234567 to 2 decimal places would go:
1.234567
1.23457
1.2346
1.235
1.24
Which is bad, really bad (because 4,567 is way below halfway to 10,000), but as I say, I’m certain I was taught it.
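Just to make the failure concrete, here’s a little Python sketch (the helper name is mine) that plays out the digit-at-a-time method next to a single direct rounding:

from decimal import Decimal, ROUND_HALF_UP

def round_digit_at_a_time(x: Decimal, places: int) -> Decimal:
    """The erroneous schoolbook method: drop one digit at a time."""
    digits = -x.as_tuple().exponent          # current number of decimal places
    while digits > places:
        digits -= 1
        x = x.quantize(Decimal(1).scaleb(-digits), rounding=ROUND_HALF_UP)
    return x

x = Decimal("1.234567")
print(round_digit_at_a_time(x, 2))                          # 1.24 (wrong)
print(x.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP))  # 1.23 (right)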
This is why the program I was working on a while back specified string-based, old-style long division and multiplication, to some ridiculous number of significant figures (> 100). Very slow compared to normal maths in a computer, but not as bad as I was expecting, and reasonably easy to use once we got the library of tools in place. And with hardware these days, very slow in comparison to normal is still pretty fast.
I’d still like to know what kind of calculations they were doing that required better than 100 digits of precision. Oh, and getting back to the rounding, they then wanted to be able to round at any location, and specify either rounding method (round up or round to even).
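I never saw that library, of course, but Python’s decimal module can sketch the same idea: crank the precision past 100 digits, then round at any position with either rule (the choice of 1/7 is just for illustration):

from decimal import Decimal, getcontext, ROUND_HALF_UP, ROUND_HALF_EVEN

getcontext().prec = 120   # comfortably past 100 significant figures

q = Decimal(1) / Decimal(7)   # long division carried out to 120 digits
print(q)

# Round at an arbitrary position, with either rounding method:
print(q.quantize(Decimal("1e-50"), rounding=ROUND_HALF_UP))
print(q.quantize(Decimal("1e-50"), rounding=ROUND_HALF_EVEN))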
DancingFool
This is an interesting thread. I can say that in all of my education and professional life (as an engineer) I have never heard of anybody advocating or using this “banker’s rounding”. I have heard of it, but only in passing. I can see it has some advantages, but I can’t believe it would make any difference in anything I’ve ever worked on. I mean, if your round-off error is going to make or break something, then you’re not using enough significant figures.
Now, let me throw this out. What about rounding negative numbers? I never really thought about this before, but using the “traditional” method, I would round off a number like -1.25 to -1.3, which is really rounding down (depending on how you look at it). I think almost all the engineers I know would do it this way. Seems like this would remove the bias that some of you are claiming exists in this method.
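For what it’s worth, the two conventions really do split on negative halves. A quick check with Python’s decimal module, where ROUND_HALF_UP means “ties away from zero” (the traditional method above) and ROUND_HALF_EVEN is the banker’s rule:

from decimal import Decimal, ROUND_HALF_UP, ROUND_HALF_EVEN

for s in ("1.25", "-1.25"):
    d = Decimal(s)
    away = d.quantize(Decimal("0.1"), rounding=ROUND_HALF_UP)    # traditional
    even = d.quantize(Decimal("0.1"), rounding=ROUND_HALF_EVEN)  # banker's
    print(s, "->", away, "and", even)
# 1.25  ->  1.3 and  1.2
# -1.25 -> -1.3 and -1.2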
There’s more than one “standard” for floating point numbers. For the typical 32-bit real (often called IEEE floating point) the number is going to be stored as:
0100 0000 0110 0000 0000 0000 0000 0000
which is 1 bit for the sign, 8 bits for the exponent, and 23 bits for the mantissa. I’m not sure what you meant above, but it’s stored as mantissa and exponent. It’s not directly stored as the number 3.5.
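If you want to see that layout for yourself, here’s a minimal Python sketch that unpacks 3.5 into its sign, exponent, and mantissa fields:

import struct

# Pack 3.5 as a big-endian IEEE 754 single and pull the fields back out.
bits = struct.unpack(">I", struct.pack(">f", 3.5))[0]
print(f"{bits:032b}")                      # 01000000011000000000000000000000

sign     = bits >> 31                      # 0
exponent = ((bits >> 23) & 0xFF) - 127     # stored as 128; minus the bias of 127 -> 1
mantissa = 1 + (bits & 0x7FFFFF) / 2**23   # implicit leading 1 -> 1.75
print(mantissa * 2**exponent)              # 1.75 * 2^1 = 3.5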
There is also a thing in computers called “fixed point” math. Most processors don’t natively support it, but basically the number 1 would be represented by 0000 0000 0000 0001 0000 0000 0000 0000, which is binary 1.0 (assuming 16 bits, point, 16 more bits, which I just chose arbitrarily). I haven’t been involved in any financial software, but I suspect that banks use a lot of fixed point math in their calculations. A lot of 3-D PC games used to use fixed point math also, for the simple reason that you can do fixed point math using the integer opcodes, whereas floating point math requires the floating point opcodes. Especially on early Pentiums, the integer pipeline’s performance was significantly better than the floating point pipeline’s. You could easily do two fixed point operations in the same amount of time as a single floating point operation. These days the floating point performance of a CPU is pretty good, so I doubt many 3-D engines use fixed point math in their calculations.
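A minimal sketch of the idea in Python, assuming that same 16.16 format (the function names are mine, just for illustration):

FRAC_BITS = 16
ONE = 1 << FRAC_BITS        # 1.0 in 16.16 fixed point is 0x00010000

def to_fixed(x: float) -> int:
    return int(round(x * ONE))

def fixed_mul(a: int, b: int) -> int:
    # Ordinary integer multiply, then shift the extra fraction bits back off.
    return (a * b) >> FRAC_BITS

a, b = to_fixed(1.5), to_fixed(2.25)
print(fixed_mul(a, b) / ONE)   # 3.375, computed entirely with integer ops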
Not sure why you chose that particular number, but since it is a power of 2 you wouldn’t lose any bits at all due to rounding. And since nearby integers lose their last 2 bits (not 1), the computer can’t tell the difference between 33554432, 33554433, and 33554434. It stores all of them as the binary value:
0100 1100 0000 0000 0000 0000 0000 0000
which is:
0 - sign bit
1001 1000 - exponent (1001 1000 is the number 152; subtract 127 because it’s a “biased exponent” and you get 25)
000 0000 0000 0000 0000 0000 - mantissa (1.0)
So basically it’s storing the number as 1.0*2^25
The above data format is only for IEEE floats. A PC can also do 64-bit floats and 80-bit floats (they just have more bits assigned to the mantissa and exponent). Old VAX computers had four different floating point formats, ranging from 32 bits to 128 bits.
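That 2^25 example is easy to verify by round-tripping the integers through a 4-byte IEEE single; a Python sketch:

import struct

def as_float32(n: int) -> int:
    # Round-trip through a 32-bit float to see what actually got stored.
    return int(struct.unpack(">f", struct.pack(">f", n))[0])

for n in range(33554432, 33554437):
    print(n, "->", as_float32(n))
# 33554432 -> 33554432
# 33554433 -> 33554432
# 33554434 -> 33554432   (a tie, broken towards the even mantissa)
# 33554435 -> 33554436
# 33554436 -> 33554436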
Yes, I meant IEEE floating-point. What I meant about 3.5 was that the 3.4999999 mentioned by Mangetout would get rounded to 3.5.
As far as choosing 2^25, I did so because I made a fencepost error. I should have used 2^24.
But even with 2^25, you would still accumulate a larger error from not using banker rounding if you added random integers to the number. Instead of 1/2 of the numbers rounding up by 1 (and the rest landing exactly), you would have 1/4 of the numbers round up by 1, 1/4 round up by 2, 1/4 round down by 1, and 1/4 land exactly.
With banker rounding at 2^24, you would have 1/4 of the numbers rounding up by 1 and 1/4 rounding down by 1 (the rest landing exactly). With 2^25, you’d have 1/4 of the numbers rounding up by 1, 1/4 rounding down by 1, 1/8 rounding up by 2, and 1/8 rounding down by 2.
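That distribution can be checked directly by round-tripping 2^25 + k for k = 0 through 7, using the same trick as above:

import struct

def as_float32(n: int) -> int:
    return int(struct.unpack(">f", struct.pack(">f", n))[0])

errors = [as_float32(2**25 + k) - (2**25 + k) for k in range(8)]
print(errors)       # [0, -1, -2, 1, 0, -1, 2, 1]
print(sum(errors))  # 0 -- the ups and downs cancel out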
Here is a site with some information on IEEE floating point numbers. There are four IEEE rounding modes: round to nearest, round towards zero, round towards +infinity, and round towards -infinity. It also says (I’d never come across this before) that round to nearest has three tie-breaking variants: half integers can round towards zero, away from zero, or towards the nearest even. Typically, the rounding mode of your code can be specified using compiler flags, usually with round to nearest even as the default.
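Python’s decimal module exposes those same three tie-breaking choices by name, if you want to experiment at the decimal level (the hardware float rounding mode itself isn’t reachable from pure Python):

from decimal import Decimal, ROUND_HALF_DOWN, ROUND_HALF_UP, ROUND_HALF_EVEN

half = Decimal("2.5")
print(half.quantize(Decimal("1"), rounding=ROUND_HALF_DOWN))  # 2 (ties toward zero)
print(half.quantize(Decimal("1"), rounding=ROUND_HALF_UP))    # 3 (ties away from zero)
print(half.quantize(Decimal("1"), rounding=ROUND_HALF_EVEN))  # 2 (ties to even)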
I suspect that the reason for “round half-integers away from zero” is that zero is special, since x*0 = 0. Consider the number 0.5, rounded to the nearest integer. By one argument, rounding up or down gives the same error, an absolute error of 0.5. But you can also measure the error in relative terms: 1 is only twice as large as 0.5, but 0.5 is infinitely larger than 0.
The correct way to answer this is to look at the average error. The best rounding method will produce an average error of 0.
Consider the simplest case where you’re only dropping one digit:
– Round Up On 5 –
0.0 --> 0 (error = 0.0)
0.1 --> 0 (error = -0.1)
0.2 --> 0 (error = -0.2)
0.3 --> 0 (error = -0.3)
0.4 --> 0 (error = -0.4)
0.5 --> 1 (error = 0.5)
0.6 --> 1 (error = 0.4)
0.7 --> 1 (error = 0.3)
0.8 --> 1 (error = 0.2)
0.9 --> 1 (error = 0.1)
Sum these and divide by ten and you get an average error of 0.05, which introduces a bias in repeated calculations.
– Banker’s Method (round .5 to nearest even) –
0.0 --> 0 (0.0)
0.1 --> 0 (-0.1)
0.2 --> 0 (-0.2)
0.3 --> 0 (-0.3)
0.4 --> 0 (-0.4)
0.5 --> 0 (-0.5)
0.6 --> 1 (0.4)
0.7 --> 1 (0.3)
0.8 --> 1 (0.2)
0.9 --> 1 (0.1)
1.0 --> 1 (0.0)
1.1 --> 1 (-0.1)
1.2 --> 1 (-0.2)
1.3 --> 1 (-0.3)
1.4 --> 1 (-0.4)
1.5 --> 2 (0.5)
1.6 --> 2 (0.4)
1.7 --> 2 (0.3)
1.8 --> 2 (0.2)
1.9 --> 2 (0.1)
Sum the errors and divide by 20 … 0.0 is the average error. The banker’s method is the superior rounding method.
If you increase the number of decimal places, the bias of “round up on 5” gets smaller but never goes entirely away. With two decimal places you’d have a bias of 0.005; with three, 0.0005.
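Those averages can be ground out by brute force for any number of decimal places; a quick Python sketch:

from decimal import Decimal, ROUND_HALF_UP, ROUND_HALF_EVEN

def average_error(digits: int, mode) -> Decimal:
    # Round every value with `digits` decimal places in [0, 2)
    # to the nearest integer, then average the errors.
    step = Decimal(1).scaleb(-digits)
    values = [step * k for k in range(2 * 10 ** digits)]
    errors = [v.quantize(Decimal(1), rounding=mode) - v for v in values]
    return sum(errors) / len(errors)

for d in (1, 2, 3):
    print(d, average_error(d, ROUND_HALF_UP), average_error(d, ROUND_HALF_EVEN))
# half-up averages 0.05, 0.005, 0.0005; half-even averages 0 every time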
This becomes particularly important when you’re doing iterative computations (solving a differential equation, for example). You can’t avoid some rounding at each stage of the game, and if you use a method with a built-in bias, the accumulating error can skew your results.
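As a crude illustration (not any particular solver), here’s a repeated accumulate-and-round over random increments; the half-up total drifts away from the true sum while half-even stays close:

import random
from decimal import Decimal, ROUND_HALF_UP, ROUND_HALF_EVEN

random.seed(1)
increments = [Decimal(random.randrange(1000)).scaleb(-3) for _ in range(10_000)]

def accumulate(mode) -> Decimal:
    total = Decimal(0)
    for inc in increments:
        # Round the running total to two decimals at every step.
        total = (total + inc).quantize(Decimal("0.01"), rounding=mode)
    return total

exact = sum(increments)
print(accumulate(ROUND_HALF_UP) - exact)    # drifts upward, roughly +0.0005 per step
print(accumulate(ROUND_HALF_EVEN) - exact)  # fluctuates near zero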