A money program that doesn't use floats

So I’ve finally decided to check out Ruby and perhaps use Ruby on Rails for a project I’m working on, but while reading a guide to learning Ruby (that is aimed at the non-coder), the author states that:

:dubious:

Ummmmm. Now, I’ve never written a money program, though the project I’m working on will involve money. Does this idea seem silly to anyone else? Or would using 2 ints actually save more memory than a single float?

I think that the main reason behind this is the inherent problem of comparing floating point numbers. You really want your money manipulations to come out precisely right. Some fairly innocent decimal values turn into infinitely repeating fractions when converted to hex (think of something like the decimal 1/3 = .33333… happening in hex), so there are many times when floating point numbers will end up slightly out of sync even when they should be the same.
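Since the OP mentioned Ruby, here's a quick Ruby snippet (the same thing happens in just about any language that uses binary floats) showing two calculations that ought to be equal but aren't:

    # Two ways of arriving at what "should" be the same value:
    a = 0.1 + 0.2
    b = 0.3

    puts a        # => 0.30000000000000004
    puts b        # => 0.3
    puts a == b   # => false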

Of course, I tried to use two nearby calculators to quickly toss off a conversion of some simple decimal that gets ugly in hex (I think .2 does), only to find that both of them were too lame to actually show anything other than integers in hex.

Why two ints? You just store the amount in pennies ($2.78 is 278). It’s a lot faster, easier, and more portable than using floats. Floats are bad for money since they can cause weird rounding problems and such, plus testing for equality is a major pita: two different calculations that should arrive at the same number can give you slightly different floats. Also, you’d be in big trouble if you ever had a 10 cent transaction *snicker* (0.1 is not representable in standard binary floating point numbers).
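To make that concrete, here's a rough Ruby sketch (illustrative only) of the integer-cents idea next to the 10-cent problem:

    # Store money as integer cents: $2.78 is 278.
    price_cents = 278
    total_cents = price_cents * 3            # exact: 834
    dollars, cents = total_cents.divmod(100)
    puts format("$%d.%02d", dollars, cents)  # => $8.34

    # Ten 10-cent transactions as floats:
    float_total = 0.0
    10.times { float_total += 0.10 }
    puts float_total         # => 0.9999999999999999
    puts float_total == 1.0  # => false

    # The same ten transactions as integer cents:
    cents_total = 0
    10.times { cents_total += 10 }
    puts cents_total == 100  # => true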

Ah! I found a cool calculator page that shows this:

IEEE-754 Floating-Point Conversion

I tried “.2” and got this: 3FC999999999999A
.3 became 3FD3333333333333

You can see that you are going to have to be dealing with “if (abs(b - a) < epsilon)” kind of stuff all over the place if you are working with these numbers.
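Something like this, say (a Ruby sketch with a made-up tolerance; note that with integer cents you never need this dance at all):

    EPSILON = 1e-9   # tolerance picked purely for illustration

    def money_equal?(a, b)
      (a - b).abs < EPSILON
    end

    a = 0.1 + 0.2
    b = 0.3
    puts a == b              # => false
    puts money_equal?(a, b)  # => true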

Delphi and .NET both have scaled integer types intended for monetary calculations: Currency and Decimal, respectively. The purpose of those types is to keep precision, not to save memory or improve performance. While fixed point math can be faster than floating point, I doubt Decimal is any faster than Double, since it’s such an unusual format.
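Ruby doesn't have a Currency type built in, but the standard library's BigDecimal plays a roughly similar role to .NET's Decimal: exact decimal arithmetic, traded off against speed. A quick sketch:

    require 'bigdecimal'

    a = BigDecimal("0.1")
    b = BigDecimal("0.2")

    puts a + b == BigDecimal("0.3")  # => true (exact decimal arithmetic)
    puts (a + b).to_s("F")           # => 0.3

    # Compare with binary floating point:
    puts 0.1 + 0.2 == 0.3            # => false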

Floating point’s inability to store common decimal fractions exactly is part of the problem. Another part is the fact that floating point numbers only provide a fixed number of significant digits, which is independent of their range. If you’re dealing with just a few hundred bucks at a time, you can know exactly how much you have… but if you’re dealing with billions or trillions of dollars, you might only be able to store the amount to the closest thousand. With integers, you either have full precision or you have an overflow.
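To put a rough number on it (Ruby again, where a Float is a 64-bit double with about 15-16 significant decimal digits; the figures are just for illustration):

    big = 10_000_000_000_000_000.0   # ten quadrillion dollars as a float
    puts big + 0.01 == big           # => true -- the cent simply vanishes

    # An integer count of cents has no such limit (Ruby integers grow as needed):
    big_cents = 1_000_000_000_000_000_000
    puts big_cents + 1 == big_cents  # => false -- the cent is still there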

An int is usually 32 bits (might be 16 on some older compilers). A float is also 32 bits, but part of that is to store the exponent, so the actual number of bits to store your number is only 24. For money, you generally want everything accurate to the penny, so rounding errors due to floats would be a really bad thing. Stick to integers.
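You can see that 24-bit limit from Ruby by round-tripping a value through a 32-bit float with pack (Ruby's own Float is a 64-bit double; this is just to demonstrate the point):

    # "f" packs a value as a single-precision (32-bit) float in native byte order.
    def as_single(x)
      [x].pack("f").unpack("f").first
    end

    puts as_single(16_777_216.0)  # => 16777216.0 (2**24, still exact)
    puts as_single(16_777_217.0)  # => 16777216.0 (2**24 + 1 won't fit in 24 bits)

    # In money terms, a 32-bit float can't even count cents exactly past about $167,772.16.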

However, that said, you have to be very careful with integers in a computer. If you start with the number 14, divide it by 40, then multiply that result by 40, you’ll get 14 again, right? Wrong. You’ll most likely end up with zero.
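That's because integer division throws away the remainder, so the order of operations matters. In Ruby:

    x = 14
    puts x / 40          # => 0 (the remainder is discarded)
    puts (x / 40) * 40   # => 0, not 14

    # Multiply first (or carry the remainder along) to keep integer math exact:
    puts (x * 40) / 40   # => 14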

Integers and floats are supported natively by the processor. Integer math, being simpler, is faster than float math. The Decimal format is not natively supported by the processor, so it’s going to be slower. Fixed point math (if your compiler supports it) translates to integer instructions, so it’s pretty fast.
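Here's a rough sketch of what scaled-integer (fixed point) money math looks like in Ruby, with a made-up 7.25% rate scaled by 10,000:

    RATE_SCALE = 10_000   # rates stored as integers: 7.25% is 725

    def apply_rate(amount_cents, scaled_rate)
      # Integer math throughout; adding half the divisor before dividing
      # rounds to the nearest cent instead of truncating.
      (amount_cents * scaled_rate + RATE_SCALE / 2) / RATE_SCALE
    end

    subtotal = 1999                       # $19.99
    tax      = apply_rate(subtotal, 725)  # => 145 cents ($1.45)
    puts subtotal + tax                   # => 2144, i.e. $21.44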

I hate to be a philistine, but for most programs involving money, the speed of the underlying CPU instructions and the number of bytes per number are really irrelevant.

Compared to the size & speed of the rest of the program and it’s IO and UI requirements, the LAST thing a programmer should be worrying about are low-level issues like that. That WAS important stuff 40 years ago, but not today.

For example …

Doing 10,000 iterations of matrix multiplication on a 10,000 x 10,000 matrix? Sure, you’d better think about the speed of integer versus decimal versus floating point math.

Reading a few thousand financial records from a database on a local disk or from a server across a LAN, and then summing the account values for display via an HTTP connection out to a browser, or even directly on the workstation’s console? Pah. The difference between integer & FP math would be undetectable (a throwaway benchmark below puts rough numbers on it).
The important low-ish level issues are having sufficient precision to avoid overflow/underflow and getting the calculations accurate, including rounding and comparison. The higher level issues are compatibility with your data store and any services or components you need to interface with.
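If you want to put rough numbers on that, a throwaway Ruby benchmark along these lines (purely illustrative; timings vary by machine) shows how little the arithmetic costs next to a database read or an HTTP round trip:

    require 'benchmark'
    require 'bigdecimal'

    cents  = Array.new(100_000) { rand(1_000_000) }  # made-up balances in cents
    floats = cents.map { |c| c / 100.0 }
    decs   = cents.map { |c| BigDecimal(c) / 100 }

    Benchmark.bm(16) do |x|
      x.report("Integer sum")    { cents.sum }
      x.report("Float sum")      { floats.sum }
      x.report("BigDecimal sum") { decs.sum }
    end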

At work, we have an online expense reporting system. Like most things online, it kind of sucks because it’s taking what should be a simple thing to do with a GUI and putting it onto the web. So, I like to keep track of expenses using Excel, and then once in a while I’ll actually put everything into the online system and claim my money. Oh, yeah, I’m in a foreign country, so exchange rates come into play.

Now between Excel and the online system, there are some float calculations that aren’t coming out the same, because every, single, bloody month I’m off by a penny or two either way. There’s no mistake; the amounts are perfectly in sync, and both sets of numbers are mirror images of each other. But for some reason, the server and Excel end up with a small margin of difference between them when it comes to floats.

I guess this doesn’t help you much, other than to illustrate that there are real-world impacts of using floats. Since our exchange rates are quoted down to hundred-thousandths of a cent (really!), you’d need really long integers to represent them.
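For what it's worth, here's one hypothetical way a penny can go missing between two systems: one rounds starting from a binary float, the other from an exact decimal (made-up numbers, Ruby only for illustration):

    require 'bigdecimal'

    price = 1.00
    rate  = 1.005   # pretend exchange rate that lands exactly on a half cent

    # Float path: 1.00 * 1.005 is really 1.00499999999999..., so naive
    # half-up rounding ends up at 100 cents.
    float_cents = (price * rate * 100 + 0.5).floor
    puts float_cents   # => 100

    # Exact-decimal path: 1.005 really is 1.005, so it rounds up to 101 cents.
    exact_cents = (BigDecimal("1.00") * BigDecimal("1.005") * 100 + BigDecimal("0.5")).floor
    puts exact_cents   # => 101

And since Ruby integers are arbitrary precision, scaling a rate with that many decimal places up into an integer isn't actually a problem, if you go that route.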