There might be a more specialized place to discuss this, but I know there are folks around here who know this area.
I was answering a question on an Excel discussion board about why the following expression
(4/20) > (1-0.8)
evaluates to True in VBA, when intuition says the two sides should be equal. I gave an explanation of how not all decimal numbers can be represented exactly in binary. The value 0.2, for example, is the repeating binary fraction 0.001100110011…, which has to be rounded to fit in a finite number of bits. Because division and subtraction are different operations, the two expressions above can arrive at slightly different binary values, even though exact decimal arithmetic gives 0.2 for both. I have a computer science background and at least two books on the shelf that discuss IEEE binary representation. I understand it fairly well, but I was never an expert and it's been many years.
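Since VBA's Double is the same IEEE 754 binary64 format as a Python float, the effect is easy to reproduce outside Excel. A quick sketch, using Python's decimal module to display the exact binary values that get stored:

```python
from decimal import Decimal

a = 4 / 20   # division rounds the true quotient once, to the double nearest 0.2
b = 1 - 0.8  # 0.8 is already stored inexactly, so the subtraction inherits its error

print(a > b)       # True: the two stored doubles differ
print(Decimal(a))  # the mathematically exact value stored for 4/20
print(Decimal(b))  # the mathematically exact value stored for 1 - 0.8
```

`Decimal(float)` shows the stored double exactly: 4/20 comes out a hair above 0.2 (0.20000000000000001110…), while 1 - 0.8 comes out a hair below it (0.19999999999999995559…), which is why the comparison sees them as unequal.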
But the OP asked a follow-up question that stumped me: OK, so 0.2 is not represented exactly. But if I print 4/20 and 1-0.8, I see 0.2 and 0.2. So if there is representation error, why doesn't the conversion back to decimal for printing reflect it?
I can think of one answer, but it seems to contradict the comparison result. One possibility is that the "advertised" precision of a printed floating-point number is slightly less than what is actually stored in binary. For example, storing 0.2 as a 64-bit double gives a relative representation error on the order of 2[sup]-52[/sup], which is about 2 x 10[sup]-16[/sup]; but maybe the conversion to decimal for printing rounds more coarsely, say to the nearest 4 x 10[sup]-16[/sup]. The two numbers would then be stored as different values but displayed as the same one. But then why would the comparison use the exact stored bits instead of rounding to display precision first?
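That guess about display precision can be checked directly. Again using Python as a stand-in for VBA's binary64 Double, and assuming (as VBA's Print appears to do) that the default display rounds to roughly 15 significant decimal digits: at 15 digits the error rounds away, while a full round-trip string exposes it.

```python
a = 4 / 20
b = 1 - 0.8

print(f"{a:.15g}")  # 0.2  — rounded to 15 significant digits
print(f"{b:.15g}")  # 0.2  — the error lives below the 15th digit, so it rounds away
print(repr(a))      # 0.2                  (shortest string that round-trips to a)
print(repr(b))      # 0.19999999999999996  (17 digits are enough to distinguish any two doubles)
```

The comparison operator, by contrast, never goes through this decimal conversion: it compares the full 53-bit significands as stored. The rounding happens only at display time, which is exactly why two doubles can print identically yet compare unequal.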