Something has to represent zero. The format is:
[sign bit] [8 bits of exponent] [23 bits of mantissa]
When representing a binary value, you’ve already noted you normalize it. So:
11001011.110101
in binary-point notation would be normalized as:
1.1001011110101 * 2^7
by moving the binary point until it is just after the leading 1.
The leading 1 is dropped, since it must always be there for any value other than exactly zero, and so is implicit in any math. This value would be stored as:
0 10000110 10010111101010000000000
Note the excess-127 representation of the exponent: the real exponent is 7, but it is stored as 7 + 127 = 134.
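If you want to see those fields for yourself, here is a minimal C sketch (assuming a platform where float is an IEEE 754 single-precision value, which is nearly universal) that pulls the sign, exponent, and mantissa out of 203.828125, the decimal value of 11001011.110101:

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void)
{
    /* 203.828125 decimal is 11001011.110101 binary -- the value from the example. */
    float f = 203.828125f;

    /* Copy the raw bit pattern into an integer so it can be picked apart. */
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);

    unsigned sign     = bits >> 31;            /* 1 sign bit      */
    unsigned exponent = (bits >> 23) & 0xFFu;  /* 8 exponent bits */
    unsigned mantissa = bits & 0x7FFFFFu;      /* 23 mantissa bits, leading 1 implicit */

    printf("sign     = %u\n", sign);                        /* 0              */
    printf("exponent = %u (excess-127 for %d)\n",
           exponent, (int)exponent - 127);                  /* 134, really 7  */
    printf("mantissa = 0x%06X\n", mantissa);                /* 0x4BD400       */
    return 0;
}
```

memcpy is used rather than a pointer cast so that copying the bit pattern does not run afoul of strict aliasing.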
An all-zero bit pattern would have you believe the value really is:
1.0 * 2^-127
if you decoded it according to the above rules.
But something has to represent zero. So this extremely small value is chosen to represent zero instead, and any floating point math libraries or chips know this.
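To watch that convention in action, here is a rough sketch (same IEEE 754 single-precision assumption as above) that reinterprets the all-zeros bit pattern as a float; it comes out as exactly zero, not 1.0 * 2^-127:

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void)
{
    uint32_t bits = 0x00000000u;   /* the all-zeros bit pattern         */
    float f;
    memcpy(&f, &bits, sizeof f);   /* reinterpret those bits as a float */

    printf("value   = %g\n", f);            /* prints 0        */
    printf("is zero = %d\n", f == 0.0f);    /* prints 1 (true) */
    return 0;
}
```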
There is an alternate representation with the sign bit set, but it is not used as the canonical zero for two reasons (that I can think of):
- It’s stupid to have 2 representations of 0, because you have to have additional circuitry or code to handle the second one in a test-for-zero. Granted, it means some additional work when doing the math itself, but there’s something objectionable about having two representations for the same value.
- With an all-zeros bit pattern, it is really easy to test against exactly zero, because most processors already have circuitry for a 4- or 8-byte all-zeros bit test (a sketch of that test follows this list).
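As a sketch of that second point (is_all_zeros is a made-up helper for illustration, not how any particular processor or library spells it), the test boils down to one integer compare of the float’s bit pattern against zero:

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical helper: test for zero with a single integer compare of
 * the float's 4-byte bit pattern against all zeros. */
static int is_all_zeros(float f)
{
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);
    return bits == 0u;
}

int main(void)
{
    printf("%d\n", is_all_zeros(0.0f));   /* 1: bit pattern is 0x00000000          */
    printf("%d\n", is_all_zeros(0.5f));   /* 0: the exponent bits are not all zero */
    return 0;
}
```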
In any event, nobody’s much going to miss that incredibly small number that zero has to steal, and again, something has to represent zero.