IEEE 754 +0 single precision

This doesn’t make sense to me. How could +0 in IEEE 754 be all zeros? If you want to go ahead and give me a basic overview on how the conversion works in the first place that would be fine.

Here is what I think I know about single:

Normalize, show the sign (- or +) in the first bit, then add 127 to the exponent and show it in the next 8 bits. Then (and here is what confuses me) drop the implied one (where does this magic bit go for zeros?) and put the rest of the number in the mantissa, followed by zeros.

Am I missing something? Know of a converter or code for a converter that I could use? It’s zeros and negative zeros that confuse me.

Something has to represent zero. The format is:

[sign bit] [8 bits of exponent] [23 bits of mantissa]

When representing a binary value, you’ve already noted you normalize it. So:

11001011.110101

in binary-point would be normalized as:

1.1001011110101 * 2^7

by moving the binary point until it is just after the first 1.

The leading 1 is dropped, since it must always be there for any value other than exactly zero, and so is implicit in any math. This value would be stored as:

0 10000110 10010111101010000000000

Note the excess 127 representation of the exponent, which is really 7, but represented as 134.
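If it helps, here’s a quick C sketch (my own throwaway code, nothing official) that pulls those three fields back out of that same value, 11001011.110101 binary, which is 203.828125 in decimal:

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    int main(void)
    {
        float f = 203.828125f;          /* 11001011.110101 in binary, 0x434BD400 as stored */
        uint32_t bits;
        memcpy(&bits, &f, sizeof bits); /* reinterpret the float's raw bit pattern */

        unsigned sign     = bits >> 31;            /* 1 bit */
        unsigned exponent = (bits >> 23) & 0xFF;   /* 8 bits, excess-127 */
        unsigned mantissa = bits & 0x7FFFFF;       /* 23 bits, implied leading 1 dropped */

        printf("sign     = %u\n", sign);                                   /* 0 */
        printf("exponent = %u (really %d)\n", exponent, (int)exponent - 127); /* 134, really 7 */
        printf("mantissa = 0x%06X\n", mantissa);   /* 0x4BD400 = 10010111101010000000000 */
        return 0;
    }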

An all-zero bit pattern would have you believe the value really is:

1.0 * 2^-127

if you decoded it according to the above rules.

But something has to represent zero. So this extremely small value is chosen to represent zero instead, and any floating point math libraries or chips know this.
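You can see that convention for yourself by peeking at the bits of 0.0f (another quick sketch along the same lines):

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    int main(void)
    {
        float zero = 0.0f;
        uint32_t bits;
        memcpy(&bits, &zero, sizeof bits);
        printf("0.0f = 0x%08X\n", bits);   /* prints 0x00000000: all 32 bits clear */
        return 0;
    }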

There is an alternate representation with the sign bit set, but it is never used as zero for two reasons (that I can think of):

  1. It’s stupid to have 2 representations of 0 because you have to have additional circuitry or code to handle it in test-for-zero. Granted, it means some additional work when doing the math itself, but there’s something objectionable about having two representations for the same value.

  2. With an all-zeros bit pattern, it is really easy to do a test against exactly zero, because most processors already have circuitry for a 4- or 8-byte all-zeros bit test.

In any event, nobody’s much going to miss that incredibly small number that zero has to steal, and again, something has to represent zero.

Er, with regards to that dropped implied one, consider any binary-point number you can possibly imagine except zero.

Then, use an appropriate exponent to shift the binary point until there is only a single 1 to the left of it.

You’ll find that you can always do this (except for exactly zero). There’s always a 1 somewhere, and there’s always a leftmost 1. This means that there’s no point in actually storing that 1 in computer memory, is there? It’s a wasted bit. So instead we get an extra bit of precision at the other end. We just have to remember the existence of that implied 1 when decoding a float and doing math with it.
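A quick way to convince yourself of that wasted bit (again, just a throwaway C sketch): 1.5 and 3.0 are both 1.1 binary times some power of two, so they store exactly the same 23 fraction bits and differ only in the exponent field.

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    static uint32_t fraction_bits(float f)
    {
        uint32_t bits;
        memcpy(&bits, &f, sizeof bits);
        return bits & 0x7FFFFF;          /* the 23 stored mantissa bits */
    }

    int main(void)
    {
        /* 1.5 = 1.1 * 2^0 and 3.0 = 1.1 * 2^1: same stored fraction, only
           the exponent differs, because the leading 1 is never written down. */
        printf("1.5f fraction = 0x%06X\n", fraction_bits(1.5f));  /* 0x400000 */
        printf("3.0f fraction = 0x%06X\n", fraction_bits(3.0f));  /* 0x400000 */
        return 0;
    }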

So the value of this randomly chosen pattern:

10110100010010010111010001011010

is:

-1.8761912201625818852335214614868 * 10^-7

because the sign is negative, the mantissa is:

1.10010010111010001011010

(note the implicit 1 added), and the binary exponent, after subtracting 127, is -23.
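If you want to check that by machine rather than by hand, here’s a rough C sketch that does the same decoding (sign, excess-127 exponent, fraction with the implicit 1 restored) for that same bit pattern; it ignores the special cases like zero, Inf, and NaN:

    #include <stdio.h>
    #include <stdint.h>
    #include <math.h>

    int main(void)
    {
        uint32_t bits = 0xB449745AU;     /* 1 01101000 10010010111010001011010 */

        int      sign     = (bits >> 31) & 1;
        int      exponent = (int)((bits >> 23) & 0xFF) - 127;   /* remove the excess 127 */
        uint32_t fraction = bits & 0x7FFFFF;

        /* Put the implicit leading 1 back: value = (-1)^sign * 1.fraction * 2^exponent */
        double mantissa = 1.0 + fraction / (double)(1 << 23);
        double value    = (sign ? -1.0 : 1.0) * mantissa * pow(2.0, exponent);

        printf("%.17g\n", value);        /* about -1.8761912201625819e-07 */
        return 0;
    }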

Negative zero is used. Certainly in Matlab, where I’ve encountered it, and also in any other language which supports IEEE 754.
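You can poke at it yourself with a couple of lines of C (just a sketch, but any IEEE 754 compliant setup should behave the same way):

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    int main(void)
    {
        float pz = 0.0f, nz = -0.0f;
        uint32_t pbits, nbits;
        memcpy(&pbits, &pz, sizeof pbits);
        memcpy(&nbits, &nz, sizeof nbits);

        printf("+0.0f = 0x%08X, -0.0f = 0x%08X\n", pbits, nbits); /* 0x00000000 vs 0x80000000 */
        printf("equal? %d\n", pz == nz);      /* 1: they compare equal, as the standard requires */
        printf("1/-0.0f = %f\n", 1.0f / nz);  /* -inf on IEEE hardware: the sign can still matter */
        return 0;
    }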

From this site:

More generally, from that same site, look under the heading Special Values:

Alright, I concede defeat on those esoteric points. :slight_smile: I’ve personally never seen negative zero ever come up in my tests, but I suppose it depends on the math libraries you’re using. I wonder why I’ve never seen it.

My apologies for accidentally submitting personal experience for fact. I should have known better.

Of course, now I expect that if a math library is compliant, negative zero won’t give it any hiccups. :smiley:

Yeesh.

Neat site. I didn’t know what the formats were for the NaNs and the Infs. Thanks.

First of all, thanks for your help.

So… I am afraid to ask… do you actually use this type of math in your occupation? If it is not too imposing, may I ask what you do for a living? Or do you just go around converting numbers for fun? :slight_smile:

The reason I ask is because I have this whole “impending career” thing… The real world calls.

Heh. Unless you’re:

a) having to write a math library at the assembler level, or
b) having to design a digital circuit to implement certain math operations, or
c) having to port between two incompatible floating point data formats,

you rarely have to deal with the actual bit pattern representation of a floating point number. You just use the math libraries/hardware. Both software and hardware solutions exist for most of this stuff already in virtually every conceivable development environment.

I know at least some (but obviously not all heh) about the representation out of mostly academic interest. A few times in my job I’ve had to look at a data stream byte by byte to make sure certain data came out right, some of which are floating point numbers, so I have had to decode them once in a blue moon. But it’s pretty rare.

However, my job (since you asked the question to the room) is chock full of floating point numbers, even if I don’t need to decode them at the bit level by hand very often. I work for a company that produces high-precision Global Positioning System gear. It’s kind of a neat job.

Messing with math, science, and related computing isn’t everybody’s bailiwick, but for myself, I find it really cool when I can get a computer to do something nontrivial.

Anyway, I can’t imagine a career specifically geared around decoding floating point number bit patterns… It’s a pretty small point to be gearing career decisions around. There’s an infinite variety of neat things to do. :slight_smile:

To expand on William_Ashbless’s answer just a bit, one of the points of using IEEE floating point is that you don’t have to worry about all these things. You’ll get sensible answers if possible, and if not you’ll get Infs or NaNs. Much better than getting a random bit pattern which might look like a valid number. Just be aware that NaNs and Infs exist, but don’t worry about them.
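For example (a small C sketch; the same behaviour shows up in Matlab or anything else sitting on IEEE 754 hardware):

    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        double pos_inf   = 1.0 / 0.0;   /* divide by zero: you get +inf, not a random bit pattern */
        double not_a_num = 0.0 / 0.0;   /* an undefined operation: you get NaN */

        printf("%f %f\n", pos_inf, not_a_num);               /* inf nan */
        printf("NaN == NaN? %d\n", not_a_num == not_a_num);  /* 0: NaN compares unequal even to itself */
        printf("isinf? %d  isnan? %d\n", isinf(pos_inf), isnan(not_a_num));
        return 0;
    }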

I work in electromagnetics and use Matlab, where NaNs come up often. Where I run into it most often, Matlab uses NaN to signal an out-of-range point in a 2-D interpolator, and I have to replace the NaNs with the default value. Probably not what IEEE had in mind when they came up with NaN, but what else could they use?
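In C terms the cleanup amounts to something like this (a sketch, not Matlab code; the function name and DEFAULT_VALUE are just stand-ins):

    #include <math.h>
    #include <stddef.h>

    #define DEFAULT_VALUE 0.0   /* hypothetical fill-in for out-of-range points */

    /* Replace any NaN produced by the interpolation step with a default value. */
    void replace_nans(double *data, size_t n)
    {
        for (size_t i = 0; i < n; i++) {
            if (isnan(data[i]))
                data[i] = DEFAULT_VALUE;
        }
    }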

Also keeps you from having to invent your own esoteric mechanism for keeping track of numeric precision unless you absolutely positively have to.

In rare cases I’ve had to change to fixed-point math or use some clever mechanisms of my own, since floating point math (in software, anyway) can be computationally vicious…

If you’re a programmer, it’s good to have some idea of what’s actually going on at the machine level when you run your code. There’s no particular application of this stuff for the average programmer, but you never know when you’ll be called upon to be more than average.

Well, that’s true, and hence the reason I knew how to basically decode and encode such a number. :slight_smile:

Sad thing is, I’ve run into a fair few programmers who disagree, and couldn’t give a rat’s behind how this data format or that algorithm works – they’d rather just use it and go home. They just shrug when I point out that if something goes wrong, they’ll have far less information on where to look by not knowing at least fundamentally how something works.

I dunno if my agreeing with you makes me a better than average programmer, but I’ll take my knowledge where I can get it in case it helps.