Why is "floating point" (a computing term) called floating point?

Floating point is defined as follows in the book “Python 3 for Absolute Beginners” by Tim Hall:

“… is a fundamental type used to define numbers with fractional parts.”

Why is “floating point” (a computing term) called floating point? The term seems to have no connection to any of the meanings of the word “float”. Please see these dictionary entries:

Oxford Dictionary Entry
Wiktionary

You may check other dictionaries if you would like to. I cannot relate the explanation in the Python book to any of the meanings in the dictionaries.

The point is not “fixed” - it “floats”. With “fixed” point representations you can have a much smaller range of values.

It basically means to “move about in a fluid manner”, without much resistance – and there are similar definitions for that in most dictionaries, including the two you listed.

Floating point basically uses scientific notation with a decimal point that can float around as necessary.

The Simple English version of the Wikipedia article explains it reasonably well.

A fixed point representation has the binary point fixed to a specific location in a word for a particular calculation. In floating point, by specifying the exponent, you can put the binary point wherever you want it. The advantage of floating point is that you get a much greater range of values.
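
If it helps, here's a toy Python sketch of that difference (just the idea, nothing like the actual hardware layout):

```python
# Toy illustration: the same bit pattern read two ways.

bits = 0b101101          # six binary digits, value 45

# Fixed point: the binary point is agreed on in advance (here, three fraction bits),
# so every value is just this integer divided by the same constant scale.
FIXED_SCALE = 2 ** 3
print(bits / FIXED_SCALE)          # 5.625, i.e. 101.101 in binary

# Floating point: each value carries its own exponent, so the point can go anywhere.
for exponent in (-6, -3, 0, 3):
    print(bits * 2.0 ** exponent)  # 0.703125, 5.625, 45.0, 360.0
```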

Von Neumann was famously opposed to floating point because if you didn’t know where the binary point was you shouldn’t be doing the calculation.

Thanks a lot for the post. In the Wikipedia article you linked to, it says:

1101.0111 equals 13.7 (13 + 7/10), right? Did I get that right?

No, the first spot after the binary point (the binary equivalent of the decimal point) is worth 1/2, the next is 1/2^2 = 1/4, and the next two are 1/8 and 1/16, so this is 13 + 1/4 + 1/8 + 1/16 = 13.4375.

Or to state it another way, 13 + 7/16.
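
For anyone who wants to check the arithmetic, here's the same sum in Python:

```python
# 1101.0111 in binary: the integer part 1101 is 13, and the fraction bits are worth
# 1/2, 1/4, 1/8, 1/16 reading left to right.
value = 0b1101 + 0 * (1/2) + 1 * (1/4) + 1 * (1/8) + 1 * (1/16)
print(value)                          # 13.4375
print(13 + 7/16)                      # 13.4375 -- the "other way" of stating it
print(int("11010111", 2) / 2 ** 4)    # 13.4375 -- treat it as an integer and shift the point
```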

And the reason that you seldom see mention of fixed point numbers is because they’re really just integers with different units. For instance, one might have a number in a computer representing weight in pounds, measured to a precision of 1/16 pound… But that’s just the same thing as weight measured in integer ounces. And if there isn’t a pre-existing unit that corresponds to the precision you want, then you just invent one.
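
A tiny Python sketch of that “sixteenths of a pound” idea (the names here are just made up for illustration):

```python
SIXTEENTHS_PER_POUND = 16    # our invented unit: one "tick" = 1/16 pound

def to_fixed(pounds):
    """Store a weight as an integer count of sixteenths of a pound."""
    return round(pounds * SIXTEENTHS_PER_POUND)

def to_pounds(ticks):
    return ticks / SIXTEENTHS_PER_POUND

w = to_fixed(2.3)
print(w)              # 37 -- just an integer, in units of 1/16 pound
print(to_pounds(w))   # 2.3125 -- the original weight, to the nearest sixteenth
```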

If I may offer an alternate explanation directly to the OP, a floating point number can have any number of digits before the decimal point and any number after, limited only by the precision available on the machine. It may be confusing to a beginner to say that the point floats, because for any given number it stays right where it is. But you can use the same floating-point variable to hold a number with 1 digit after the decimal point, or 9 digits after the decimal point, or none.

In contrast, a “fixed point” number always has the same number of digits after the decimal point. Currency in dollars for everyday retail transactions is an example of this, always using two digits after the decimal point (although things like gas prices and some financial transactions use three digits).
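
In Python terms, roughly (the cents-as-integer trick is one common way to fake fixed point):

```python
# One floating-point variable can hold numbers of very different "shapes":
x = 3.0
x = 0.000000123456789   # nine-plus digits after the point
x = 12345678.9          # one digit after the point

# Fixed point for currency: always exactly two decimal places, so just count cents.
price_cents = 1999                 # $19.99
total_cents = 3 * price_cents
print(f"${total_cents // 100}.{total_cents % 100:02d}")   # $59.97
```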

I’ve always found it helpful to think of the phrase “floating point” as being slightly a misnomer.

When writing numbers in standard scientific notation (e.g.: 4.3565486 * 10^16) the decimal point is, by convention, “fixed” in the position just to the right of the first digit. But you’ve moved the digits around in order to write it like that, and you wrote the exponent to tell how far you’ve moved the digits.

In other words, I think of this as actually keeping the decimal point in one place and letting the digits float left or right until just one digit is to the left of the point.

The principle is the same with “floating point” numbers stored in binary in a computer. The bits are shifted left or right until just one non-zero digit is to the left of the binary point, and the exponent field is adjusted accordingly. So again, the point stayed in one place and the surrounding digits moved.
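
You can watch Python do exactly this. float.hex() shows the normalized “one digit left of the point” form, and math.frexp() does a similar decomposition (it happens to scale the mantissa into [0.5, 1) rather than [1, 2), but the idea is the same):

```python
import math

x = 13.4375                  # 1101.0111 in binary

# Normalized form: one non-zero digit left of the point, times a power of two.
print(x.hex())               # 0x1.ae00000000000p+3, i.e. 1.1010111... * 2**3

# frexp() gives back the shifted digits and the exponent that records the shift.
mantissa, exponent = math.frexp(x)
print(mantissa, exponent)         # 0.83984375 4
print(mantissa * 2 ** exponent)   # 13.4375 -- the digits "floated", the value didn't
```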

There’s an additional quirk in the way binary floating point is stored in most modern computers: Since the left-most digit must be non-zero, and the only possible digits are 0 and 1, that digit must be a 1 and never anything else. Thus, it’s redundant to actually store that digit. The standard layout for floats in a modern computer, therefore, doesn’t actually include that digit; it’s just understood to be there. This allows for one additional bit of precision in the mantissa.
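
If anyone wants to see that hidden bit from Python, the struct module will hand you the raw 64 bits of a standard IEEE 754 double (1 sign bit, 11 exponent bits, 52 stored fraction bits):

```python
import struct

x = 13.4375
(raw,) = struct.unpack(">Q", struct.pack(">d", x))   # the raw 64 bits of the double

sign     = raw >> 63
exponent = (raw >> 52) & 0x7FF            # stored with a bias of 1023
fraction = raw & ((1 << 52) - 1)          # the 52 mantissa bits that are actually stored

print(sign, exponent - 1023, hex(fraction))   # 0 3 0xae00000000000

# The leading 1 isn't in those 52 bits -- it's implied, and gets put back like so:
print((1 + fraction / 2 ** 52) * 2.0 ** (exponent - 1023))   # 13.4375
```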

It might be better to say “fixed exponent” and “variable exponent”: Any number can be represented in scientific notation, with a mantissa multiplied by a radix raised to an exponent. For example, 1776 is 1.776*10^3, where 1.776 is the mantissa, 10 is the radix, and 3 is the exponent. I’ll continue to use 10 as the radix even though computers use 2, because it’s difficult for humans to come to terms with radical change.

Integers are the most familiar form of fixed exponent numbers: Their exponent is always zero, because anything raised to the zeroth power is one. (Which implies 0^0 is equal to… ? ;)) What are commonly called “fixed point” numbers are fixed exponent numbers with an exponent other than zero, usually less than zero. They’re represented in hardware by allocating a fixed number of bits to the mantissa and nothing else.

Variable exponent numbers are what’s commonly called “floating point” numbers. They’re created by allocating some fixed number of bits to the mantissa, a fixed number of bits to the exponent, and usually a bit to represent the sign as well.
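
Staying with radix 10, here's a toy Python version of both schemes (purely illustrative; real hardware does this in binary with dedicated bit fields):

```python
# Fixed exponent: every value shares one agreed exponent (here -2, i.e. hundredths),
# so only the integer mantissa needs to be stored.
def fixed_value(mantissa, shared_exponent=-2):
    return mantissa * 10 ** shared_exponent

print(fixed_value(177600))     # 1776.0 -- stored simply as the integer 177600

# Variable exponent: each value carries its own (mantissa, exponent) pair.
def floating_value(mantissa, exponent):
    return mantissa * 10 ** exponent

print(floating_value(1.776, 3))    # 1776.0
print(floating_value(1.776, 12))   # 1776000000000.0 -- same mantissa, the exponent does the scaling
```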

Both of these kinds of representation are considered special because we’ve been able to create hardware which works on those representations directly, or almost directly. Once you leave the realm of what mostly-hardware can do, you get to more interesting and useful representations which can’t always be understood in terms of having a fixed or variable exponent; true rational numbers are an example.

This is mostly true, but there is a class of representations called denormal (or subnormal) numbers where this is not the case. Basically, when the exponent is pegged at its smallest possible value, the standard squeezes in a few more, smaller (but lower-precision) representations by letting the most significant digit be zero and the non-zero digits slip to the right.

The existence of these representations is important for some guarantees about underflow in addition and subtraction.
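
A quick demonstration from Python (CPython floats are IEEE 754 doubles on essentially every current platform):

```python
import sys

tiny = sys.float_info.min     # smallest *normal* positive double
print(tiny)                   # 2.2250738585072014e-308

# Below that, the format keeps going with denormals: the implied leading 1 is dropped
# and precision tapers off gradually instead of snapping straight to zero.
print(tiny / 2)               # 1.1125369292536007e-308 -- a denormal, not 0.0
print(5e-324)                 # 5e-324, the very smallest positive denormal
print(5e-324 / 2)             # 0.0 -- only now does it underflow

# The underflow guarantee: for floats, x - y is zero only if x really equals y.
x, y = 3e-308, 2.5e-308
print(x - y)                  # about 5e-309 -- nonzero, thanks to denormals
```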

Just a quick semi-trivial practical note: the OP’s definition stated that floating point “… is a fundamental type used to define numbers with fractional parts”. This is a practical definition that applies to many (but not all) programming languages that have no such data type as “fixed point”. Basically in a plain old scientific programming language like FORTRAN, numeric data was either integer or it was floating point. There wasn’t anything else, unless the programmer wanted to assign his own convention and pretend there was a “virtual” decimal point wherever he wanted in certain integer variables.

However, this was not the case for COBOL, another old language that was a contemporary of FORTRAN and used for financial systems. No way that financial systems would subject large dollar amounts to the rounding vagaries of floating point; IIRC, COBOL basically had flexible data types and so you could define any sort of fixed-point variables you wanted. So in COBOL you could indeed have “numbers with fractional parts” that were not floating point, and the OP’s definition wouldn’t be correct.
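
Python shows the same split today, for what it's worth: binary floats have the rounding vagaries, while the decimal module (strictly speaking decimal floating point, but in the same exact-decimal spirit) keeps money honest:

```python
from decimal import Decimal

# Binary floating point can't represent most decimal fractions exactly...
print(0.10 + 0.20)            # 0.30000000000000004

# ...whereas decimal arithmetic keeps dollars-and-cents amounts exact.
price = Decimal("19.99")
print(price * 3)              # 59.97
print(Decimal("0.10") + Decimal("0.20"))   # 0.30
```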

COBOL also had (famously) BCD, binary-coded decimal. A byte represented two decimal digits: the first four bits were one digit, the second four the other. (So some byte values were not allowed in this format.) The IBM 360 (and some other mainframes) had processor instructions that could do math between two arbitrary-length BCD numbers, also taking the information on where the decimal point was in each number. This was essential for business math, as it eliminated not just floating point but also the conversion between binary and decimal; hence the old joke that in some versions of FORTRAN, 2 x 2 = 3.999999999.
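
A simplified packed-BCD packer in Python, just to show the nibble layout (real IBM packed decimal also keeps a sign nibble, which I've left out):

```python
def pack_bcd(digits):
    """Pack a string of decimal digits two per byte, one digit per 4-bit nibble."""
    if len(digits) % 2:
        digits = "0" + digits                 # pad to an even number of digits
    return bytes((int(hi) << 4) | int(lo)
                 for hi, lo in zip(digits[0::2], digits[1::2]))

def unpack_bcd(packed):
    return "".join(f"{b >> 4}{b & 0x0F}" for b in packed)

packed = pack_bcd("1995")
print(packed.hex())       # 1995 -- read the hex dump and you can see the decimal digits
print(unpack_bcd(packed)) # 1995
```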

Nice note – I had forgotten all about BCD for years! :smiley: I never had anything to do with commercial programming languages like COBOL.

And let’s not even get started on the… unique… form of BCD used by HP calculators. The engineers at HP were very clever, but they seemed to have a sort of pathological need to continually re-invent the wheel.

I know nothing about the… unique… form of BCD used by HP calculators. But I’ve worked with the Univac SS-90 machine (an early solid-state computer from the early 1960’s or so, I think). This box had a main memory consisting of 5000 words of 10 decimal digits each, all on a rotating magnetic drum. Each decimal digit consisted of 4 bits. (Thus, this is about the equivalent of 25000 modern bytes.)

The encoding of each decimal digit was weird: It was a coding they called biquinary. The place-values of the four bits were 5, 4, 2, 1 rather than 8, 4, 2, 1. The coding rule was that the digits from 5 through 9 were encoded using the 5-bit. So 6 would be 1001 (5+1) rather than 0110 (4+2).

The six unused bit-patterns were called “un-digits”, and could easily be created and manipulated since the machine also had some bitwise boolean operations. Some of them were used for special purposes in the hardware logic too. For example, in multiplying two numbers, if one of the operands was a constant of less than 10 digits, one could put a particular un-digit in the position just left of the high-order non-zero digit, and that would stop the multiplication process at that position, thus making the multiply run faster.
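
A little Python sketch of that 5-4-2-1 coding, as I understand the description:

```python
def encode_5421(digit):
    """Encode one decimal digit with bit weights 5, 4, 2, 1; digits 5-9 always use the 5-bit."""
    if digit >= 5:
        return 0b1000 | (digit - 5)   # the 5-bit plus 0..4 in the 4-2-1 bits
    return digit                      # 0..4 use the 4-2-1 bits alone

for d in range(10):
    print(d, format(encode_5421(d), "04b"))

# 6 indeed comes out as 1001 (5 + 1), and six patterns are never produced:
# 0101, 0110, 0111, 1101, 1110, 1111 -- the "un-digits".
```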

ETA: The machine also had three registers, and these could also be addressed as if they were words in memory by putting “special” addresses into the address fields of instructions; the registers could even be the destination address of a BRANCH instruction (!). These special addresses also used the un-digits.

This is a good thread, if only because I have been around… forever. Trying to implement division in BCD is a royal pain. Go ahead, try it. And I want to see 500 decimal places with no rounding error.

I timed out and wasn’t able to add, “You Pikers”. Go ahead, implement it.

My first quarter in college, I took an upper-division assembly language class (CDC 6400), having learned FORTRAN II earlier. Our first assignment was a warm-up review problem, to be written in FORTRAN: Compute, down to the units’ digits, the 999th and 1000th Fibonacci numbers, and the product of those.
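
Purely as a footnote: in Python that warm-up is nearly a one-liner, since its integers are exact and arbitrarily long, so there's no fixed- or floating-point issue at all:

```python
def fib(n):
    """Return the nth Fibonacci number (fib(1) == fib(2) == 1), using exact integers."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

f999, f1000 = fib(999), fib(1000)
print(f999 * f1000)                     # all 418 digits, down to the units digit
print(len(str(f999)), len(str(f1000)))  # 209 209
```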

No division, though.

I agree that “floating point” is a bit of a misnomer.

Fixed point is a bit of a misnomer too. It’s appropriate for any variable that always has the same precision (such as inputs and outputs to a calculation), but intermediate variables or registers change scale frequently in the course of a calculation. (Of course, to a mathematician, that register or variable is a conceptually different thing at each stage in the calculation. As programmers, we get a bit obsessed with the cups, whereas the mathematician appropriately thinks only of the contents, as it changes from hot water to tea to sweetened tea. Now where’s the cream?)

I did a bit of fixed point coding in assembly, for auto engine control, on 8- and 16-bit computers. A serious PITA! But mostly, a bookkeeping nuisance with few conceptual surprises. The conceptual surprises come when you expect a compiler to do it. I was involved in a project where a new compiler handled fixed-point for you, something vaguely like COBOL or PL1, where you can define the variables and the compiler chooses the precision and range for intermediate results.

Well, it turns out you get to choose between nice tight semantics that produce truly bizarre-looking results from seemingly ordinary math, or looser semantics that usually produce “sensible” results but have bizarre corner cases. What we often don’t realize when we write a math expression is how much we might be demanding of intermediate storage for temporary results.

A similar kind of problem happens with floating point, too. For example, in some digital audio applications, the mathematician can write out matrix equations that solve a problem (implement a filter, for example). The math can be relatively elegant and produce ideal results in theory, but when implemented in a straightforward way, the results don’t match. The reason is that the analytic equations might rely on an intermediate term with two near-infinites or near-infinitesimals that divide to produce a number in the normal range. If you don’t have enough bits of precision, it goes south very quickly.
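
Here's a small, generic Python illustration of that effect (not the audio case itself): the answer lives in the difference of two nearly equal quantities, and the naive form runs out of bits long before the algebraically equivalent one does.

```python
import math

# (1 - cos(x)) / x**2 should approach 0.5 as x shrinks, but subtracting two nearly
# equal numbers (1 and cos(x)) cancels away almost all of the significant bits.
for x in (1e-4, 1e-6, 1e-8):
    naive  = (1 - math.cos(x)) / x ** 2
    stable = 0.5 * (math.sin(x / 2) / (x / 2)) ** 2   # same quantity, rearranged to avoid the cancellation
    print(f"x={x:g}  naive={naive:.15f}  stable={stable:.15f}")
# By x = 1e-8 the naive version has collapsed to 0.0 while the stable one still reads ~0.5.
```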

The good news is that floating point (especially double precision, which is the default for things like C function parameters) pushes most of those cases into applications where fairly serious math is involved, so they don’t usually bite us math-challenged folks in the ass too often. Scientists may not be so lucky.