Who said there isn’t? AFAIK, the only practical use for binary is that computers use it, and the only practical use for hex is that it’s easier for humans to read than binary. And there’s no reason to use “decimal” points in computing. But there’s no reason you couldn’t do it anyway, if you wanted to. I once figured out a bunch of fractions in binary, using long division, just for fun. I think 1/11 (that’s 1/3 in decimal) was 0.0101010101…
Ah, but there are binary and hexadecimal points, and they work just as you’ve described.
Sure there is! Well, except for the fact that computers deal with “binary” points, and not decimal points, for internal computation (and the fact that the binary point is implicit rather than explicit). Anyway, every single 3D computer game you’ve ever played uses floating point math.
If we were talking about pure math and number bases, then yeah, the “decimal point” notation works just fine for these number bases too. But when dealing with binary and hex, we usually bias our arithmetic towards how a computer works. Adding and subtracting binary and hex is done in 2’s-complement arithmetic inside the computer (i.e. -1 = FFFF in 16-bit hex). Multiplication takes a couple of steps (unless it’s by a power of 2, in which case it’s a simple shift-left).
Division is a sore spot with computers. In C, if you write 12/7 you get 1 (as long as everything is defined as integers). If you use floating point numbers, you get a more precise result, but it’s subject to rounding errors. In fact, the rounding behavior is screwy enough that you almost never use floating point numbers to represent amounts of money.
I would imagine by this day and age there are conversion functions out there somewhere to give exactly what you described in the OP.
For the record, the generic term for the dot is radix point. Binary, hexadecimal, and decimal all have radix points. The one for binary is more specifically called a binary point, etc.
Not if you’ve played older games. Games from the Doom era often used “fixed point” math instead of floating point math. Fixed point uses some number of bits, then the point, then some other number of bits, similar to writing 423.211 in decimal.
Fixed point math isn’t handled directly by the CPU, but it can easily be translated into a short sequence of integer instructions, which are handled natively. Since integer instructions executed much more quickly than floating point operations on a 486-era processor, it made a lot of sense to use fixed point math for better performance.
Financial programs will sometimes use binary coded decimal numbers (BCD) which can also have decimal points. These also are not handled natively by the processor, so must go through a conversion into integer operations by the compiler.
Ah yes, the good old days when having a math coprocessor was required to do floating point calculations, which made using fixed point much more attractive.
Computers use a sort of scientific notation for floating point numbers, as specified by the Institute of Electrical and Electronics Engineers (these guys specify all kinds of standards for computers, which is part of the reason you can view this page with different OSes and browsers). A floating point number has one bit for the sign, a few bits for the exponent, and the rest for the fraction. The actual bit pattern is a bit funky due to the way computer math is performed, but it’s pretty straightforward. Linkety
To be perfectly clear, numbers work the same way in any base.
11.11 in binary is
2[sup]1[/sup]+2[sup]0[/sup]+2[sup]-1[/sup]+2[sup]-2[/sup] = 2+1+.5+.25=3.75 in decimal.
11.11 in hex is
16[sup]1[/sup]+16[sup]0[/sup]+16[sup]-1[/sup]+16[sup]-2[/sup] = 17.06640625.
11.11 in decimal is
10[sup]1[/sup]+10[sup]0[/sup]+10[sup]-1[/sup]+10[sup]-2[/sup] which, of course, is 11.11.
If you have a Unix system, and access to the bc utility, you can crank out binary and hexadecimal fractions until the cows come home. Here are the first several unit fractions expressed in hex, with parentheses showing digit patterns that endlessly repeat:
I program as a hobby, and am very familiar with hex and binary numbers, but I’ve never seen binary or hexadecimal point numbers. Now, with computer registers, you can’t have fractions, only whole numbers, but even with the FPU, it’s only in decimal point notation. I’ve never seen a program that uses fractions for anything other than decimal points. Even computer calculators, like the one built into Windows, will turn 1.5 DEC into 1 hex. That’s what got me curious as to whether they even existed.
Are you saying that FPU calculations are just as fast as CPU integer calculations? I still thought that, even to this day, integer calculations were faster because FPU commands take a few lines of code to execute even a simple command. I mean, it has to unpack the exponent and mantissas and then… Anyway, I still assume that integer calculations are much quicker.
Just because the Windows calculator can’t display floating point numbers in binary or hex, this does not mean that your computer can’t handle such numbers. On the contrary, all floating point numbers are converted to binary, just like any other number. The computer cannot work directly with decimal, it just converts back to decimal for the end result. Here’s an example:
Say you want to add 1/3 and 1/6. We know the real answer is 1/2, and it’s possible to write a program to calculate fractions the way a human would (I can post the source code for one I wrote if anyone’s interested), but normally you would convert to floating point and do the calculation that way. Using Bytegeist’s handy table, we have the following:
As I said before, IEEE specifies a standard for floating point numbers (well two actually - 32 and 64 bit) which is a form of scientific notation. Here’s the link again. So now we have this:
To store these values in 32 bit format, you set the sign bit to 0 for positive, add 127 to the exponent, and use the rest of the bits for the fractional part of the mantissa (the leading 1 is implicit). The stored numbers look like this:
Note that the last digit should be 0 on both numbers, but they are rounded because the next digit would be a 1 if it were shown. To add them, the exponents have to be the same, so the smaller number is adjusted to match the larger and we have this:
1/3 = 1.01010101010101010101011 * 2[sup]-2[/sup]
1/6 = 0.10101010101010101010101 * 2[sup]-2[/sup] (note that the least significant bit is lost)
The classic MacOS defined and supported at least two types of fixed-point binary numbers. Four actually, if you were to count the signed and unsigned variants separately. There was a “16.16” format (a 16-bit integer with a 16-bit fraction), and a “2.30” format (2-bit integer, 30-bit fraction).
The first format was used mostly to represent non-integral pixel coordinates. Normally, coordinates in the Mac’s drawing space were signed 16-bit integers. But if you were rendering text with fractional widths — meaning that the character positions might require sub-pixel resolution — then you would use “16.16” coordinates and the OS functions that supported them.
The second format supported numbers ranging from -2 to +2 (almost), with a unit resolution of about one one-billionth. Or 2[sup]-30[/sup] to be more precise. This format represented angles as portions of a circle, as well as the sines and cosines of those angles. The OS provided functions to compute the core trig functions in this format, and they were implemented using integer arithmetic entirely. However, I don’t think that this format, or the routines supporting it, got much use after Macs with built-in FPUs came out. The routines were quick enough for what they were, but still slower than the FPU’s own trig instructions.
Binary fixed-point numbers are a venerable solution to many numeric problems and are by no means obsolete, even with FPUs being so ubiquitous in today’s machines.
As Sturmhauke and others have mentioned, FPU registers are implemented in binary, not decimal. This is what gives the “flaky” numeric behavior that surprises so many beginning programmers — that 1 divided by 10 is not exactly 0.1, for example, but 0.100000001 instead.
The Windows calculator apparently limits hex/binary numbers to integers only, and it’s not alone in that. (The MacOS X “freebie” calculator has the same limitation.) Many calculator applications, if they provide a hex mode at all, do so just to assist programmers. And programmers are normally using hex to examine bit vectors, memory addresses, pixel states, and the like — values that are best interpreted as integers or bit sequences. This doesn’t mean though that hex and binary fractions, as well as fractions of other bases, aren’t perfectly well defined.
Just for a lark, here’s the approximate value of pi expressed in several bases, including hexadecimal and binary. Once again, I’m using the Unix bc program to do this.
Bytegeist, thanks for your pi exercise. I’ve calc’d in binary before, but it is strangely satisfying to see it in other bases. (A Geek knows pi to excessive decimal places. An Ubergeek knows it to excessive decimal places in binary.) Now the challenge is to memorize enough of it in hexadecimal.
What can I say – I like pi(e)! Pi(e), pi(e), pi(e)!