Decimal #s can have a decimal point, so can binary #s have a binary point? Such as:

0.1, 1.1, 10.1, 11.1

I think this can be done. In fact, I want to say that I am sure. But several HS teachers, and other smart contacts since, have all said NO.

Of course they can. 0.001, for example, equates to 1 x 2[sup]-3[/sup] = 1/2[sup]3[/sup] = 1/8

Sure, binary numbers can have a binary point. 0.1 would be 1 * 2[sup]-1[/sup] or 1/2.

0.01 would be 0*2[sup]0[/sup] + 0*2[sup]-1[/sup] + 1*2[sup]-2[/sup] or 1/4.

1/5 = .00110011… (the 0011 pattern repeats forever)

and .111111111111111111111111111… = 1
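You can check expansions like the one for 1/5 by binary long division, which amounts to repeated doubling. A minimal Python sketch using exact fractions (the function name is mine, not anything standard):

```python
from fractions import Fraction

def binary_fraction_bits(x, n):
    """Return the first n bits after the binary point of 0 <= x < 1,
    found by repeated doubling (binary long division)."""
    bits = []
    for _ in range(n):
        x *= 2
        bit = int(x)          # 1 if doubling carried past 1, else 0
        bits.append(str(bit))
        x -= bit              # keep only the fractional part
    return "".join(bits)

print(binary_fraction_bits(Fraction(1, 5), 8))   # prints 00110011
```

Using `Fraction` keeps the arithmetic exact, so the repeating 0011 pattern shows up cleanly.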

In fact, each next place to the right is 1/2 of the previous place, just as in decimal each place is 1/10 of the one to its left. In decimal, .125 evaluates as 1 * 1/10 + 2 * 1/100 + 5 * 1/1000. In binary, .1101 evaluates as 1 * 1/2 + 1 * 1/4 + 0 * 1/8 + 1 * 1/16.
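That place-value rule translates directly into a small evaluator. A Python sketch (the function name is my own):

```python
def binary_to_float(s):
    """Evaluate a binary numeral with a binary point, e.g. '11.1' -> 3.5."""
    whole, _, frac = s.partition(".")
    value = int(whole, 2) if whole else 0
    for i, bit in enumerate(frac, start=1):
        value += int(bit) * 2 ** -i   # each place is half the previous one
    return value

print(binary_to_float(".1101"))   # 1/2 + 1/4 + 1/16 = 0.8125
print(binary_to_float("11.1"))    # 3.5
```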

You are certainly correct. Given the reservation you express, I’m not actually convinced that you’re sure.

There are two types of binary data values commonly used in computers, especially in 3-D games. The first is called floating point, which is the binary equivalent of scientific notation, something like 3.2 x 10[sup]3[/sup].

Here’s a site that explains the more common type of float used in a computer:

http://www.psc.edu/general/software/packages/ieee/ieee.html

There’s another type called fixed point. Fixed point arithmetic is commonly used in older 3-D games because the floating point performance of the processors at the time wasn’t so great, and integers didn’t have enough precision. Fixed point arithmetic is generally not supported natively by the CPU, unlike integer and floating point. However, fixed point math basically boils down to integer operations, which then get processed by the integer portion of the CPU. Integer math is much simpler than floating point math, so the integer performance of a cpu is usually much better than the floating point performance.
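If you want to see an IEEE 754 bit pattern concretely, Python's standard `struct` module can expose it. A minimal sketch (the helper name is mine), shown for single precision:

```python
import struct

def float_bits(x):
    """Return the IEEE 754 single-precision bit pattern of x as 32 chars."""
    (n,) = struct.unpack(">I", struct.pack(">f", x))
    return format(n, "032b")

bits = float_bits(3.5)                  # 3.5 = binary 11.1 = 1.11 x 2^1
sign, exponent, mantissa = bits[0], bits[1:9], bits[9:]
print(sign, exponent, mantissa)         # 0 10000000 11000000000000000000000
```

The exponent field holds 1 + 127 = 128 (the format stores exponents with a bias of 127), and the mantissa holds the bits after the implicit leading 1.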

A fixed point number in binary is the equivalent of a decimal number with the point in a fixed position: for example, 3 digits before the point and 3 digits after. In that scheme, the number 3 would be 003.000 (stored by the computer as 3 and 0), and the number 1/4 would be 000.250 (stored as 0 and 250).
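As a sketch of how fixed-point math boils down to integer operations, here is the common "Q16.16" layout (16 bits before the binary point, 16 after); the layout choice and function names are illustrative, not from the linked articles:

```python
FRAC_BITS = 16
ONE = 1 << FRAC_BITS            # the integer that represents 1.0

def to_fixed(x):
    """Encode a real number as a Q16.16 integer."""
    return int(round(x * ONE))

def fixed_mul(a, b):
    """Multiply two Q16.16 values using only integer operations."""
    # the raw product has 32 fractional bits; shift back down to 16
    return (a * b) >> FRAC_BITS

def to_float(f):
    """Decode a Q16.16 integer back to a real number."""
    return f / ONE

a = to_fixed(0.25)              # stored as the integer 16384
b = to_fixed(3.0)               # stored as the integer 196608
print(to_float(fixed_mul(a, b)))   # 0.75
```

Everything in the middle is plain integer multiplication and shifting, which is why this was fast on CPUs with weak floating-point units.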

This site has more details:

http://www.embedded.com/98/9804fe2.htm

This site relates fixed point arithmetic to 3-d graphics. It seems to think a 486 is a relatively fast processor, which kind of indicates how old the article is.

http://www.gameprogrammer.com/4-fixed.html

The only difference between the computer versions of floating point and fixed point arithmetic and their paper-and-pencil equivalents is that the computer simply truncates (or rounds) and loses precision when it runs out of places to hold the bits.
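You can see that loss of precision in any language with binary floating point. For instance, 1/10 has no terminating binary expansion (its denominator is not a power of two), so a double can only store an approximation; in Python:

```python
from fractions import Fraction

# 1/10 has no finite binary expansion, so the nearest double is stored instead:
print(0.1 + 0.2 == 0.3)    # False: each operand carries its own tiny error
print(Fraction(0.1))       # the exact rational value the computer stored
```

`Fraction(0.1)` prints a fraction whose denominator is a power of two, close to but not equal to 1/10.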

This is one of those lovely times when you get to tell your HS teachers they are wrong.

Well, you were right. All those people I talked with over the years said NO, but now I have the Straight Dope.

In base r, the point that separates the negative and non-negative powers is referred to as the radix point.

And for the uber-geek :D, pi in binary is (approximately)

11.0010010000111111011010101…
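You can reproduce those digits from Python's `math.pi` with the same repeated-doubling trick (a double's 52-bit mantissa comfortably covers the 25 bits shown; the function name is mine):

```python
import math

def frac_bits(x, n):
    """Return the first n bits after the binary point of x."""
    frac, bits = x - int(x), []
    for _ in range(n):
        frac *= 2
        bit = int(frac)
        bits.append(str(bit))
        frac -= bit
    return "".join(bits)

print("11." + frac_bits(math.pi, 25))   # 11.0010010000111111011010101
```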

(Binary has binary points for as long as you can keep dividing by two.)