Binary and Hexadecimal numbers question.

I want to see pi expressed in Mayan glyphs, or cuneiform.

You can’t use normal methods for representing fractions in systems that don’t use positional notation, so cuneiform is right out.

You could draw a circle and write ‘this ratio’…

You’ve just shown you can. The better question is “Why is this so rarely seen?”, to which the answer is something like “binary is most often seen representing values in bytes, which traditionally held integers.”

The idea crops up in mathematics occasionally, and in even more exotic mixed-radix forms, where each digit position has a different base!
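In fact we all use one mixed-radix system every day: clock time, where the “digit” positions run in bases 60, 60, and 24. A quick C sketch (the example value is my own):

    #include <stdio.h>

    int main(void) {
        long t = 200000;            /* an arbitrary count of seconds */
        long s = t % 60; t /= 60;   /* base-60 digit */
        long m = t % 60; t /= 60;   /* base-60 digit */
        long h = t % 24; t /= 24;   /* base-24 digit; what's left over is days */
        printf("%ld days, %ld:%02ld:%02ld\n", t, h, m, s);  /* 2 days, 7:33:20 */
        return 0;
    }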

It gets worse. In floating point, for suitable numbers a, b, and c, a + (b + c) != (a + b) + c.
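A hand-picked C example (the values are mine; any pair where one operand swamps another will do). In IEEE single precision, 1e20f + 1.0f rounds straight back to 1e20f, so the grouping changes the answer:

    #include <stdio.h>

    int main(void) {
        float a = 1.0f, b = 1e20f, c = -1e20f;

        printf("a + (b + c) = %g\n", a + (b + c));  /* b and c cancel first: 1 */
        printf("(a + b) + c = %g\n", (a + b) + c);  /* a is rounded away: 0 */
        return 0;
    }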

Right, forgot about that. Those Sumerians didn’t make their clay tablets very Y2K compliant either, I bet.

Whoops, you’re right. My mistake :o

Back in the days of the 486, floating point operations took significantly longer to execute than integer instructions. These days the difference in speed isn’t as dramatic, but that doesn’t mean floating point instructions execute as quickly as integer instructions; integer instructions are still faster.

There is no “unpacking” required in floating point operations, just different circuitry to execute the instructions. The co-processor works on a stack model (you push data onto the floating point stack, perform the operation, and pop the result back off the stack), which has a bit more overhead than an integer instruction. Pentiums are able to optimize integer instructions more than floating point simply because integer operations are simpler.
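That push/operate/pop sequence is visible if you spell it out by hand. A minimal sketch using GCC inline assembly, assuming an x86 target (the variable names are mine):

    #include <stdio.h>

    int main(void) {
        float a = 1.5f, b = 2.25f, sum;

        __asm__ ("flds  %1\n\t"   /* push a onto the FP stack: st0 = a     */
                 "fadds %2\n\t"   /* operate on the top of stack: st0 += b */
                 "fstps %0"       /* pop st0 and store the result to sum   */
                 : "=m" (sum)
                 : "m" (a), "m" (b));

        printf("%f\n", sum);      /* 3.750000 */
        return 0;
    }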

Oh, I see.

I’ve seen code for doing floating point math and figured that the FPU operated similarly. Partly because I’ve heard in the past that FPU instructions are more like code than simple commands, and also because they’re a little more complicated.

I mean, with integer 1 + integer 1 you just add the two together to get 2.
But with float 1 - IEEE 00111111 10000000 00000000 00000000 (0x3F800000) - you can’t just go 1 + 1,
which would give you 01111111 00000000 00000000 00000000 (0x7F000000)
instead of 2, which is 01000000 00000000 00000000 00000000 (0x40000000).
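You can watch exactly that go wrong in C by adding the bit patterns as integers (a sketch of my own; memcpy is the portable way to reinterpret the bits):

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    int main(void) {
        float one = 1.0f, two = one + one;
        uint32_t one_bits, two_bits;
        memcpy(&one_bits, &one, sizeof one_bits);   /* 0x3F800000 */
        memcpy(&two_bits, &two, sizeof two_bits);   /* 0x40000000 */

        uint32_t naive = one_bits + one_bits;       /* integer add of the patterns */
        float wrong;
        memcpy(&wrong, &naive, sizeof wrong);

        printf("1.0f pattern      : 0x%08X\n", one_bits);
        printf("pattern + pattern : 0x%08X = %g (not 2!)\n", naive, wrong);
        printf("real 1.0f + 1.0f  : 0x%08X = %g\n", two_bits, two);
        return 0;
    }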

Anyway, I’m rambling, but thanks for clearing some of that up for me :cool:

Some folks have asserted that decimal numbers aren’t handled directly by CPUs. Not always true.

The Intel IA-32 (i.e., 386 through the current Pentium) processors have a packed-decimal format that stores numbers in RAM as decimal digits; the FPU load/store instructions (FBLD and FBSTP) convert them to and from floating point for actual computation.

Mainframes, in particular the long-serving IBM 360/370/390 architecture, have instructions for performing native arithmetic on decimal-coded integers, in addition to the more common binary integer and binary floating-point formats.

Finally, many data formats use fixed-point non-integers. That is to say, the math is done using integer circuitry but the results are interpreted as numbers with a fixed number of bits/digits to the left and right of the radix point. The common example is currency, where you can think of $1.25 as either one-and-one-quarter dollars or as 125 pennies. Do all the math in integer numbers of pennies and then shift the radix point 2 places for display.

That method gives you integer speed and avoids roundoff errors, while preserving the user’s expectation of 2 decimal places.
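Here’s what that looks like in C (a minimal sketch; the prices are invented):

    #include <stdio.h>

    int main(void) {
        long item_cents  = 125;                /* $1.25 stored as integer pennies */
        long total_cents = item_cents * 3;     /* exact integer math: 375 */

        /* Shift the radix point two places only at display time. */
        printf("$%ld.%02ld\n", total_cents / 100, total_cents % 100);  /* $3.75 */
        return 0;
    }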

Binary Coded Decimal

Each decimal digit is represented by a 4-bit string, or nibble. The bit patterns that would normally represent the hex digits a-f are not considered valid in BCD. It’s not like normal binary, but it’s not exactly like decimal as a human would use it either. Plus it’s inefficient memory-wise, because you usually need more bits than regular binary to represent the same number.
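A rough C sketch of packing digits into nibbles (the helper name to_packed_bcd is my own, not a standard routine) shows the memory cost:

    #include <stdio.h>
    #include <stdint.h>

    /* Pack one decimal digit into each 4-bit nibble, low digit first. */
    static uint32_t to_packed_bcd(unsigned n) {
        uint32_t bcd = 0;
        for (int shift = 0; n != 0; shift += 4) {
            bcd |= (uint32_t)(n % 10) << shift;  /* digit -> next nibble */
            n /= 10;
        }
        return bcd;
    }

    int main(void) {
        printf("BCD of 1995    : 0x%X\n", to_packed_bcd(1995));  /* 0x1995, 13 bits */
        printf("binary of 1995 : 0x%X\n", 1995);                 /* 0x7CB, 11 bits */
        return 0;
    }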