I’m watching an interview of Bill Gates and Steve Jobs together at some conference. Jobs is talking about the first Apple computer, and he says Wozniak wrote a version of BASIC “that’s perfect in every way except it’s fixed point, it’s not floating point”. What does this mean?
I googled floating point: “In computing, floating point describes a system for representing numbers that would be too large or too small to be represented as integers.”
That doesn’t clear up much. I understand what the definition means as far as “numbers that would be too large or too small to be represented as integers” goes, but not how that relates to writing code. (I know nothing about writing code.)
Well, numbers in a computer are all represented as sequences of ones and zeroes (because everything in a computer is represented like that). Now, you can make more complicated data structures that have variable size, but your basic data type, the one you would use to represent most numbers, is going to be of a fixed size.
For example, let’s say a number is represented by a sequence of 32 units that can have the value 0 or 1 (binary units, or bits).
Since each bit can have two different values, and there are 32 of them, that means there are 2[sup]32[/sup] (or 4 294 967 296) different combinations, which means that this data type can only represent that many different values. This means two things: you have to pick minimum and maximum values that the type can represent, and you have to decide how to distribute the values in the interval between those two extremes.
In a fixed point representation, the values are all spread out evenly. There are as many different values between 0 and 1 as there are between 1 000 000 and 1 000 001. The name comes from there being a fixed number of digits after the decimal point, which is the same for every number: if you can represent 1000.334 but not 1000.3344, then you can also represent 0.456 but not 0.7896.
In a floating point representation, the values are not evenly spread: they are more densely packed closer to zero. So you can represent 0.0003457678923523, but perhaps not 345675326856.34765. That way you can increase the precision where it matters more, at the cost of lower precision where it matters less. The name comes from the decimal point “floating around” among the digits.
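If you want to see that uneven spacing for yourself, here’s a tiny C program (just my own illustration, nothing to do with Apple or BASIC) that prints the gap between one representable float and the next, near 1 and near a billion:
[code]
#include <stdio.h>
#include <math.h>

int main(void) {
    float small = 1.0f;
    float large = 1000000000.0f;   /* one billion */

    /* nextafterf() gives the next representable float above its first argument */
    printf("gap above 1.0   : %g\n", nextafterf(small, INFINITY) - small);
    printf("gap above 1.0e9 : %g\n", nextafterf(large, INFINITY) - large);
    return 0;
}
[/code]
On a typical machine the first gap is about 0.00000012 and the second is 64 – exactly the “more precision where the values are small, less where they are big” trade-off.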
Fixed point just means that the decimal point cannot move around. Technically, an integer is fixed point (with zero digits after the point).
Because of this, there is a way to do decimal math with circuits that are only designed for integers.
You probably learned a little fixed point in school without realizing it. For instance, if you multiply a number with a single digit after the decimal by one with two places, your answer is going to have at most 3 decimal places.
It works on the whole number side of the decimal too.
So the logic is laid out and predictable. In fixed point calculations the decimal points are moved so that they both match (or you just get rid of them), and you remember the total number of places. Do the math integer style and place a decimal point back in based on what you took out.
So it’s just an integer multiplication, a subtraction and an addition. Very fast for even weak computers.
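To make that concrete, here’s a rough sketch in C (the scale and the numbers are just made up for illustration):
[code]
#include <stdio.h>

#define SCALE 1000   /* three digits after the decimal point */

int main(void) {
    long a = 1500;   /* 1.5  stored as 1500 */
    long b = 2250;   /* 2.25 stored as 2250 */

    /* multiply integer style, then divide one SCALE back out so the
       result is at the same scale as the inputs */
    long product = (a * b) / SCALE;   /* 3375, i.e. 3.375 */

    printf("%ld.%03ld\n", product / SCALE, product % SCALE);   /* prints 3.375 */
    return 0;
}
[/code]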
Fixed versus Floating point (in coding terms) is both a performance and precision issue.
First, remember that native CPU math operations are in binary, not decimal notation. However, the principles are similar.
Floating point numbers (9.999999999x10[sup]99[/sup]) can represent very large and very small numbers, but increasing precision requires additional storage and considerably more computing power. In the early days of computing (before CPUs had integrated floating point units) this was particularly costly.
Fixed point numbers (e.g. 9999.99) have a fixed precision and storage size. A good example of fixed point math is many financial calculations - cents are the lowest resolution, so all calculations can be integer CPU operations with two digits allocated to cents. This is faster, but can be limiting when numbers get big or small.
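A minimal sketch of that cents idea in C (the amounts are made up):
[code]
#include <stdio.h>

int main(void) {
    long price_cents = 1999;   /* $19.99 */
    long tax_cents   = 160;    /* $1.60  */
    long total       = price_cents + tax_cents;

    /* every intermediate value is an exact integer number of cents;
       nothing gets rounded until you format it for display */
    printf("total: $%ld.%02ld\n", total / 100, total % 100);   /* total: $21.59 */
    return 0;
}
[/code]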
I wrote a 3D graphics library in Pascal many years ago. I used fixed point formats so I didn’t have to do floating point math operations. In that case, I used binary, and had two signed binary formats, 3.12 and 11.4. These gave me flexibility and numeric range, and I could shift between them for additional precision when I needed to. I used fixed point lookup tables for trig operations (sin and cos), and sometimes had to use multipliers to get things right. But it was faster than the floating point math library by a long shot.
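For anyone curious what that kind of format looks like, here’s a small C sketch of the same general idea - not my old Pascal code, just an illustration of a signed 3.12-style format:
[code]
#include <stdio.h>
#include <stdint.h>

/* "3.12" here means: sign bit + 3 integer bits + 12 fraction bits in a
   16-bit word, so 1.0 is stored as 1 << 12 = 4096 */
typedef int16_t q3_12;
#define FRAC_BITS 12

static q3_12 q_mul(q3_12 a, q3_12 b) {
    /* widen before multiplying so the intermediate doesn't overflow,
       then shift the extra fraction bits back out */
    return (q3_12)(((int32_t)a * (int32_t)b) >> FRAC_BITS);
}

int main(void) {
    q3_12 half           = 1 << (FRAC_BITS - 1);   /* 0.5  -> 2048 */
    q3_12 three_quarters = 3 << (FRAC_BITS - 2);   /* 0.75 -> 3072 */
    q3_12 result         = q_mul(half, three_quarters);

    printf("0.5 * 0.75 = %g\n", result / 4096.0);  /* prints 0.375 */
    return 0;
}
[/code]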
These days there are a range of floating point formats, and most consumer CPUs have an inbuilt FPU (Floating Point Unit). High performance graphics cards are a special case of parallel FPUs optimised for vector and matrix operations.
On preview, beaten by FuzzyOgre and Sofis - ah well
So what did Jobs mean by “that’s perfect in every way except it’s fixed point, it’s not floating point”?
Was he saying that Wozniak could have written better code or was he making a point that processors at the time simply couldn’t do what he wanted them to?
It wasn’t until the 386/486 era that desktop computers had dedicated floating point processors in them. Those are a lot faster. You can do floating point on a regular processor, but it’s much slower than integer math.
Basic at the time was slow to begin with, so it made sense that Wozniak would settle on fixed point.
The problem with fixed point is that it cannot be as accurate for small differences. You might be able to write 0.111, but not 0.1109 for instance.
Actually, Intel’s dedicated floating point processors for desktop computers began in the 8086 era and ended with the 486 era. The 8087 FPU was for the 8086 CPU, the 80287 was for the 80286 (I had one of these and it was a huge help), and the 80387 for the 80386.
Fixed point operations have the number represented, internally, as a simple decimal - 0.5000000 for instance.
Floating point units in CPUs have the mantissa and exponent - so this would be represented as 1.0 x 2^-1. The exponent gets stored separately. The floating point unit processes arithmetic operations on registers in the mantissa/exponent format. Thus, floating point instructions are separate from fixed point ones.
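You can see that split from C with frexpf(), which pulls a float apart into a fraction and a power of two (this is just an illustration of the idea, not what the FPU literally executes):
[code]
#include <stdio.h>
#include <math.h>

int main(void) {
    int exp;
    float m = frexpf(0.5f, &exp);   /* frexpf returns a mantissa in [0.5, 1) */

    printf("frexp form      : %g x 2^%d\n", m, exp);          /* 0.5 x 2^0 */
    printf("normalised form : %g x 2^%d\n", m * 2, exp - 1);  /* 1 x 2^-1  */
    return 0;
}
[/code]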
While si_blakely is basically correct, most arithmetic would be done in double precision, and floating point requires the same amount of space. The mantissa is not a fraction, and so this is usually plenty of space. There is a loss of precision in both cases, of course.
BTW, von Neumann opposed floating point, on the grounds that if you didn’t know where the binary point was, you shouldn’t be doing the calculation.
On a computer, a standard number, an integer, is represented in binary as a sum of powers of 2. A fixed point (signed) number is stored as an integer, with one bit used to determine whether the number is positive or negative, plus a hard coded scaling factor, which is a fixed value - either a power of 10 or a power of 2.
e.g.)
On a system with a scaling factor of 10^4 (10000),
-1000.1234
would be stored as
negative bit = 1
number = 10001234
or, as a 32 bit word using 2’s complement:
11111111 01100111 01100100 10101110
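You can check that bit pattern with a few lines of C (just a sanity check, nothing more):
[code]
#include <stdio.h>
#include <stdint.h>

int main(void) {
    int32_t fixed = -10001234;        /* -1000.1234 scaled by 10^4 */
    uint32_t bits = (uint32_t)fixed;  /* same 32 bits, viewed as a raw pattern */

    for (int i = 31; i >= 0; i--) {
        putchar(((bits >> i) & 1) ? '1' : '0');
        if (i % 8 == 0 && i != 0) putchar(' ');
    }
    putchar('\n');   /* 11111111 01100111 01100100 10101110 */
    return 0;
}
[/code]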
A floating point number is stored as one integer containing the significant digits and one exponent (the X in 10^X), plus a bit for the sign.
e.g.)
-1000.1234 = -1 * 1.0001234 * 10^3
would be stored as
negative bit = 1
significant digits = 10001234
e = 3
32 bit (IEEE 754, which does the same thing in base 2) = 1 10001000 11110100000011111100110
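And the floating point pattern can be checked the same way - this little C snippet pulls the sign, exponent and mantissa fields out of -1000.1234 stored as a 32-bit float:
[code]
#include <stdio.h>
#include <string.h>
#include <stdint.h>

int main(void) {
    float f = -1000.1234f;
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);   /* reinterpret the float's bytes */

    unsigned sign     = bits >> 31;            /* 1 bit            */
    unsigned exponent = (bits >> 23) & 0xFF;   /* 8 bits, bias 127 */
    unsigned mantissa = bits & 0x7FFFFF;       /* 23 bits          */

    printf("sign     = %u\n", sign);                                      /* 1        */
    printf("exponent = %u (i.e. 2^%d)\n", exponent, (int)exponent - 127); /* 136, 2^9 */
    printf("mantissa = 0x%06X\n", mantissa);   /* 0x7A07E6 = 11110100000011111100110 */
    return 0;
}
[/code]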
Both floating point & fixed point have issues. Fixed point has precision issues, where decimal values are rounded, and overflow issues, where the product of two numbers can be “out of range”. Floating point can handle larger and smaller values more easily, but also has issues with rounding.
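Both issues are easy to demonstrate in a few lines of C (the particular values are just examples I picked):
[code]
#include <stdio.h>
#include <stdint.h>

int main(void) {
    /* floating point rounding: 0.1 and 0.2 have no exact binary form */
    double a = 0.1, b = 0.2;
    printf("0.1 + 0.2 == 0.3 ?  %s\n", (a + b == 0.3) ? "yes" : "no");   /* no */

    /* fixed point overflow: two values at a scale of 1000 whose raw
       product no longer fits in a 32-bit integer */
    int32_t x = 70000 * 1000;            /* 70000.000 */
    int32_t y = 70000 * 1000;
    int64_t product = (int64_t)x * y;    /* about 4.9e15 */
    printf("raw product fits in 32 bits? %s\n",
           product > INT32_MAX ? "no" : "yes");                          /* no */
    return 0;
}
[/code]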
That depends on how you implement your fixed-point system. For instance, in a financial application, you might store all dollar amounts as an integer number of cents (or mills), and thereby be able to deal with hundredths (or thousandths) of a dollar without ever having to worry about rounding issues. Almost all of your numbers would have a nonterminating binary representation, but you’re not actually using a pure binary representation, so it wouldn’t matter.
Both fixed and floating point have precision issues. When I was in CS grad school we had an entire course on numerical methods where we learned how to determine the error bounds for different sorts of computations.
The BASIC interpreter that Wozniak wrote for the original Apple II was called Integer BASIC, and it did lack support for floating-point math. The only numbers it knew about were 16-bit integers. This had nothing to do with the 6502 CPU inside the Apple II, or with Wozniak’s programming abilities, and more to do with the rush on the Apple II’s development time and with the significant expense, in 1976-77, of enlarging the ROM to accommodate a more capable language.
Shortly thereafter though, Apple co-developed and licensed Applesoft BASIC from Microsoft, which did support floating-point math. (It also added commands for high-res graphics, which the hardware itself had always supported but Integer BASIC was ignorant of.) On the original Apple II, Applesoft had to be loaded and enabled in order to be available. On all the later models, starting in 1979, Applesoft was part of the on-board ROM.
So, you can always have floating-point math, if you’ve got the time and the memory for it anyway. It’s just a matter of writing the code to implement it, assuming the CPU doesn’t provide it. Having floating-point operations in the CPU is always faster, of course, if you have that luxury. And it was kind of a luxury, at least up until the 1990s.
About storing floating point values in base 2: the most significant bit is thrown away. This sounds wrong, but note that the most significant bit of a normalised value is always 1 (if it weren’t, you would shift the mantissa and adjust the exponent until it was). Anything that uses the stored value knows to add a 1 bit on the “left” end.
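A quick way to convince yourself of that: 0.5 is 1.0 x 2^-1, so its 23 stored mantissa bits should all be zero. In C (again, just a sanity check):
[code]
#include <stdio.h>
#include <string.h>
#include <stdint.h>

int main(void) {
    float f = 0.5f;        /* 1.0 x 2^-1 */
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);

    printf("exponent field = %u\n", (unsigned)((bits >> 23) & 0xFF)); /* 126 = -1 + 127 */
    printf("mantissa field = %u\n", (unsigned)(bits & 0x7FFFFF));     /* 0: the leading 1 is implied */
    return 0;
}
[/code]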
So that explains the two different types of program disks we had in grade school. One type, you could copy any program on it to another disk and just use it, while with the other you had to boot with an original disk and then swap. The Applesoft BASIC must have been on those disks.
To test that hypothesis, do you remember if Applesoft programs were saved with a format of I, as opposed to the B(inary) or A(scii) modes of regular BASIC programs?
The Apple //c was my first computer, and I’ve always wondered about the idiosyncrasies I learned about it.
IIRC, the PC game Doom used fixed point arithmetic since the 486 processor that was common at the time had much better integer performance than floating point performance (fixed point math ends up reducing to integer CPU instructions). Later games like Quake used floating point math, which gave better rendering results but required a much higher horsepower floating point processor. Most (if not all) modern 3D games use floating point arithmetic in their rendering engine these days.
You can always take the tiny fractions of a cent left over from the division and put them into a special account. No one will ever miss them, and soon, profit!
By the way, have you seen my stapler? They switched from the Swingline to the Boston stapler, but I kept my Swingline stapler because it didn’t bind up as much, and I kept the staples for the Swingline stapler and it’s not okay because if they take my stapler then I’ll set the building on fire…
That certainly could be what you’re remembering. There’s another possibility though, just going by your brief description. (And, having no knowledge of when you were in grade school.)
The first Apple II disk drives, as well as the DOS operating system that went with them, used 13-sector tracks. Within a year though this was increased to 16 sectors, after improvements made both to DOS and the disk controller card. This made all the earlier 13-sector disks obsolete — but they could still be used, if a little inconveniently, on a 16-sector system. You had to either re-boot into a special 13-sector mode (which then closed you off to 16-sector disks, for the time being), or alternatively you could use a provided utility to copy files from the old disks to new ones.
So these 13/16-sector shenanigans could also be what you’re remembering. Especially since an Apple II with sufficient RAM can be loaded with whichever variety of BASIC it doesn’t hold natively in ROM, and then you can switch back and forth between the two at will, without any disk fiddling.
Appropriately enough, “FP” is the command for entering Applesoft BASIC when both BASICs are loaded up — “FP” meaning “floating-point”. And thus have I brought my otherwise blathering digression back to the thread topic, wrapped neatly in a bow.
Right, the one-letter file type codes you’re thinking of were: A for Applesoft BASIC programs (not A for ASCII), I for Integer BASIC, B for binary files, and T for text files. There were also types R and S defined, but hardly ever seen in the wild.
There were certainly plenty of idiosyncrasies in the Apple II line, with more and more accumulating as the series marched on.