I’m sure some hapless soul wasn’t forced to input every mathematical calculation for every function a calculator could perform. How does my calculator know that 29 divided by 7 is 4.142897 without even thinking?
It does think. Just really, really, really fast. You know how to do long division? Well, essentially a calculator has a bunch of steps it follows to divide 29 by 7, probably fairly similar to long division.
There are standard algorithms for virtually every mathematical operation. Where these algorithms aren’t appropriate (due to complexity, etc.), interpolation is used based on a lookup table.
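As a hedged illustration of that lookup-table-plus-interpolation idea, here’s a toy Python sketch for sine (the table size and step are made up for the example; real calculators use much cleverer schemes):

```python
import math

# Precompute sin(x) at evenly spaced points covering 0 .. ~pi/2.
TABLE_STEP = 0.01
SIN_TABLE = [math.sin(i * TABLE_STEP) for i in range(158)]

def sin_lookup(x):
    """Approximate sin(x) by linear interpolation between the
    two nearest table entries (valid for 0 <= x < 1.57)."""
    i = int(x / TABLE_STEP)
    frac = x / TABLE_STEP - i  # how far x sits between entry i and i+1
    return SIN_TABLE[i] + frac * (SIN_TABLE[i + 1] - SIN_TABLE[i])
```

With a step this small, the interpolated value agrees with the true sine to several decimal places, which is plenty for a display with eight digits.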
This is the way calculators work.
If you were talking about a program you’re writing on a computer, someone might choose to calculate each math function or use a table.
I think I had to build one of these when I was in college. I just remember using a shit load of gates strung together. (binary type of thing)
Sorry if I’ve nothing to directly add to the OP… but did anyone else who viewed the post jump over to howstuffworks.com to post a link to their article? Did I spell calculator wrong? Weird. I think this was the first time they let me down. Great site anyway, and useful for just about every other How Stuff Works question.
Note, the explanation below is going to be overly simplified, and just shows how one could build a calculator if you wished. Most calculators will be significantly more complex, but the basic premise should be the same.
All calculators are based on a digital microchip inside. This chip is often referred to as a microprocessor. As mentioned earlier, every single mathematical function can be expressed as an algorithm, and commonly each algorithm is based on some simple iterative binary process. Usually this iterative binary process is addition. For example, multiplication can be expressed as adding the same factor over and over again… to get 4 times 5, one could do 4 + 4 = 8 + 4 = 12 + 4 = 16 + 4 = 20 + 4 = 24. To do subtraction, you can simply add the negative… 10 minus 4 = 10 + (-4) = 6.
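Here’s a toy Python version of that repeated-addition idea (just a sketch of the scheme, not how a real chip does it, and with the arithmetic coming out to 20 as it should):

```python
def multiply(a, b):
    """Multiply by repeated addition, the iterative scheme described above."""
    total = 0
    for _ in range(abs(b)):
        total += a
    return total if b >= 0 else -total

def subtract(a, b):
    """Subtraction as adding the negative."""
    return a + (-b)
```

So `multiply(4, 5)` walks through 4, 8, 12, 16, 20, and `subtract(10, 4)` is just 10 + (-4) = 6.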
So you’re probably going to ask, how can the calculator do addition then? I’m sure you’ve heard of computers using something called binary. Binary is another numbering system. Usually we use decimal, which is based on the number 10. Every number in decimal form can be expressed as a sum of multiples of powers of ten… for instance 453 = 4·10^2 + 5·10^1 + 3·10^0. Binary is simply the same thing except expressed in powers of 2… for instance 1011 = 1·2^3 + 0·2^2 + 1·2^1 + 1·2^0 = 11. Let’s call each digit in a binary number a bit. All calculators take a decimal number you input, convert it to binary, perform the requested operation on it in binary, convert it back to decimal, and display it for you.
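Here’s a quick, purely illustrative Python sketch of that conversion in both directions:

```python
def to_binary(n):
    """Convert a nonnegative integer to a list of bits.
    Repeatedly divide by 2; the remainders are the bits,
    least significant first, so reverse at the end."""
    bits = []
    while n > 0:
        bits.append(n % 2)
        n //= 2
    return bits[::-1] or [0]

def to_decimal(bits):
    """Sum each bit times its power of two,
    e.g. 1011 -> 1*8 + 0*4 + 1*2 + 1*1 = 11."""
    value = 0
    for bit in bits:
        value = value * 2 + bit
    return value
```

So `to_binary(11)` gives `[1, 0, 1, 1]` and `to_decimal([1, 0, 1, 1])` gives 11 back.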
So I haven’t explained how the operations actually happen, just that they happen in binary. We are able to take binary bits (a 1 or a 0) and compare them to each other through pieces of hardware called gates. Gates are made up of transistors, and unless you want me to go into those, assume that we just have these at our disposal. The most common gates perform the AND, OR, and NOT functions… For explanation purposes, think of a 1 as a TRUE value and a 0 as a FALSE value. An AND gate will take two values, and if the first value is true AND the second value is true, the output will be TRUE, otherwise FALSE. An OR gate will take two values, and if the first value is true OR the second value is true, the output will be TRUE. The NOT gate simply takes one value and returns the opposite: TRUE returns FALSE, FALSE returns TRUE. Through these simple gates, one can do addition. If you didn’t follow that, please refer to this web page for a better explanation of gates:
http://www.play-hookey.com/digital/basic_gates.html
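For anyone who’d rather see it as code, the three gates boil down to something like this toy Python sketch (1 = TRUE, 0 = FALSE):

```python
def AND(a, b):
    """TRUE only when both inputs are TRUE."""
    return 1 if a == 1 and b == 1 else 0

def OR(a, b):
    """TRUE when either input is TRUE."""
    return 1 if a == 1 or b == 1 else 0

def NOT(a):
    """TRUE becomes FALSE and vice versa."""
    return 0 if a == 1 else 1
```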
So, using these gates, we’re able to tie them together in a creative fashion that will implement the addition. The following web page explains it much better than I could, so please check it out:
http://www.play-hookey.com/digital/adder.html
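If it helps, here’s a little Python model of the same idea: build XOR out of AND/OR/NOT, wire a one-bit full adder from that, then chain full adders into a ripple-carry adder. (Real hardware does this with transistors, of course; this is just the logic, as a sketch.)

```python
def AND(a, b): return a & b
def OR(a, b): return a | b
def NOT(a): return 1 - a

def XOR(a, b):
    # XOR from the basic gates: (a OR b) AND NOT (a AND b)
    return AND(OR(a, b), NOT(AND(a, b)))

def full_adder(a, b, carry_in):
    """Add one bit from each number plus the incoming carry.
    Returns (sum_bit, carry_out)."""
    s = XOR(XOR(a, b), carry_in)
    carry_out = OR(AND(a, b), AND(carry_in, XOR(a, b)))
    return s, carry_out

def add(a_bits, b_bits):
    """Ripple-carry adder over equal-length bit lists,
    most significant bit first, carry rippling up from the right."""
    carry = 0
    out = []
    for a, b in zip(reversed(a_bits), reversed(b_bits)):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    out.append(carry)  # final carry becomes the top bit
    return out[::-1]
```

For example, `add([0, 1, 0, 1], [0, 0, 1, 1])` adds 5 and 3 in binary and yields `[0, 1, 0, 0, 0]`, i.e. 8.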
So if you followed that at all, you might have an idea how it works. It isn’t complicated; I’m just probably not the right person to try to explain it.
P.S. I really do know that 4 * 5 = 20, not 24. :smack:
OK, that’s simple calculators explained, but what about the more expensive versions? How does a calculator do symbolic integration, for instance? Does it build an AST in software or does it use hardware for that, too? Furthermore, if these more expensive calculators are doing the integration/differentiation in software, are they still doing the basic stuff in hardware, or is it all in software?
The only thing connecting all calculators (and every other piece of digital electronics) out there is the transistor. As I stated, transistors can be combined to make gates, but don’t necessarily have to be. They can be put together in other fashions to perform the same functionality more quickly through other designs. The gate level is where most digital engineers start learning.
The advanced calculators’ functions are almost certainly written in software and executed on the calculator by the microcontroller. The software is then “downloaded” into the memory of some chip and called firmware. The microcontroller reads the memory and executes the code just as a computer reads the code of your operating system and provides you the interface. The basic premise of the microcontroller is to take the very complex algorithms that would be uneconomical to build as hardware (due to size and design time) and reduce that complexity into the simple functions the microcontroller can handle. Without experience building a microcontroller or writing the software, it is hard to see where a function should be implemented in hardware or in software. So, hopefully, as you can see, the calculators made with software are indeed really only doing all the basic hardware functions, just in particular ways that yield the advanced stuff.
I’m sorry I can’t tell you how it’ll do some of the higher functionality out there as I haven’t had experience with it. All I can say is that as designers add more and more complexity to the calculator, they are building upon all the technology below it. The software designer doesn’t care how the software gets executed on the microcontroller, just as the microcontroller designer doesn’t care how you build a transistor, and the transistor designer doesn’t care about where the materials come from, etc…
Kevin
As a minor nitpick, 29/7 = 4.142857…, not 4.142897… And no, I didn’t get out a calculator to check that, it’s just that the sevenths follow a very nice pattern.
There are a lot of advanced “calculators” on the market, some made by HP, some by Casio, some by TI. I’ve only had to work with a few, but here’s my take on it.
The Texas Instruments TI-89, the calculator I have had the pleasure of working with, runs a 12 MHz Motorola 68000 32-bit CPU with, I believe, about a megabyte of RAM and about 3 megabytes of flash ROM. The operating system it runs (which includes all the symbolic math engines and graphing) is a binary file of about 2 megabytes. As far as I know it has no special hardware to accelerate any mathematical operation.
The TI-89 is not really a “calculator” in the classical sense, although it is marketed as such. It’s more powerful than early PCs and can do pretty much all of the same stuff (although typing text is a pain). It runs plenty of third-party software, including text processing, graphics editing, engineering, games, communications, encryption, etc. I’d call it a mobile special-purpose personal computer rather than a “calculator”.
It is not true that calculators were always implemented on microprocessors or microcontrollers. In fact, the very first microprocessor was invented when a calculator company asked Intel to design a special purpose calculator chip, and they implemented a general purpose computer instead.
Today processor chips have special hardware units to do arithmetic. They are typically divided into integer arithmetic units and Floating Point Units (FPUs) to do fractions, etc. They have subunits to do multiplication and division, and usually an Arithmetic Logic Unit (ALU) to do logical operations, adds, and shifts. High-performance chips will have multiple arithmetic units, since multiplication takes a long time, but I doubt calculator chips do.
In the good old days some arithmetic was done by little microprograms, and the IBM 1620 had a lookup table for multiplication (or was it addition?) inside memory.
Fancy-schmancy math is done in software. Packages doing this have been available for a while; now that memory is cheap enough for these to fit inside a calculator, they’ve migrated into inexpensive calculators. Ditto the cool graphing functions most high schoolers learn.
I could probably find out the specific chips that go in them. I would bet they are custom, since cost is a big issue and volumes are very large. The same chip would be reused for lots of price points with different amounts of memory and software. Power consumption is a big deal also.
BTW I received a book to review that teaches computer arithmetic by programming a calculator. It’s at home, but I can post a reference if anyone cares.
I should note that the Motorola 68000 series of CPUs are general-purpose chips used in a few desktop computers, including the original Apple Macintosh.
If we’re talking chips with FPUs in them, like the Pentium, then calculation methods get really interesting. Take division. It doesn’t work like the longhand division algorithm we learned in school.
In the Pentium FPU, division starts with a lookup table, and then the answer is made gradually more accurate by an iterative method that takes a guess, does the corresponding multiplication, and then tests how far to go in which direction to get closer. I think I remember that they used Newton’s method. This is embarrassing, but I think some of the methods actually require division themselves, so they are obviously not a way to do this, and I am confused on this point. But in any case it’s interesting that they use iterative methods rather than the algorithms we use.
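For what it’s worth, I believe the standard trick resolves that apparent circularity: Newton’s method is applied to the reciprocal, so the iteration x ← x·(2 − d·x) converges to 1/d using only multiplication and subtraction, and then a/d is just a·(1/d). Here’s a rough Python sketch; the linear seed constants and the [0.5, 1) normalization range are my assumptions, standing in for the hardware lookup table:

```python
def reciprocal(d, iterations=6):
    """Refine 1/d by Newton's method using only multiply and subtract:
    x_{n+1} = x_n * (2 - d * x_n).
    Assumes d has been scaled into [0.5, 1), which hardware gets
    essentially for free from the floating-point exponent."""
    x = 48.0 / 17.0 - 32.0 / 17.0 * d  # linear seed; a real FPU reads a table
    for _ in range(iterations):
        x = x * (2.0 - d * x)  # error roughly squares every pass
    return x

def divide(a, d):
    """a / d via the reciprocal; d must already be scaled into [0.5, 1)."""
    return a * reciprocal(d)
```

For example, 29/7 can be computed as (29/8) / (7/8), since 7/8 = 0.875 lands in [0.5, 1); a handful of iterations already matches double precision.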
Intel has a great deal of literature about what happens inside the FPU. Also, it’s not hard to program it in assembly: it has an 8-register revolving stack, and the registers hold 80-bit IEEE floats. If you keep your numbers inside the FPU for all the steps of your calculation, you get better than the 64-bit IEEE double-precision accuracy you get if you bring the numbers back out into main memory (though there is a somewhat clunky way of dealing with the 80-bit numbers outside the FPU).
29/7 is about 4.142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857143, and it is a nice pattern.
It’s a debatable point even among people with lots of experience building CPUs, as we saw in the CISC vs. RISC debate in the 1980s/early 1990s.
(End result? Either RISC won by becoming CISC on the outside, or CISC won by becoming RISC on the inside.)
I was all set to correct you, seeing as how I learned it as Newton-Raphson, but then I found this cite which has this to say:
Sometime back (ack, about 13 years now) I had need to do decimal division with greater precision than allowed by long doubles, so I implemented a division function in C with 38 digits of precision using Newton-Raphson. Good times.
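This isn’t that C code, but the same idea can be sketched in a few lines of Python with the decimal module: seed with the hardware’s roughly 16-digit reciprocal, then let each Newton pass roughly double the number of correct digits.

```python
from decimal import Decimal, getcontext

getcontext().prec = 50  # 38 digits wanted, plus some guard digits

def nr_divide(a, b, iterations=6):
    """Divide a by b via Newton-Raphson on the reciprocal of b.
    The loop uses only multiplication and subtraction; the seed is
    the ordinary machine-precision reciprocal from the FPU."""
    a, b = Decimal(a), Decimal(b)
    x = Decimal(1.0 / float(b))  # ~16 correct digits to start
    for _ in range(iterations):
        x = x * (2 - b * x)  # correct digits roughly double per pass
    return a * x
```

Starting from ~16 digits, two passes already clear 50, so six iterations are overkill; `nr_divide(29, 7)` agrees with `Decimal(29) / Decimal(7)` to the working precision.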
My first post was eaten; I just wanted to point out that computers and calculators need not be electronic at all. The digital ones would be the same at the logic level, though, so it’s essentially the same (except that an adder is really big instead of a few microns in size).
As for symbolic integration, I’m not entirely knowledgeable on it, but it generally involves an algorithm that simplifies the formula into a consistent standard form, which can then be run through another algorithm in a straightforward way. It’s not uncommon in both symbolic and numeric methods to, say, find a polynomial form of the formula.
I’m not really familiar with how programs do symbolic integration, but if I had to guess, I’d say that the program knows the basic rules of integration (i.e., it’s a linear operator, powers behave a certain way, integration by parts, the chain rule, etc.) and that it has lookups for some of the more exotic functions. How they get it to run fast is another story entirely.
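As a toy illustration of that “knows the basic rules” guess, here’s a hedged Python sketch that integrates polynomials symbolically using nothing but linearity and the power rule (the exponent-to-coefficient dict representation is made up for the example; real CAS engines handle vastly more):

```python
from fractions import Fraction

def integrate_poly(poly):
    """Symbolically integrate a polynomial given as {exponent: coefficient},
    applying linearity plus the power rule: c*x^n -> (c/(n+1))*x^(n+1).
    The constant of integration is omitted."""
    return {n + 1: Fraction(c, n + 1) for n, c in poly.items()}

def pretty(poly):
    """Render the dict form as a readable sum of terms."""
    return " + ".join(f"{c}*x^{n}" for n, c in sorted(poly.items()))
```

So integrating 3x^2 + 1 (that is, `{2: 3, 0: 1}`) yields x^3 + x, and exact rational coefficients avoid any floating-point fuzz.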