How Much Faster Is My Computer Than An Apple ][

In 1981 (I think) my father bought an Apple ][, and like everyone else at the time, I learned a little Basic. One of the little programs I wrote was to find all the prime numbers between 1 and 1,000,000. It ran for a few hours without having found one prime number—it was counting down—until someone had to use the computer for something more productive. Anyway, I don’t remember how to write in Basic or have a programming environment, but I was wondering how long the same task would take on today’s computers. Here’s a Basic-ish version of what I remember programming:

10 X=1000000
20 Y=2
30 Z=X/Y
40 If Z is a whole number then goto 90
50 Y=Y+1
60 If Y>X/2 then goto 80
70 Goto 30
80 Print X
90 X=X-1
100 Goto 20

How long would it take to run this on today’s computers?*
*I realize that the performance of “today’s computers” varies greatly, but I don’t want to limit the responses to just owners of 867 MHz PowerBook G4s.

867 MHz is about the rate that Edsger Dijkstra is spinning in his grave right about now…

References I can find rate the 1981-era Apple computers as running at about 2 MHz. The Intel Itanium processor runs at about 1 GHz.

There are far too many variables to say for sure how long your algorithm would take to run now, but I’d say about 500,000 times faster, give or take a zero or two.

Er, 1 GHz / 2 MHz = 500, no?

Hey, it’s morning here on a weekend. At least I included a fudge factor. :smiley:

Of course, speaking as someone with a computer science degree, I suppose I’m supposed to throw in some hand-waving hemming and hawing about the common use of dual processors and the fact that newer processors are 64-bit whereas the Apple ][, while not a two-bit computer at the time, was not a significant multiple of that number.

And of course, remember that even processors with the same clock speed and bitness can have widely varying performance. For example, a Sempron 3100+ running at 1.8 GHz will easily beat a 1.8 GHz Pentium 4 in any application that I know of. Both are 32-bit processors.

Performance equals clock speed x average IPC (instructions per clock cycle), which varies depending on the application and processor design. IIRC, it wasn’t until the original Pentium era that consumer processors could average more than one IPC.
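
Just to put rough numbers on that formula (purely illustrative; the IPC figures in this sketch are assumptions, not measurements):


#include <stdio.h>

/* Back-of-the-envelope: performance ~ clock speed x average IPC.
   The IPC values below are rough guesses for the sake of illustration. */
int main(void)
{
    double apple_mhz = 1.0,  apple_ipc = 0.3;  /* 6502 instructions take 2-7 cycles */
    double p4_mhz = 1800.0,  p4_ipc = 1.0;     /* assumed average for code like this */

    printf("rough speed ratio: %.0fx\n",
           (p4_mhz * p4_ipc) / (apple_mhz * apple_ipc));
    return 0;
}
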

I actually paid my subscription just to answer this :stuck_out_tongue:

Anyway, I rewrote your algorithm in Visual Basic .NET. I’m not sure how comparable our algorithms are, but both are “interpreted” in a manner of speaking (.NET’s CLR environment is interpreted in a sense), so I think there’s a good correlation between the two.

I imagine that if I rewrote it as a true C++ console application, the speed would be much greater.

Anyway, here’s my .Net algorithm. (Note: I know it’s not great code… but I wrote it by looking at the OP’s code (heh, as opposed to opcodes) and it gets the job done. I’m always very self-conscious about others looking at my code anyway.)

So here’s what I ran:



Private Sub btnPrimeNums_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles btnPrimeNums.Click
    Dim dtStart As DateTime = Now
    Dim tsSpan As TimeSpan

    Dim x, y, z As Integer
    Dim blnFoundNonPrime As Boolean

    Dim col As New Collection

    For x = 1000000 To 3 Step -1
        blnFoundNonPrime = False
        For y = 2 To (x / 2) Step 1
            If (x Mod y) = 0 Then
                'we found a "non-prime" number, so exit
                blnFoundNonPrime = True
                Exit For
            End If
        Next
        If Not blnFoundNonPrime Then
            col.Add(x)
        End If
    Next

    tsSpan = Now.Subtract(dtStart)
    MsgBox(tsSpan)

    Dim strResults As String
    Dim n As Integer
    For Each n In col
        strResults += CStr(n) + vbCrLf
    Next

    txtResults.Text = strResults

End Sub


Note that there are some major differences. For example, in your code, when you found a number, you just printed it to the screen. Highly efficient and quick. When I found one, I added it to VB.Net’s Collection class, which is a bit more expensive.

At the end I pop up a MsgBox telling me how much time had passed. Then I dumped the collection to a string, and displayed that string in a TextBox.

This is, of course, happening on an Intel machine (a very different architecture from your Apple’s 6502), and not a particularly powerful one, running the .NET Framework on top of Windows.

I realize this is comparing Apples (heh) and Oranges in a manner of speaking, but it will show a difference in performance between older technology and newer technology.

Mine finished in the time it took to write this post, whereas after several hours, yours hadn’t hit one prime number. (FTR, the first prime number it would hit is 999,983… so it hadn’t even made it through 18 numbers in the outer loop.)

I ran this code on a WinXP Pro machine with a 2.0 GHz Pentium 4-Mobile and 512 MB of RAM. Note that since this is a multi-tasking OS, my computer was doing other things while the process was going on, whereas your computer was dedicated completely to finding the prime numbers.

Results:
Number of Prime Numbers found: 78,497
Time Span: 10 Minutes, 6 seconds

What this tells us, I haven’t a clue. But stuff’s faster today than it was then, that’s for sure :wink:

Steve

According to this page the Apple II ran at about 1,000 floating point operations per second.

Sorry to quote myself. That won’t work; that was an earlier revision of the code (before I realized that TimeSpan doesn’t convert to a String as it should). I rewrote the line to say:



MsgBox(CStr(tsSpan.Minutes) + ":" + CStr(tsSpan.Seconds))


I coulda used string formatting, but I don’t use TimeSpans very often and didn’t want to look up how to format them.

Anyway, just wanted to clear that up if anyone was going to run this themselves.

Steve

As has been mentioned, to a first-order approximation you can go by clock speeds. (By the way, the 1981 Apple IIs ran at 1 MHz, not 2.) That comparison is crude of course, since the processor architectures are so different. I might add another factor of 10 to 100 or so to compensate, roughly, for everything that’s been added over the years: larger registers, more registers, more addressing modes, more instructions (notably multiply and divide, which the 6502 lacked), pipelining, memory caches, and so on.

However, I happen to own an Apple II and had nothing better to do this afternoon than to run your program as a benchmark and make some timing measurements. My other machine is a 400 MHz G3 Mac. Not the most modern machine either, but within the ballpark anyway.

I made a couple changes to the program though: (1) As written your program does not terminate, so I added a check to exit when X falls below 2; and (2) I reduced the starting value of X to 1000 to make the running time on the Apple II reasonable. (I don’t have all day for this horsing around.) Understand also that the Applesoft BASIC interpreter adds a lot of overhead, as I’m sure you remember. Understand also that Applesoft rather dumbly does all arithmetic in floating point, and that it has no modulo function, which would come in handy in lines 30-40.

So this algorithm runs a lot slower, even on the Apple II, than it theoretically needs to. A fairer comparison would be to rewrite this program in 6502 assembly language and run that version instead. But I’m a little rusty and, as I say, I don’t have all day for this.

On the Mac I don’t have any version of BASIC, so I rewrote the program in C and ran that instead. This program ran so fast that I had to embed it inside a loop of 10000 repetitions to make the running time credible. Otherwise it was less than one process time-slice.
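
Roughly this sort of thing, for the curious (a sketch of what such a C version might look like, not the exact code behind the numbers below):


#include <stdio.h>
#include <time.h>

/* Sketch of the C translation: find the primes from 1000 down to 2 by
   trial division up to n/2, repeated 10000 times so the run lasts long
   enough to time.  Not the exact program used for the figures below. */
int main(void)
{
    const int reps = 10000;
    int primes_found = 0;

    clock_t start = clock();
    for (int r = 0; r < reps; r++) {
        primes_found = 0;
        for (int x = 1000; x >= 2; x--) {
            int is_prime = 1;
            for (int y = 2; y <= x / 2; y++) {
                if (x % y == 0) {   /* divisor found: not prime */
                    is_prime = 0;
                    break;
                }
            }
            if (is_prime)
                primes_found++;
        }
    }
    double elapsed = (double)(clock() - start) / CLOCKS_PER_SEC;

    printf("%d primes found per pass\n", primes_found);
    printf("%.6f seconds per pass (%.3f s total over %d passes)\n",
           elapsed / reps, elapsed, reps);
    return 0;
}
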

Final results, for what they’re worth (maybe not much): The Applesoft program on the Apple II ran in just over 12 minutes. The C version of the program, compiled and run on a 400 MHz PPC, ran in 0.005 seconds, after correcting for the number of repetitions. This gives a speed ratio of about 150,000 between the two “platforms”. Avoiding Applesoft by using 6502 machine language instead might reduce this ratio by a factor of 10 or 20 or so.

I realize this was a very crude experiment. But it was fun anyway, and gives a rough idea of how much things have changed.

Here you go: Complete 6502 Instruction Set :slight_smile:

Thanks. That’s so cool. I tried doing the math, but after filling a page with numbers, realized it’s a good thing I work in a right-side-of-the-brain field. Extrapolating Big-Ole-Steve’s and Bytegeist’s numbers, how long would it take to get to the first prime and through the entire program?

How long on an Apple II, I assume you mean. :slight_smile:

For each candidate prime, your program tests all divisors from 2 up to half the candidate’s value. (This is excessive of course. You really only need to test divisors up to the square root of the candidate. Also, you could skip even-numbered candidates entirely for another big time saver; there’s a quick sketch of both shortcuts at the end of this post. But anyway.) Going downward from 1,000,000, as your original program does, the first prime it encounters is 999,983 — still pretty close to a million. Thus the first prime won’t be printed until about half a million divisions have been tested.

In Applesoft BASIC that’s quite a slog of computation. In fact, just like you, I didn’t have the patience to let the program run that long when I tried it again just now. However, when the program had reached X = 999,983, I aborted it after 7 minutes of waiting. At that time, Y had reached a value of 23511 … meaning that the program was plowing through about 3359 division tests per minute … meaning it would take about 149 minutes (2.5 hours) to do the full half million of them.

So that’s the answer to your first question. Estimating the total run time of the program is slightly trickier. Your program is Order N[sup]2[/sup]; that is, its duration is proportional to the square of the starting number. If you double the starting number, you quadruple the run time. If you multiply the starting number by 10, you multiply the run time by 100. And so on.

With that in mind, I estimate your program’s total run time to be 12 minutes times 1000[sup]2[/sup] — or 12 million minutes — or about 23 years.

You’ll pardon me if I don’t test this empirically.
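
For the curious, here’s a rough C sketch of the trial division with both of the shortcuts mentioned above (the square-root cutoff and skipping even candidates). Illustrative only; neither of our programs actually did this:


#include <stdio.h>

/* Trial division with the two shortcuts mentioned above:
   only odd candidates are tested (2 is counted as a special case),
   and divisors only go up to the square root of the candidate. */
int main(void)
{
    long divisions = 0;
    int count = 1;                    /* the prime 2, handled up front */

    for (int x = 3; x <= 1000000; x += 2) {       /* odd candidates only */
        int is_prime = 1;
        for (int y = 3; y * y <= x; y += 2) {     /* y*y <= x means y <= sqrt(x) */
            divisions++;
            if (x % y == 0) {
                is_prime = 0;
                break;
            }
        }
        if (is_prime)
            count++;
    }

    printf("%d primes found with %ld trial divisions\n", count, divisions);
    return 0;
}
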

In other words, if he ran the program when the Apple was brand new, it would just be finishing up now… :eek:

Now, we’ve compared an older model personal computer with a newer model personal computer. I wonder how long this task would take with a person computer. That is, how long would an actual human armed with a pencil and paper take to compute all the prime numbers between 1 and 1,000,000? It should be a given that the human has an understanding of what a prime number is and how to detect one.

I’m not actually sure that’s correct. Remember, we’re dealing with prime numbers here, which get progressively thinner as you move up the number line, and the program will test most non-primes much more quickly than it will primes (for instance, it’ll only take one division to rule out a multiple of 2 (which the code should have skipped in the first place), two divisions to rule out a multiple of 3, and 4 divisions to rule out a multiple of 5). Based on that, I would expect the asymptotic run-time to be more like N[sup]2[/sup]/log(N).

Then again, I wouldn’t expect an asymptotic form for something involving the Prime Number Theorem to be too close until up around Skewes’ number, which seems to be in the vicinity of 10[sup]176[/sup]. So the run-time for this program, for numbers as small as a paltry few million, could be wildly different. Certainly, we can take O(N[sup]2[/sup]) as an upper bound, but I’m not sure how much more we can say.

Finally, just to confirm: When you ran this program way back when, did you do any sort of checks to make sure that your program was correct? Say, running it with a smaller starting value, or starting from the bottom? Running for hours without a single hit sounds suspicious to me, especially since most of the numbers you were testing would have gone quickly (I haven’t yet found the factors for 999,997, but all of the others have a factor less than 20).

I’m sure I ran a test starting with a smaller number—probably 25 or something. However, since this was 23 years ago, I can’t remember how long I let the program run.

Perhaps someone could modify Big-Ole-Steve’s code to count the total number of division tests required to find all the prime numbers between 1 and 1,000,000. Then divide the result by 3359 (Bytegeist’s estimate of the number of division tests the Apple II does in one minute) to get a good idea of how long my original program would have taken to run.

Also, if Big-Ole-Steve’s code could count how many division tests are required before reaching the first prime number (999,983), we could determine the maximum amount of time I let the program run 23 years ago.

Thanks everyone.

When you compare the Apple II to a modern computer, don’t forget memory latency and bandwidth. The 1 MHz 6502 processor used in the Apple II did not need cache memory. The memory was fast enough that not only could it keep up with the memory accesses of the 6502, it could simultaneously support memory-mapped text and graphics video without slowing down the 6502. Compare that to a modern processor with two or three levels of cache between the CPU and main memory, all in an effort to reduce the harmful aspects of the gross speed mismatch between the CPU and main memory. Increases in memory speed have not come close to keeping pace with increases in processor speed. It’s a bit like putting the tires from a Ford Pinto on a top-fuel dragster.

This is the best thread I’ve ever read.
I love you guys…

<sniff>

Done and done. (Heh, God I love this place.)

The first prime number required 500,804 “divisions” (Technically I used Modulo, which apparently wasn’t available to you at the time)
The entire process required, get this, 18,792,204,052 “divisions”

18.7 BILLION times through the “inner loop.”

So, going on Bytegeist’s estimate of 3359 “divisions” per minute, that means it would take 2 and a half hours to hit the first prime. You almost made it!!!

The entire thing would take about 3,885 days, or roughly 10.6 years.

(Someone feel free to check my math. I usually forget a divide by 60 when I do day/time math)
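
If anyone wants to re-run the count, here’s a rough C sketch of the same instrumentation (not the VB.NET code above, and the exact totals depend on how that code’s x/2 loop bound gets rounded, so expect numbers close to, but not necessarily identical to, the ones quoted here):


#include <stdio.h>

/* Counts the trial divisions the original algorithm performs:
   for each x from 1,000,000 down to 3, test divisors 2..x/2.
   Reports the divisions needed to reach the first prime (999,983)
   and the grand total. */
int main(void)
{
    long long total_divisions = 0;
    long long to_first_prime = 0;
    int primes_found = 0;

    for (int x = 1000000; x >= 3; x--) {
        int is_prime = 1;
        for (int y = 2; y <= x / 2; y++) {
            total_divisions++;
            if (x % y == 0) {
                is_prime = 0;
                break;
            }
        }
        if (is_prime) {
            primes_found++;
            if (primes_found == 1)
                to_first_prime = total_divisions;
        }
    }

    printf("divisions to first prime: %lld\n", to_first_prime);
    printf("total divisions:          %lld\n", total_divisions);
    printf("primes found:             %d\n", primes_found);
    return 0;
}
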

Steve