.NET isn’t interpreted. You’re right that what comes out of Visual Studio is MSIL, a pseudo-machine language for a virtual machine not unlike Java p-code. But unlike Java, the MSIL isn’t interpreted at runtime.
At load time the MSIL is compiled into target hardware instructions, x86 or whatever. And then that machine code is run, just like native assembler code. Part of why the .NET Framework is so heavy on every client (not just development stations) is that it includes the full-blown MSIL-to-machine compiler suite for each source language that produces a slightly different flavor of MSIL. (Why there’s more than one flavor is a mystery to me.)
I agree that compared to assembler or to C++ there’s a lot of run-time overhead incurred in .NET by things like using the CLR object model to store a collection when in simpler environments you would have just malloc’ed a wad of RAM and stored the found int32s in adjacent 4-byte spaces.
But to say your code (or the CLR’s System.etc.etc DLLs) is interpreted is just not accurate.
You’re quite right, and it was lazy of me to say it was. I only meant it was “comparable” to an interpreted language by virtue of having an expensive runtime sitting on top of it, and that calls which would be only a few instructions in C/C++ compiled straight to an architecture often turn into hundreds of instructions when run through .NET. Least, that’s what I’ve read.
You’re right though…comparing what the CLR does to what Applesoft did was incorrect, and sloppy.
The difference between CLR execution in .NET and a true interpreted language like Apple BASIC is immense. You’re comparing the efficiency of the languages as much as the speed of the machines. To get a better comparison, you need to rewrite that program in 6502 assembler.
Yeah, that’s very true. I would be quite interested to know how much faster assembler would be than the interpreter.
However, I don’t have an Apple ][ available (I actually gave my Apple //c to a buddy a few months ago) and I don’t know boo about 6502 assembler, so someone would have to do the legwork on that one.
Not having a Mac or knowing 6502 assembler myself either, I imagine we could get a rough approximation of the compiled vs. interpreted speed difference by writing the program in 8086 assembler and comparing the runtimes to the same program running in the old QBasic interpreter.
The good news is the 8086 didn’t have floating-point division in hardware either, so QBasic (and our assembler code) would have to rely on software to do the division. Just like the 6502.
The bad news is that since the program is so extremely division-heavy, any difference in the efficiency of the FDIV library routine could make a 2x to 5x speedup or slowdown in overall code execution. If one was particularly brute-force while the other was extra-elegant, we might get more like a 100x difference in that one area alone. That’s worst case & I doubt it’d turn out that way, but without disassembling both FDIV routines we’d never know the magnitude of the difference.
My gut, like Sam Stone said, is 50 to 200x overall.
The question amounts to how many machine instructions (times their cycle cost) the interpreter needs to execute to emulate what would otherwise be one machine instruction (times its cycle cost) in the compiled/assembled code. Considering register handling, IP maintenance, segment calculations for addresses, etc., even a simple instruction like MOV might take 20 or 30 interpreter instructions. The more complex machine instructions take even more instructions, and probably more expensive instructions to boot.
And if interpretation includes any looping to implement a single instruction (e.g. a symbol table search rather than a reference to a fixed address), then you’re really off the chart on the speed ratio.
Interpreters, particularly the early ones, were all about optimizing RAM consumption, not CPU cycles. Fast they were NOT.
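To make that fetch/decode/dispatch overhead concrete, here’s a minimal sketch of a toy interpreter for a hypothetical two-instruction machine. The opcodes, register count, and tuple format are all invented for illustration; the point is just how many host operations one emulated instruction costs:

```python
def run(program):
    """Execute a list of (opcode, dst, src) tuples on 4 emulated registers."""
    regs = [0, 0, 0, 0]   # emulated register file
    ip = 0                # emulated instruction pointer
    while ip < len(program):
        opcode, dst, src = program[ip]   # fetch + decode
        if opcode == "LOAD":             # dispatch: src is an immediate value
            regs[dst] = src
        elif opcode == "ADD":            # dispatch: src is a register index
            regs[dst] += regs[src]
        else:
            raise ValueError(f"unknown opcode {opcode}")
        ip += 1                          # maintain the emulated IP
    return regs

# One emulated ADD costs a tuple unpack, one or more string compares,
# two list-index operations, and an IP increment on the host machine.
print(run([("LOAD", 0, 40), ("LOAD", 1, 2), ("ADD", 0, 1)]))  # [42, 2, 0, 0]
```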
Good grief. I had no clue the difference would be that big.
So yeah, comparing the time it took my app to finish against the time it took his app to finish was meaningless, chip architectures aside.
The real thing we got out of this thread was using my code to count the number of times through the “division” part of the process, and using Bytegeist’s metrics of how long the Apple II took, to make a rough estimate of time to completion…whatever that was worth.
Regardless…As a personal experiment, I rewrote the algorithm to do 3 things to optimize it:
1) Skip every other number, ignoring evens completely.
2) Count up, storing all primes found so far (I stored them in an array for speed…a collection proved MUCH slower), and use only those primes in the modulo test. If a number isn’t divisible by 3, there’s no reason to check 6, 9, 12, etc.
3) Stop the inner loop when I am higher than the square root of x, as going beyond it is a waste.
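The three optimizations above can be sketched in a few lines. This is an illustrative reconstruction, not the poster’s actual .NET code (function name and structure are mine); it uses a plain list per the array-vs-collection note:

```python
import math

def primes_up_to(limit):
    """Trial division with the three optimizations described above:
    skip even candidates, divide only by previously found primes,
    and stop once the trial prime exceeds sqrt(n)."""
    if limit < 2:
        return []
    primes = [2]                      # the only even prime, seeded up front
    for n in range(3, limit + 1, 2):  # 1) skip every even number
        root = math.isqrt(n)
        is_prime = True
        for p in primes:              # 2) test only prime divisors
            if p > root:              # 3) stop past sqrt(n)
                break
            if n % p == 0:
                is_prime = False
                break
        if is_prime:
            primes.append(n)
    return primes

print(primes_up_to(30))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```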
This took the process from around 10 minutes on my box to less than 1 second on the exact same box.
It reduced the number of “modulo” operations from over 18.7 billion to just less than 13 million. QUITE a reduction.
Finding all the primes up to 1 billion took just over a minute.
I will post the code if anyone is interested.
Am I the only one who has really enjoyed this?
BTW, LSLGuy, my favorite line of the whole discussion was:
I’ve got both an Apple ][e and a working knowledge of 6502 assembly language, but my mathematics programming skills are weak. If anyone cares to help with the math routines, or can provide web pages with them, I’d be happy to work something up.
As I said earlier, this is so cool. I’m sure we’re treading in some waters that a prime number theorist covered a long time ago, but it’s interesting how what seems to be a simple problem at first has so many dimensions that can be tweaked. I thought it was especially clever to eliminate multiples of a number that had already failed. I can’t believe that reduced the number of calculations from 18.7 billion to just less than 13 million. Well done.
QED, you can look at The Apple Assembly Line for some pre-built routines, including some on prime sifting. This site has the basics of Assembly Programming, which is more than I know. That first list is all files stored in the archaic / deprecated / ancient BXY format. A few sites I checked seemed to think WinZip would know what to do with them, but I had no luck.
Thanks, Jurph. I don’t know what to do with BXY files, either, so it looks like that site is more trouble than it’s worth. I only need a couple of basic math routines to do division and/or check whether one number is divisible by another. I could probably work out a routine by myself, given enough time, but frankly I don’t want to put any more effort into reinventing the wheel than is necessary. Programming doesn’t blow up my skirt like it once did.
In a high-level language, Eratosthenes’ method is pretty simple to code, but it’s not going to be quite as graceful in assembly. I’ll think about it.
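For reference, here’s roughly what Eratosthenes’ method looks like in a high-level language; it’s a minimal sketch of the classic sieve, not anyone’s code from this thread:

```python
import math

def sieve(limit):
    """Sieve of Eratosthenes: cross off multiples of each prime,
    then keep whatever was never crossed off."""
    is_prime = [True] * (limit + 1)
    is_prime[0:2] = [False, False]            # 0 and 1 are not prime
    for p in range(2, math.isqrt(limit) + 1):
        if is_prime[p]:
            # start at p*p: smaller multiples were crossed off already
            for multiple in range(p * p, limit + 1, p):
                is_prime[multiple] = False
    return [n for n, flag in enumerate(is_prime) if flag]

print(sieve(30))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```

Note there’s no division or modulo anywhere, which is exactly why it’s attractive on a chip with no divide instruction; the awkward part in assembly is managing the big bit array.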
Oh, and Chronos–the input size of arithmetical algorithms is measured in the number of bits (generally denoted b), so since N = 2^b, an algorithm that’s quadratic in N would be denoted as O(2^(2b)).
Indeed. In case you’re not familiar with the 6502, it’s important to bear in mind that it has only two 8-bit index registers and one 8-bit accumulator to work with. It’s very kludgy compared to 80x86 assembly programming!
Upon further research, it looks like .BXY files are Apple Binaries :eek: which, provided you can get them onto an Apple-compatible disk (you still have a 5.25" drive for your PC, right?) should run just fine on your Apple. Really, that just makes it easier, because you’ll have your references right there on the same machine you’re coding on. If you don’t want to ALT-TAB back and forth between two Apple programs :rolleyes: then you can print the documentation and routines on your dot-matrix line printer. You still have some tractor-feed paper around… right?
If I want to remain faithful to the algorithm presented in the OP, then I should think so. At the very least, I’d want a modulo arithmetic routine. I’ve been half-heartedly poking around with Google, with little success. I found this page, which from a cursory glance appears to have what I need embedded in it, but it’s too much like work to strip it down to the bare essentials, which is what I want to do.
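Since the 6502 has no divide instruction, a modulo routine there is typically hand-coded as shift-and-subtract division. Here’s a sketch of that technique in high-level form so the bit-level logic is visible; the 16-bit width and the function name are my own choices, not anything from the linked page:

```python
def mod16(dividend, divisor):
    """Remainder of a 16-bit unsigned division by shift-and-subtract,
    the way one would hand-code modulo on a CPU with no divide
    instruction. Assumes 0 <= dividend < 2**16 and divisor >= 1."""
    remainder = 0
    for bit in range(15, -1, -1):
        # shift the next dividend bit into the running remainder
        remainder = (remainder << 1) | ((dividend >> bit) & 1)
        if remainder >= divisor:
            remainder -= divisor      # conditional subtract, one bit per loop
    return remainder

print(mod16(1000, 7))  # 6, since 1000 = 142 * 7 + 6
```

On the 6502 the same loop is a ROL/CMP/SBC sequence over the accumulator and a memory-resident dividend, sixteen times around.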
Also, bear in mind that I have to hand-assemble the program and type in the requisite machine code directly, since I have no assembler for this machine and no way to download one for it. If someone knows where I might find online the Applesoft BASIC source code for a simple 6502 assembler, I’d be grateful.