Just a quick note on this. I don’t want to go down a rat-hole over this business of slow (relatively speaking) interpreter performance; it was just a passing comment that I thought was self-evident. There are clearly a number of reasons interpreter performance may not matter. The most obvious is when the processor is fast enough, relative to the application’s performance requirements, that the computational overhead simply doesn’t matter, and with today’s fast processors that may often be the case.
Similarly, it’s a truism that if an inefficient interpreted language can offload all of its compute-intensive functions onto optimized library modules, leaving the interpreter itself with little to do, then interpreter performance clearly doesn’t matter. No doubt this can and does happen, but I don’t think it’s generally a very good argument. For one thing, it assumes that an interpreted language environment can, in fact, interface with such libraries, which is far from a given. It was considered rather innovative back in the day when some LISP implementations could run a mix of compiled and interpreted modules.
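For what it’s worth, a modern illustration of that offloading pattern (my own sketch, using only Python’s standard library, not anything from the original discussion): the two functions below compute the same result, but one loops in the bytecode interpreter while the other hands the whole loop to the C-implemented built-in `sum()`.

```python
# Sketch of the "offload to an optimized library" argument, using only
# the Python standard library.  Both functions compute the same sum;
# the difference is where the loop actually executes.
import timeit

def interpreted_sum(n):
    # Every iteration here is dispatched by the bytecode interpreter.
    total = 0
    for i in range(n):
        total += i
    return total

def library_sum(n):
    # The loop runs inside the C implementation of the built-in sum().
    return sum(range(n))

if __name__ == "__main__":
    n = 100_000
    assert interpreted_sum(n) == library_sum(n)
    t_interp = timeit.timeit(lambda: interpreted_sum(n), number=20)
    t_lib = timeit.timeit(lambda: library_sum(n), number=20)
    print(f"interpreted loop: {t_interp:.3f}s, built-in sum: {t_lib:.3f}s")
```

On a typical CPython build the built-in version runs several times faster, which is exactly the effect being claimed; the catch, as noted above, is that this only helps when the work your program does happens to fit the libraries you have.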
But more to the point, many languages are designed specifically to facilitate coding certain types of computations: FORTRAN for scientific and engineering calculations, for example, or LISP, originally a computational rendering of a mathematical notation similar to the lambda calculus that soon became a popular platform for AI applications. In such cases I’m doubtful that such targeted platforms could be reduced to mere shells executing calls to library routines; it seems to me that the very nature and purpose of the language would dictate that most of the code be written in the language itself, with libraries supplying only the most common and utilitarian functions. Things like the venerable old Scientific Subroutine Library (SSL) for FORTRAN (now called ESSL, the Engineering and Scientific Subroutine Library). Moreover, in keeping with the targeted purpose of FORTRAN that I mentioned, the SSL itself was, unsurprisingly, written in FORTRAN.
Again, I don’t want to make a big deal of this performance thing. The desktop computer in front of me is probably about a hundred times faster than I mostly really need, and it can run a simulation of a PDP-10 timesharing system many, many times faster than the original multi-million-dollar mainframe. I just wanted to clarify these points. And to be very clear, I fully appreciate that even with efficient compiled languages, it’s often advantageous to hand-code certain extremely compute-intensive functions as subroutines in highly optimized assembler, where knowing how to eliminate one extra instruction or shave a microsecond off the execution cycle of some critical loop can make a huge overall performance difference. In fact, it rather frustrates me that kids today tend to have little appreciation for the concepts of efficiency and optimization.