I’m compiling a midsized X11 app from source. My machine isn’t a laggard (1.67 GHz), but it’s still not a task I’d start a half hour before needing to shut down and head off somewhere.
So as I sit here idly watching the commands & feedback scrolling up the screen, I’m wondering how long this kind of thing took on the machines of yesteryear, like a '386 or an '030 — would it have taken an entire weekend to bake what I can do in a couple of hours nowadays? Of course, I tend to think open-source X11 apps probably haven’t bloated up at the rate that commercial apps have, but I suppose they’ve still gained in girth and complexity since the early 90s. Even so…?
And how long does it take to compile an entire OS? Or is development “frozen” for different files & libraries at such different times and stages that it’s a meaningless question because no one ever does it that way? Hmm, but what about open-source operating systems, then? Let’s say you have FreeBSD or Linux on an installation CD, but it only has binaries for the most common hardware platforms; for some of the oddball ones, like a MIPS processor, the source code is available and set up to be compiled, but you have to do it yourself.
A P4 at 1.6 GHz is a few orders of magnitude more studly than a 386, so chances are whatever is lagging your machine would crush a 386 like an insignificant little bug.
Remember, back in the day, Microsoft Word fit onto a 92 KB floppy disk. Things took about as long to compile back then, but the programs were a lot smaller. We’ve added huge amounts of complexity and flashiness to programs as the machines have gotten more and more powerful, so it all evens out as far as time is concerned.
The first time I compiled a Linux kernel it took five hours. The kernel source has gotten a lot bigger since then, but now it generally takes 20 or 30 minutes.
Not really relevant to today’s systems, but in the early 90s Project Oberon, designed by Niklaus Wirth, the man who designed the Pascal language, was both a programming language and a multi-tasking, event-driven operating system. On the modest machines of the day it was able to compile itself in less than a minute.
I just built the open-source instant messenger GAIM on my Windows PC, and it took maybe 15 minutes to half an hour or so. There were a couple of interruptions when it crapped out because of some tweaks I had to make to my build environment, so that’s just a really rough guess.
I routinely build SIP servers on a Solaris SPARC machine, and that takes just a few seconds.
It’s more like the latter. Development’s not generally frozen for a given library until it’s ready to be released to the general public, but if you depend on it, you probably have the option of choosing from a limited set of pre-built copies that that library’s QA team has marked as integration-ready. The chief advantage of this isn’t so much the build time (no one really cares about that) but the change control: you can always verify exactly what set of source code you’re using, and everyone uses the approved versions.
There are still builds that take some time. I’ve seen a few that run for 45 minutes or so, ranging from a set of DLLs and the libraries to link against them, up to a full application.
I work on a proprietary operating system used in industrial controllers. Back in the days when it was built on a VAX, it would take approximately 9 hours just to build the parts that ran inside the controllers. Building the entire product (including HMI interfaces and such) took an entire weekend. We would literally fire off a build on Friday night, then come in on Monday morning and pray that everything built OK. On a modern 3 GHz P4, with advances in both computer hardware and compiler design, I can build the controller code alone in about 5 minutes, and the entire product in less than an hour.
Linux back in the days of the 386 was a lot smaller than it is now. I recall downloading the entire Slackware distribution onto floppy disks (through a blazing-fast 2400 baud modem, too! Wheee!). But compilers were a lot slower too, not just because the computers were slower, but also because they didn’t know a lot of the tricks and algorithms that modern compilers use to really speed things up. Rebuilding the entire operating system in the 386 days wasn’t something you’d complete in a day.
Back in the days when computers were much slower, you wouldn’t recompile libraries unless you absolutely had to. You would just recompile the module that you were working on and re-link it into the code. But we still had to do complete builds every week to make sure that everything actually worked together when built from scratch. There were rare instances when you could build things locally with a minor change and make it work, then have it fail when rebuilding everything and re-linking from scratch. These days, if I make a change to a library, I just go ahead and rebuild the entire library. I never would have done this in the 386 days because it took too long.
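In case that routine is unfamiliar, here’s a minimal sketch of the recompile-and-relink step, with hypothetical file names and a generic Unix cc, nothing specific to any real product’s build:

    cc -c motor_ctrl.c -o motor_ctrl.o                    # recompile only the module that changed
    cc -o controller motor_ctrl.o scheduler.o io_bus.o    # relink against the objects already built

The other .o files never get recompiled; only the final link step sees them again.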
I also have an older laptop (Pentium 133) which builds an entire Linux system for an ARM processor. It runs on an embedded processor board, so it doesn’t have X Windows and such; in other words, it’s a pretty trimmed-down version of Linux. Rebuilding the entire system on my laptop takes about 8 hours. I imagine it would take less than an hour on a modern PC.
But Oberon, being a Wirth language, was Pascal-like and designed to be easy to compile in one pass. C and (especially) C++, the de facto applications languages these days, are rather more difficult to compile (especially if you intend to optimize).
As an extreme counterexample, Chuck Moore’s colorForth system, written entirely in his colorForth dialect of Forth, is largely uncompiled except for a kernel (written in x86 assembly language) that interprets the bulk of the system, which has been pre-parsed. Object-code is only created on an as-needed basis, and Moore claims it’s instantaneous. On modern systems, and given the simplicity of Forth, that is certainly plausible.
Hence the complexity of the make tool: All of that code to trace down dependencies and recursively move up the chain to the final product is founded on the assumption that it takes too much time to rebuild everything from scratch every time one or two files are changed.
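A minimal sketch of what make encodes, with hypothetical file names: every target lists the files it depends on, and make rebuilds only the targets whose dependencies are newer, so touching one .c file costs one compile plus the final link.

    app: main.o parser.o util.o
            cc -o app main.o parser.o util.o
    # each object depends on its own source plus a shared header
    main.o: main.c common.h
            cc -c main.c
    parser.o: parser.c common.h
            cc -c parser.c
    util.o: util.c common.h
            cc -c util.c
    # (recipe lines in a real Makefile are indented with a tab)

Edit parser.c and only parser.o and app get rebuilt; edit common.h and everything does, which is exactly the dependency tracing described above.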
And this isn’t so much a “yesterday” thing, either. Developers who constantly have to compile and relink don’t want to recompile entire packages. Yeah, lots of Linuxers just make and make install and are the heavy users quantitatively, but without the developers there, there’d be nothing to make!
Apple’s development tools use make internally, and they selectively recompile and relink – no surprise there. But then so does Delphi (Pascal). I’m about to find out in the next month or so whether J2EE IDEs are also selectively smart about recompiles. I have to imagine that any competent development system, not just gcc/make-based ones, only compiles what’s necessary and keeps the object files hanging around.
Finally… there’s nothing as fast as running an assembler over plain assembly language, but I don’t imagine that anyone uses it anymore. For anything. Nope, not even little embedded systems. Well, okay, someone uses it, but you never hear about fancy-shmancy assembly language IDEs…
Dunno; in my experience (which is not entirely relevant to the question, since I’m talking about custom apps I’ve written, sometimes quite large and complex ones, not open-source software), compiling has always taken about the same amount of time. Machines were slower way back when, but the demands we placed on them were more modest: the programs were less complex, and the volume of code and the number of linked libraries were smaller, even though they might be performing the same basic function.
The initial bit of code that kicks off probably any OS is going to be assembler, so at least the first hundred or so instructions will be assembly. There’s some more for timing, putting the system to sleep, and so on. Overall, though, the assembly probably amounts to under 1,000 individual instructions in a whole OS that might be something like 1,500,000 instructions.
I guess I’m someone. The proprietary operating system I work on (used for real-time control) has about twice as much assembly as C code in it. The code is built using Visual Studio as the IDE, with a lot of custom tools hacked into it.
Also, any time I use a microcontroller (like a PIC) I usually program it in assembly. Microchip has an excellent fancy-shmancy IDE for their PICs. Not only does it have a good assembler in it, but you can even execute your code in a built-in simulator without even programming an actual physical chip. It will even let you force inputs and monitor outputs in the simulator.
Mostly, though, if you are going to program in assembly, you don’t need a fancy-shmancy IDE. Us assembly language programmers are used to doing things the hard way. IDE? We don’t need no steenkin’ IDE.
I used to compile Wirth’s Pascal compiler on a Honeywell Multics system in 1980. I don’t remember how long it took, but not very long; no more than a few minutes. It was portable, as long as you were on a 60-bit machine.
The general answer (less memory available, so less source code, so reasonable compile times even on slower machines) matches what I remember too.
When I first got into Linux, the current kernel version was 0.99pl13 and I downloaded Slackware onto 32 or so 5.25" 1.2M floppies. I think my first Linux box was a 386-40 with 4M RAM, and it took hours to do a kernel compile. These days my fastest PC is an Athlon XP 3200+ and building the 2.6 kernel takes maybe 20 minutes or half an hour or so. It’s even faster on the multiprocessor machines here at work. Each processor may only be 700 MHz, but with 8 of 'em in one box, damn, that kernel builds fast.
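For the curious, the multiprocessor speedup is nothing exotic; it’s just make’s parallel-jobs flag. A typical 2.6-era sequence looks something like the following (the source path and the job count are only examples):

    cd /usr/src/linux
    make menuconfig                         # pick options for your hardware
    make -j8                                # run up to 8 compile jobs at once, one per CPU
    make modules_install && make install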
I was at a talk by a guy who is working on a distributed version of gcc, and his group was using kernels per second as a measure of the speed of their algorithms; that is, the number of Linux kernels it could compile in a second. I think the original version of the compiler managed something like 0.1 k/s and the current one was doing something like 30 k/s.
I program a lot in assembly, sometimes because I want to perform unusual tricks, and sometimes just to keep up my skills. It’s far from a dead language.