When I first started using the Macintosh, it was a very different computer than the PC of that era: I had a mouse and they not only had none, I’m not sure they even had a place to plug one yet; the PC floppy was 5.25" to the Mac’s 3.5". If I recall correctly, the interaction between my floppy drive and the CPU was through a proprietary chip on the board called the “SWIM” and the “W” actually stood for “Woz” as in Wozniak. I don’t know what kind of controller apparatus appeared on the PC motherboard when PCs became mousified, but I bet it wasn’t the SWIM chip.
Expandable Macs took NuBus cards, whereas PCs’ cards went into ISA slots. Nowadays they both use the PCI bus instead, but I gather that was a nontrivial difference.
Fast-forward to the era where Apple is rolling out the Intel Mac, and it turns out that the only remaining under-the-hood architectural diff making a Mac less than fully Windows-compatible is the EFI versus BIOS issue (which was solved first with a hack and then officially with Boot Camp).
That would not have been the case with, say, a Mac IIci. If Apple had switched from the 030 to the 386 as their engine, a IIci would not have been anywhere close to Windows-ready, isn’t that correct? Because even with an Intel processor, the rest of the Mac architecture was so different from the PC architecture. Nor would a Motorola-CPU’d Gateway have been able to boot System 7, even with a ROM.
I am interested in the history of when the various “deep differences” in the two architectures disappeared. By that I mean “things that would have had to have been reconciled in order for the other OS to boot at all”, not things on the level of “well the Mac’s floppy drive did not have an eject button” or “the Mac only had one mouse button”. Unless there is no such meaningful distinction and things like NuBus versus ISA and it’s all just a matter of “well they’d have to write a driver for that device”.
Tell me what you know and/or supply links if you know of an architectural history site that discusses such things.
That’s a long, complex question. Consider, though, that as long as the software (drivers for the architecture, etc.) is there, there’s nothing inherently incompatible. Look at the wide variety of architectures that Linux will run on. Heck, there’s probably a Yellow Dog Linux out there that will run on the old 68030. Remember that yesterday’s PPCs had 680x0 emulators for the processor, and today’s Intel Macs have PPC emulators for the processor. AmigaOS has been run on different architectures, as has BeOS and probably countless others. Heck, Parallels emulates all of the hardware on a computer.
Of course, if you take an off-the-shelf package of software – say, Windows 3.11 – it certainly won’t run on a Mac with an Intel chip on a NuBus architecture, because, as you noted, it’s missing the drivers to be able to do so. Well, DOS is missing the drivers, too.
I’m not a Windows maven, but I believe that Windows 3.x (and 95?) was still dependent on functions of the PC BIOS firmware — even while running, not just at boot time. Which of course means that your new firmware — and note you’ll need completely new firmware anyway, since the Mac ROM was 680x0 code, not i386 — will need a copy of the BIOS somewhere in it.
It also seems to me that the PC BIOS depended on certain ranges of the address space being laid out the way the original IBM PC’s were: the video RAM of the text screen, for example. Windows at that time might have had those dependencies too. But the Mac of course had no character-cell video at all.
Large portions of Apple’s pre-OS X system software were written in 680x0 assembly language. The switch to Intel processors had to wait until Apple had a new OS that was written from the start with portability in mind. Even so, many PC cards won’t operate under OS X, because there are no OS X drivers for them (even though the hardware is compatible).
I don’t know the current situation on the two platforms, but in the pre-OSX days the Mac mouse was software-driven, while the Windows mouse was hardware-driven. I learned this from an anecdote about Bill Gates talking to Steve Jobs in the earliest days of the Mac when MS was first writing Word (which appeared on the Mac first). According to the story, Gates asked Jobs what hardware he was using to drive the mouse, and Jobs was surprised that the leader of the world’s largest software company would immediately assume a hardware solution.
As far as overall hardware convergence goes, it started with the Mac G3 machines. That’s when Apple started to drop much of their proprietary components in favor of “industry standard” components: adopting USB instead of ADB for connecting keyboards, mice, and printers; switching to ATA/IDE hard drives instead of SCSI; switching to the same type of RAM used in PCs; adopting PCI slots along with the PC crowd; etc. This switch benefited me when the PC I had (built for me by a friend) suffered a C: drive crash (among other hardware problems). I had fortunately reserved the C: drive for just Windows and applications, and stored all of my documents on the D: drive. I was able to remove the D: drive, install it in my G4 Mac’s extra drive bay, and recover all of my files.
That’s not necessarily a problem, though. Remember that a lot of the OS was written in and/or compiled into 680x0 assembly, but the PPC transition happened anyway. The PPC emulated 680x0 instructions, even for large chunks of the operating system. Subsequent releases of the Mac OS actually got faster as more and more of the Toolbox functions were updated to native code. Also, today the Intel version of Mac OS X has Rosetta, which is essentially a PPC emulator.
There was a long-ish period of overlap where the classic MacOS was made for both 680x0 and PowerPC machines. (System 7.1.2 through Mac OS 8.1 — about 1994 to 1998.) Theoretically, if there had been some motive to do it, Apple could have made an Intel version as well.
Sure, they could have, but what would the point have been? The reason Apple went to x86 was twofold: 1) better price/performance and 2) better low-power (laptop) processors.
If Apple had an emulated OS, they would have taken a huge step backwards in performance by moving to x86. As evidence, note how they dropped support for 6800 applications on Intel machines.
Well no point at all of course, in our timeline, with events unfolding as they did. But theoretically Apple could have picked the Pentium as their new CPU, back around 1993, instead of the PowerPC.
What I’m saying, and what I think Balthisar is saying, is that the classic MacOS was already portable enough to undergo a radical switch in architectures — though admittedly while needing the help of a 68K emulator for many years as various chunks of the OS were ported over. The same process could have happened with x86 as the destination, if that had been the choice.
I believe the porting process was helped by the fact that most of the OS code was in C by the early 90s. By the time MacOS 8.5 was released, I think the only remaining 68K bits were there because they were actually written in 68K assembly, and were inconvenient to translate. (This is what I remember reading; I could be mistaken.)
I didn’t know they ever supported the 6800.
Man, that was a cool little 8-bit chip. It will be missed.
6800, 68000 - what’s a factor of 10 between friends?
Yes, once Apple made the PPC transition they could just as easily have gone to x86, although at the time the PPC was probably higher-performance. I was at the '88 developer conference where they were showing code running on an AMD 29000, but that obviously never went anywhere. There were complicated “political” reasons why moving to x86 wasn’t a good idea at the time, and if IBM/Motorola had been able to keep up the R&D, they probably would never have switched (although being able to run virtualized Windows is pretty handy).
A comparatively latter-day example of the kind of structural “deep difference” I am talking about is NuBus versus PCI.
Apple released OS X (in its original 10.0 incarnation) to be backwards-compatible with all G3 Macs and beyond, but did not deign to support older PowerPC Macs like the 9600 towers with their PowerPC 604e processors and so on. A guy named Ryan Rempel made a clever hack and released it to the Mac-using world as XPostFacto, and had users of 9600s, 7600s, 7300s, the pre-WallStreet “Kanga” PowerBook, Umax clones, DayStar clones, Power Computing clones, etc. etc., booting up OS X. He kept doing it as OS X developed further and Apple kept cutting off more legacy users: no more old-world pre-USB machines supported after 10.2, but XPostFacto provided drivers for serial-port & ADB Macs with old-world ROMs to boot Panther and, later, Tiger (which also turned up its nose at non-FireWire Macs), and so on. Quite a bit of “borrowing” of older drivers and bits and pieces and whatnot, including apparently in some cases incorporating someone-or-other’s freeware hacked-together drivers for bits and pieces for which an Apple OS X driver had never been written.
But never for a NuBus-era Mac. I don’t mean “sorry, but there’s no way we can make a driver for your NuBus CARD”; I mean no possibility of getting OS X to boot on those machines, period. Even for Macs like the 6100 that didn’t actually have a NuBus card slot, but which apparently still had the older controller circuitry on the motherboard (?). This apparently was a quantum leap in difficulty from simply writing a driver that would let OS X deal with some minor nonstandard device, despite NuBus not being a closed proprietary architecture itself. From what I read, the amount of change to OS X that would have been necessary to let it boot on a NuBus-era Mac was insurmountable for anyone short of a full team getting a full-time salary to work on it. Whereas little freeware hacks exist to print to old Mac serial-port printers like the StyleWriter or to use an old flatbed SCSI scanner that never had an OS X driver.
I wonder if that’s an Open Firmware booting issue, not so much a NuBus/PCI issue. All PCI Macs have Open Firmware. I think all NuBus Macs lack it, and instead fall into the classic Mac ROM, which wasn’t designed to accommodate alternate operating systems. (You can’t intercept the boot sequence before it plows ahead, searching for an HFS volume with a Mac System folder on it. By the time your OS can seize control, the machine is already running MacOS.)
The solution to that is called a second-stage bootloader: make the HFS volume with all of the stuff the ROM is expecting, but the code the ROM hands control off to is actually a small program that loads the real OS from elsewhere on the disk. In principle, you can daisy-chain loaders like this indefinitely, with none of them getting wise to the setup. (This is kind of how you can dual-boot Linux and Windows NT: GRUB is in the MBR, so if you choose to boot NT, GRUB chainloads NT’s bootloader, which is convinced it’s the only bootloader on the system, and it loads NT. It isn’t a secret and it isn’t at all frowned upon in circumstances like these.)
I was thinking such a thing ought to be possible, and composed part of a post, but then shied off due to feeling ignorant of how the early stages of boot actually happen.
So… back to the question… if it’s that freaking easy, it seems like there would have been an XPostFacto boot loader to run OS X on NuBus Macs. It’s no big issue today, but there were a surprising number of folks seriously disappointed that no such thing appeared, back in the day.
And the word that came back from the proficient hackers was “totally ain’t gonna happen” and we were told it had to do with NuBus being a massively different architecture, or using different interrupts, or some such thing, where you couldn’t just ignore them and couldn’t for some reason just hack out a driver, “NuBus.kext” or some such thing, and expect that to work.