BeOS

Whatever happened to BeOS? I remember downloading it over my 56k all night and installing its big half-gig image on my Pentium II. That machine only had 128 MB of RAM, but BeOS could scream on it; it could play a different video on each face of a cube and rotate the whole thing. On my old, super-lame computer. It seemed to turn the machine into a superhero.

Why did it fizzle? Did someone buy the good parts of it and use them?

So Haiku is your only current option:

Be Inc. is defunct, isn’t it? That might explain some of what happened. The IP was bought by Palm, which spun off PalmSource, and so on and so on, so presumably someone has the right to use “the good parts”. According to Wikipedia, the last version of BeOS is dated November 2001, and the last version of Zeta is dated 2007.

Haiku has a version dated 2018, and is based on open source, so if you want to actually use it, that might be the place to look. They claim that “the 32-bit release is compatible with the BeOS at a binary and API level” and “can run most BeOS applications without modification or recompiling.”

Be bet everything on getting Apple to buy them to save the Mac platform. Didn’t happen. Be gone.

Yep - this was their goal almost from the beginning, and it’s why they moved from Hobbit to PowerPC for the BeBox platform. Apple ended up acquiring NeXT (including Steve Jobs) instead, and NeXTSTEP still underpins a large chunk of macOS and iOS to this day.

I miss my BeBoxen. :(

I miss my NeXTcube; those were awesome. I confess I never got a BeBox, but I had some Macs with PowerPC processors. (I wish they still made affordable POWER workstations - do they?) Anyway, all of the above are easy to emulate today, if you really need to, say to run the odd old program or game that never got ported.

POWER is still under active development, with POWER10 planned for release next year. But AFAIK the only application these days is in IBM mainframes.

They tried hard, too. MacWorld or some other magazine I subscribed to at the time came with an early version on CD. I remember how amazing it was on my G4.

When I was a Computer Science prof. I’d talk to BeOS folk from time to time. It survived far longer than I thought it would.

I still have a BeOS t-shirt that I wear here and there. And two BeOS labeled Pilot G2 pens that I keep refilling. One is my everyday crossword puzzle, etc., pen.

I have a decent amount of trade show swag from dead companies. (Plus an unworn launch t-shirt for Windows XP, which is only mostly dead. The OS, not the shirt. ;))

What in particular made it run so well? It was years before I had a Windows computer that could do the video cube thing they showed off for it. A hobby of mine is taking outdated hardware and making it run better (this laptop I am typing on is a frankenlaptop with a secondary monitor that is 17 years old; it still runs Windows 10 and boots in maybe 15 seconds.)

BeOS was pretty lightweight for the day, and IIRC had a good software OpenGL implementation. But their “pervasive multithreading” was a bit ahead of its time (hardly anything had more than one CPU back then), and - unlike NeXTSTEP and Windows NT - it was a single-user OS with hardly any security (like Windows 9x). Apple and Microsoft correctly decided that it was a better plan to take a robust, multi-user platform and turn it into a consumer OS rather than try to completely re-engineer a weak, flimsy single-user OS. Thus, Apple turned the (Unix-like) NeXTSTEP into OS X, and Microsoft took Windows NT and turned it into XP. They probably could have made Windows 2000 a consumer release had hardware driver support been better.

BeOS boots fast because as far as providing services and APIs for applications, it doesn’t really do all that much. If Windows 95/98 were able to boot on modern platforms they’d come up pretty quickly too.

IBM mainframes use the z processor (z13 currently), although I think IBM puts all of its CPUs under the POWER umbrella even though the z CPU is completely different from the POWERx CPUs.

IBM midrange systems (the i series (AS/400) and the p series (Unix, RS/6000)) both use POWER processors like POWER9 and POWER10.

POWER has also been opened up (OpenPOWER) - Google is using them in production and some other companies are working on systems for the cloud.

I still miss Itanium. Those things were bawss.

Given that they will still be made through July 2021, this is a bit premature.

a bawss repeat of all of the mistakes of the RISC era?

Which mistakes were those? The ideas were sound and have been incorporated into CPUs in general.

Relying too much on the compiler to optimize code ahead of time instead of letting the CPU do it at run time. The idea behind RISC was “simplify the instruction set, use a simpler in-order CPU, run the CPU really fast, and let the compiler optimize the code.” The idea behind EPIC (Itanium) was “bundle multiple instructions together, feed them into a fast in-order CPU, and let the compiler optimize instruction bundling for the available execution units.”

Both approaches work fine for workloads like scientific/HPC computing, where the CPU spends most of its time crunching through large datasets running predictable calculations over and over. But general-purpose workloads aren’t very predictable; stuff like games, web browser engines, etc. can get very “branchy,” and if the compiler guesses wrong on a RISC CPU it can kill performance thanks to “bubbles” in the pipeline; i.e., the next instruction can’t execute because it’s waiting on a dependency, so its place has to be filled with NOPs (no-operation instructions, literally “do nothing”) until the dependency is met and the instruction can execute. Similarly, on EPIC, if the compiler guesses wrong, the instruction bundles have to be padded with NOPs just to assemble them.

The consumer PC market was fine with settling on a rather ugly architecture (x86) because Intel and AMD poured billions into making x86 CPUs fast by letting them re-order instructions on the fly (among other things) to keep their pipelines as full as possible.

(yes, I know ARM has exploded over the past decade thanks to smartphones, but the ARM architecture has grown so much and become more complex with the move to out-of-order execution, superscalar pipelining, etc. that it doesn’t really resemble the original RISC concept anymore.)

Itanium had some flawed assumptions, but RISC ideas are generally used everywhere now (simpler instructions, separate memory operations, pipelining, etc.). Intel x86 uses a RISC-like core at the lowest level because of those advantages, while preserving CISC-like instructions (for backward compatibility) that get translated.

I think your comment is more correct for Itanium than for RISC in general.

I wonder if this is collectible. I don’t think it’s ever been worn.