How can the Apple cube computer have no fan?

But that, IIRC, was a battery problem, having nothing to do with the heat generated by the processor. Apple has made much of the difference in heat output between the PPC chip and the Pentium, going back to an early PPC promo that demonstrated roasting hot dogs over an uncooled Pentium. DEC did the same sort of thing with the Alpha: right after its introduction, they had Alpha-based machines at Comdex with Plexiglas sides and a handhole cut out so that attendees could stick a hand inside and feel how cool the chips were.

Actually, I think the really big problem was all them damned hot coals.

I don’t know what incident you’re referring to, but I suspect you mean Hitachi (the electronics company), not Hibachi (the miniature barbecue).

I’m having visions of a laptop you bolt to your balcony railing and invite friends over to use…

smacks self in head

“Read the whole thread before you quote, dope!”

Digging up the old hijack: There is radiative heat in space, just no convection or conduction. At the minimum, you’ve got the microwave background radiation (2.7 kelvins, not 2.3, btw), but we’re right next door to a star. Without an atmosphere to moderate things, the sunward side of anything in, say, Earth orbit will get very hot, whereas the dark-side temperatures will plummet, reaching the ambient 2.7 kelvins if they’re left in the dark long enough and the conductivity is low enough. There are a few possible solutions to this: First, you can just design your satellite to tolerate these extreme temperatures-- I think that Gravity Probe B, with no instruments onboard except a laser rangefinder, might use this method. Second, you can include a supply of cryogenic materials on board at launch, to keep the satellite cool for a while. This was the method used by the WIRE mission, and it was a leak of the cryogenic material that scrapped the mission. The third possibility is to rotate your satellite fast enough that neither side gets too hot while it’s in the sun-- I think that this is (approximately) what the Shuttle does. The fourth method, and the most commonly used, is to just have some big honkin’ heat sinks on the shaded side of the satellite, and make sure that it’s conductive enough to keep the sunward side cool, too.
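For a feel of how hot the sunward side gets when radiation is the only way to shed heat, here’s a back-of-the-envelope sketch using the Stefan-Boltzmann law (the absorptivity and emissivity figures are made-up illustrations, not real spacecraft coatings):

```python
# Back-of-the-envelope radiative equilibrium for a flat plate in Earth orbit.
# Absorbed solar power = emitted thermal power (no convection/conduction in vacuum).
# The absorptivity/emissivity numbers here are illustrative, not real spacecraft values.

SIGMA = 5.670e-8      # Stefan-Boltzmann constant, W/(m^2 K^4)
SOLAR_FLUX = 1361.0   # solar constant near Earth, W/m^2

def equilibrium_temp_k(absorptivity, emissivity, radiating_sides=2):
    """Sunlit equilibrium temperature of a thin plate, in kelvins."""
    # alpha * S * A = epsilon * sigma * T^4 * (radiating_sides * A)
    t4 = (absorptivity * SOLAR_FLUX) / (emissivity * SIGMA * radiating_sides)
    return t4 ** 0.25

if __name__ == "__main__":
    print(equilibrium_temp_k(0.9, 0.9))   # ~331 K (~58 C) for a dark, absorbent plate
    print(equilibrium_temp_k(0.2, 0.9))   # ~227 K (~-46 C) for a reflective one
```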

We now return you to your regularly scheduled thread, already in progress.

Yes, and that’s an example of a space qualified microprocessor. By PC I meant computers with standard architectures, such as Macs or Wintel (“IBM compatible”) machines. Such PCs are sometimes used aboard the Shuttle to control experiments, but never in mission-critical applications. When the space-qualified PowerPC chips become available they will be incorporated into custom-designed circuits, not Mac motherboards.

Umm…I owned an SE. And before I owned my own, I was a student at a college that installed some of the first SEs, the ones that had two floppy drives and still no hard drive.

They had fans. The ones BEFORE the SE, the Mac Plus and 512Ke and 512 and the original 128–THOSE had no fans.

Anyway, to return to the original subject…the G4 is an inherently simple chip, aside from the AltiVec unit. It has a short pipeline, it doesn’t do a lot of speculative execution (“spec ex”) or a whole hell of a lot of out-of-order execution (OOE), though it does a bit of both, and the instructions it passes on to AltiVec are handled elegantly with no mode-switching. It is a cool, efficient, almost stripped-down chip, although it probably is too complex to pass for pure RISC. The Pentium III and the competing Athlon have to deal with the need to process native x86 instructions and therefore have a much longer pipeline and lots more emphasis on spec ex and OOE, because the price in speed of goofing up in the guesswork about what will be needed when is a lot higher if you have a longer pipeline. Why a longer pipeline? Because those variable-length x86 instructions have to be busted up into RISC-like micro-operations of uniform size, which takes up several extra pipeline stages. The Pentium III and the Athlon, rather than being genuine inheritors of the 486, are essentially 486 emulators in RISC-ish hardware.
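To put a rough number on why a longer pipeline makes mispredicted guesswork expensive, here’s a toy model (the pipeline depths, branch frequency, and misprediction rate are illustrative guesses, not measured figures for any of these chips):

```python
# Toy model: effective cycles-per-instruction (CPI) when a mispredicted branch
# flushes the pipeline. A deeper pipeline means a bigger flush penalty, so the
# same prediction accuracy hurts more. All numbers below are illustrative only.

def effective_cpi(base_cpi, pipeline_depth, branch_fraction, mispredict_rate):
    # Each mispredicted branch wastes roughly a pipeline's worth of cycles.
    return base_cpi + branch_fraction * mispredict_rate * pipeline_depth

short_pipe = effective_cpi(base_cpi=1.0, pipeline_depth=4,
                           branch_fraction=0.2, mispredict_rate=0.1)
long_pipe = effective_cpi(base_cpi=1.0, pipeline_depth=12,
                          branch_fraction=0.2, mispredict_rate=0.1)

print(f"short pipeline: {short_pipe:.2f} cycles/instruction")  # 1.08
print(f"long pipeline:  {long_pipe:.2f} cycles/instruction")   # 1.24
```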

The astonishing thing is that the Intel architecture is not getting its butt kicked and is instead causing the PPC architecture to be pressed to the wall to keep up. Some of this is economy of scale (Intel is not happy eating dust and has the bucks to throw at R&D). Some of this is awkwardness between Motorola, Apple, and IBM – IBM is working on what would essentially be G4 without AltiVec but at a much faster clock speed; Motorola is trying to get the hang of cranking up the clock speed on the G4 with AltiVec; and Apple has opted for the Motorola version under the G4 name but may seriously consider the IBM version under a different trademark, perhaps for PowerBooks and iMacs.

Intel, meanwhile, would like to see you PC users accept software-emulation-level performance on legacy applications (and even operating systems) long enough to get you onto a post-x86 processor that doesn’t have to deal with variable-length instructions and legacy gobbledegook. They know they can get folks to make servers using the new 64- or 128-bit processors, but they’d really like to see the average PC go that route too. This is a tough sell as long as someone, anyone, is willing and able to tweak a processor that will directly crunch an x86 instruction set a bit faster, and the money continues to flow in that direction, including Intel money.

Meanwhile, if Motorola figures out how to fabricate a reliable 1.5 GHz+ G4, it could be within reach of the PPC dream: to emulate the x86 instruction set via software executing on a fast PPC system as fast as or faster than it can be run on a complex decode-and-guess-and-execute CRISCoid chip like the Athlon or PIII, while running native apps considerably faster. The idea there is that it might be possible to zing simpler instructions much faster on an uncomplicated chip architecture than to slam-dunk a constant stream of messy instructions that must be translated and guessed at first by a far more complicated chip architecture that eats more electricity and whatnot in the process.
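For the flavor of what a software emulator is doing on every single guest instruction, here’s a minimal interpreter-loop sketch (the two-instruction toy ISA is invented for illustration; real x86 decoding is far messier, which is exactly the overhead the emulator pays in software):

```python
# Minimal fetch-decode-execute loop of the kind a software emulator runs for
# every guest instruction. The toy instruction set here is invented; real x86
# (variable-length instructions, prefixes, flags) is much harder to decode.

def run(program):
    regs = {"a": 0, "b": 0}
    pc = 0
    while pc < len(program):
        op, *args = program[pc]          # fetch + "decode"
        if op == "load":                 # load <reg>, <constant>
            regs[args[0]] = args[1]
        elif op == "add":                # add <dst>, <src>
            regs[args[0]] += regs[args[1]]
        else:
            raise ValueError(f"unknown opcode {op!r}")
        pc += 1                          # execute, then advance
    return regs

print(run([("load", "a", 2), ("load", "b", 40), ("add", "a", "b")]))  # {'a': 42, 'b': 40}
```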

So far Intel has frustrated the PPC consortium and has refused to fall behind except on a performance-per-MHz or a performance-per-$ level.

AHunter3:

Sorry, I must be remembering the SE/30s. I definitely remember that we had a bunch of them with the original Mac form factor, with 20 MB disks, and they were fanless, but they did tend to get hot on top. As I remember, they had slightly different venting that was perhaps a less effective convection scheme.

I guess it depends on your benchmarks, but if the BYTEmark is any indicator, this has already happened. Two similarly clocked machines, one a Pentium II-based PC and the other a G3 Mac running SoftWindows, were run against the BYTEmark, and the Mac eked out a slight lead. Of course, you might argue that SoftWindows cheats, because rather than use the stock Wintel I/O drivers, SoftWindows substitutes native Mac drivers to speed up disk I/O and screen drawing. These are not really a function of the processor, though, so I tend to think it is a fair test.

Yeah, I know…the BYTEmark is for some reason regarded as suspect by Wintel partisans, and other benchmarks such as WinBench won’t run on VirtualPC or SoftWindows because they do some testing to make sure the results were honestly achieved…VPC and SW don’t cheat, but the emulated clock speed doesn’t “stay put” the way a hardware computer’s CPU does. To observe this, run a system-info PC application of the sort that reports CPU MHz, then run it again, and you’ll see the results bounce around a bit. Oh, it’s a 167 MHz Pentium…no, it’s a 203 MHz Pentium…eww, a 99 MHz Pentium? Ooh, a 313 MHz Pentium! Anyway…
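Those system-info tools typically estimate MHz by timing a fixed amount of work against the wall clock; under an emulator the guest’s effective speed varies from moment to moment, so the estimate jumps around. A rough sketch of that measurement idea (a real tool would count actual CPU cycles, not loop iterations; the numbers here mean nothing in themselves):

```python
# Rough sketch of the "estimate clock speed by timing a busy loop" trick.
# A real system-info utility counts CPU cycles over a fixed wall-clock interval;
# under emulation the guest's effective rate fluctuates, so repeated runs
# report different "MHz". Figures printed here are purely illustrative.

import time

ITERATIONS = 5_000_000

def estimate_loop_rate_mhz():
    start = time.perf_counter()
    x = 0
    for _ in range(ITERATIONS):
        x += 1                            # stand-in for a fixed unit of "work"
    elapsed = time.perf_counter() - start
    return (ITERATIONS / elapsed) / 1e6   # millions of loop iterations per second

for _ in range(3):
    print(f"apparent speed: {estimate_loop_rate_mhz():.1f} 'MHz'")
```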

We could claim to “do PC” better than a real PC can at identical MHz, but a PC aficionado who could afford a G4 is not usually comparing it to a 500 MHz Pentium III box. The eclipse happens when the fastest PPC machines can execute x86 instructions via a software emulator essentially as fast as the FASTEST natively x86-compatible machines available can. That’s a scenario we always assumed would require the x86 architecture to hit a speed wall it couldn’t surpass first (i.e., that emulating the x86 instruction set in hardware would end up being less practical and efficient than running a software emulator’s native code fast enough to attain comparable speed).

AHunter3:

Which is ironic since it was invented by a bunch of Wintel weenies in the first place. They were perfectly happy with it as long as it showed the Intel machines as being faster than 68K Macs, but as soon as the PowerPC Macs started to outpace the Wintel machines, they decided it was time for a new benchmark…

I hated to admit it when the Mac was getting beat by the BYTEmark, but it is a pretty reasonable benchmark. It tries to account for the way many people use computers and is a test of overall, practical performance. It’s infinitely more meaningful than the Sieve of Eratosthenes…
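For reference, the sieve-style benchmark being knocked here is essentially just a tight loop over one small array, something like the minimal sketch below, which is exactly why it says so little about real-world application performance:

```python
# A minimal Sieve of Eratosthenes: the whole "benchmark" is this tight loop over
# one small array, so it exercises almost nothing about real application behavior.

def sieve_count(limit):
    is_prime = [True] * (limit + 1)
    is_prime[0] = is_prime[1] = False
    for n in range(2, int(limit ** 0.5) + 1):
        if is_prime[n]:
            for multiple in range(n * n, limit + 1, n):
                is_prime[multiple] = False
    return sum(is_prime)

print(sieve_count(8192))   # 1028 primes up to 8192
```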

I’ve yet to see a benchmark that captures an overall productivity metric. This would be a goal-oriented benchmark that would capture OS idiosyncrasies that thwart productivity: crashes, software installation and de-installation, user-friendly multitasking, the number and complexity of mouse and keyboard actions, etc. Of course, there are certain objectives the PC couldn’t achieve at all, for instance shrink-to-fit printing (in applications that don’t support it). There are probably some Mac counterexamples, though I can’t think of any off the top of my head.

Interesting, I’ll have to check that out. Can you recommend a freeware tool that can give me this info?

I’m not sure I agree. First, there’s an argument that says that the PowerPC architecture gives an extra multiplicative effect, one that was not as noticeable at lower operating frequencies but becomes more pronounced as the speed increases. According to this theory, a 2X increase in processor speed for a PowerPC might translate into a 2.5X increase in performance, while the x86 is stuck on the 2X-for-2X performance line (assuming both processors are maintaining similar conventional Moore cycles). If this is true, then the game is virtually over, because the PowerPCs are now approaching the same clock speeds as the fastest Pentium/Athlon/Celeron/etc…

Also, even if you discount the ‘extra’ multiplier for the PowerPC, the x86 doesn’t have to hit a wall to fall behind; it merely has to (as a friend of mine put it) hit a pillow. A couple of Moore cycles with only a 1.5X performance improvement would make it unlikely that an x86 would ever catch back up to the PowerPC.
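To see how quickly a “pillow” compounds, here’s the arithmetic as a quick sketch (the per-cycle multipliers are the hypothetical figures from the argument above, not measured data):

```python
# Compounding the "wall vs. pillow" argument: start both architectures at the
# same relative performance, then apply a per-Moore-cycle multiplier.
# The 2.5X, 2X, and 1.5X figures are the hypotheticals from the discussion above.

def project(start, multiplier, cycles):
    perf = [start]
    for _ in range(cycles):
        perf.append(perf[-1] * multiplier)
    return perf

ppc = project(1.0, 2.5, 3)          # PowerPC with the claimed extra multiplier
x86_normal = project(1.0, 2.0, 3)   # x86 on the plain 2X-per-cycle line
x86_pillow = project(1.0, 1.5, 3)   # x86 after hitting a "pillow"

print("PPC:        ", ppc)          # [1.0, 2.5, 6.25, 15.625]
print("x86 normal: ", x86_normal)   # [1.0, 2.0, 4.0, 8.0]
print("x86 pillow: ", x86_pillow)   # [1.0, 1.5, 2.25, 3.375]
```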

Regarding NASA, there’s “space qualified” and “space” qualified. Depends on what you want to do.

The crew does use laptops in the cabin environment. That’s a shirt-sleeve environment, with regular temperatures and pressures. That takes one level of qualification - meeting one level of requirements.

Then there are processors used in external systems - satellite functions, payload bay payloads, the new EVA display panels (computer panels on the suits for checklists and other data), etc. Those have completely different requirements.

Scr4 gives a good explanation of cooling things on satellites. You get heat buildup from the computer chips and circuits, and from sitting in the sun. Cooling comes from sitting in the shade and from big radiators. Sizing the radiators is important. Spinning helps by alternately switching from sun to shade, moderating temperatures. Low Earth orbit has a 90-minute cycle for Shuttle orbits - every 45 minutes there’s a change from light to dark or dark to light. This moderates the extremes, keeping temps between, oh, -100 deg F and +300 deg F (depending on the particular hardware, metal of the frame, exposure, blah blah blah).
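On “sizing the radiators is important”: a crude sketch of the sizing arithmetic, treating the radiator as an ideal panel radiating to deep space (the emissivity, panel temperature, and heat load are made-up example numbers, and sunlight or Earth-shine on the panel is ignored):

```python
# Crude radiator sizing: area needed to reject a given heat load purely by
# radiation (Stefan-Boltzmann). Emissivity, panel temperature, and heat load
# are example values only; real designs account for environmental heating too.

SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W/(m^2 K^4)

def radiator_area_m2(heat_load_w, panel_temp_k, emissivity=0.85, sides=2):
    # Q = emissivity * sigma * T^4 * A * sides  ->  solve for A
    return heat_load_w / (emissivity * SIGMA * panel_temp_k**4 * sides)

# e.g. dumping 10 kW of electronics heat from panels running at about 280 K:
print(f"{radiator_area_m2(10_000, 280):.1f} m^2")   # roughly 17 m^2
```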

For ISS, things like the batteries have large plates with fins that interface between the box and the cooling system. The cooling system uses ammonia (on the external parts) as the working fluid and pumps heat to the radiators. At one time they considered heat-pipe radiators (and may still go that way eventually), but for now they use traditional flow-through radiators. They’re a scissor design for launch stowage and deploy on orbit. How do the radiators work?

You have two competing systems. First are the solar panels - big sheets that you want facing the sun as much as possible. Then you have the radiators - panels you want facing away from the sun so sunlight doesn’t hit them. So what do they do? Mount them at right angles. The radiators trail behind the station on gimbals to stay edge-on to the sun and radiate to the sides. The solar panels rotate on separate gimbals to stay facing the sun.

Okay, hijack over.