How does Apple cool their computers?

As far as I know, enormous heatsinks with heat pipes (but not multiple fans) are standard features of modern stock coolers shipped with CPUs. I'm still using a relatively ancient Phenom II CPU with the stock heatsink, and it is enormous and heat-pipey. (I remember my succession of cooling technology: my 486 had no cooling system at all; my Pentium had a tiny heatsink glued on, fins maybe a quarter inch high. Then fans and bigger fins were added, then even bigger fans and even bigger fins, then heat pipes…)

Another thing, do we know yet how quiet the iMac Pro is at full power? You can get away with a small heatsink & fan if you run the fan really fast, but that creates a lot of noise. This is why laptops and mini-desktop computers tend to be so noisy.

The iMac Pro is due at the end of the year. There is a great deal not yet known about it. Price, for one. Even the selection of processors is unclear; the mooted top-end 18-core i9 is vapourware as well. They have a lot of time to change their minds.

I would buy one if I had the money. I have need of the compute power, and like working in the MacOS environment.

Intel OEM heatsinks are actually quite small. I was surprised when I bought my first ever retail-packaged i7 processor: it came in the Intel box with a heatsink and stickers, and the heatsink was a lot smaller than the aftermarket one I had purchased. So that "PC" heatsink is not OEM by any means.

I did a quick look on the Intel ARK site; I was looking for something like a duty cycle for max processor usage given the included heatsink design and anticipated case airflow.

That being said, I do not think Intel was expecting that, with the provided heatsink, the processor would sit at 100% CPU across all cores for sustained periods of time. Probably just bursts here and there of five minutes or so. And every gamer knows this, which is why we buy enormous heatsinks and put heatsinks on north/south bridge chipsets and even the RAM. If you plan on gaming for four hours at full tilt, there is no way the stock config will work.

As for Apple, I also don't think they are anticipating 100% CPU on all 18 cores. It's a desktop all-in-one, not a server in a render farm. Why on earth would you buy an all-in-one for that? It's got 18 cores for people who might do video production and need all of them to keep their apps up and running without crawling as they switch between them. As someone else stated, this thing will throttle down hard if you max out CPU usage on all 18 cores. It will probably get really loud before then, too.

I did find an 18-core that was 115W. The graphics card will also be curtailed, like what you'd find in a laptop, I imagine; in my nVidia experience that can mean running at something like 50% of the desktop part's TDP, it's that throttled.

So: a case-specific heatsink that had actual thought behind its design, a more than likely throttled processor, and a mobile variant of the GPU… that's a lot of money to spend on what amounts to a laptop on a stand, held in a vertical position.

That's partly because Intel and AMD have no idea what kind of case you're using, nor the airflow through it. System manufacturers such as Dell and Apple can lay out their systems so the CPU/GPU cooling can take advantage of directed airflow, escaping the need for dedicated fans or enormous heatsinks.

Would Apple’s control over the OS also have a role in this? I’ve owned Macs and PCs, and in my experience Apple products seem to function well even when they have lower specs than PC alternatives. From what I understand this comes down to the fact Apple can customize all facets of the product – that is, it can maximize the efficiency of the hardware via software because it designs both (in contrast to most PCs).

Nobody knows how the iMac Pro will handle high loads. The 2014 iMac 27 would thermally throttle under high loads, but this was improved for the 2015 model which does not: https://photos.smugmug.com/photos/i-NKkjs7T/0/33f8c0f2/XL/i-NKkjs7T-XL.jpg

The table is from a 2014 vs 2015 iMac thermal test: https://www.youtube.com/watch?v=PErtLOvMcM0

I spend hours every day editing and transcoding 4k video on a 2015 iMac 27. Right now it is running a 24-hr transcoding job, and it's not excessively loud or thermally throttling.

What Francis Vaughn said is absolutely correct. It isn’t just the iMac where you see great investment in thermal design. This Cray server blade may not look impressive, but it dissipates 2,400 thermal watts: http://www.hpc-ch.org/wp-content/uploads/2015/09/MeteoSwiss_CS-Storm_blade_full.jpg

You can’t do that from parts you buy on NewEgg. The heat sinks and thermal engineering are custom designed. You wouldn’t guess that from just looking at it, just like you wouldn’t guess an iMac Pro (which looks superficially like past models) has greatly improved thermal management.

Many beige box PC makers are still using the ATX form factor from 1995. It's not worth it for them to run elaborate CFD and thermal modeling codes and source entirely new component manufacturing lines. Apple (like designers of high-end server blades) can afford that, so they do. We won't know how well the improved iMac Pro thermal design works under high stress until December. That will likely be one of the first things serious professional users try.

Hrmm… that's an actual quad-core proc too. TDP maxes out at 95W, so that isn't bad, but the actual draw would be lower if the program uses fewer than all four cores. Simply out of curiosity, can you check and see if all four cores are pegged? I'd be surprised if they are, since then it would be borderline unusable. That model can also have a really power-hungry Radeon M395.
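If you want a quick way to check, something like this rough sketch will print per-core load (assuming the psutil package is installed; Activity Monitor's CPU History window shows the same thing graphically):

```python
# Rough per-core load check; assumes psutil is installed (pip install psutil).
import psutil

per_core = psutil.cpu_percent(interval=1.0, percpu=True)  # sample for 1 second
for core, pct in enumerate(per_core):
    print(f"core {core}: {pct:.0f}%")

pegged = sum(1 for pct in per_core if pct > 90)
print(f"{pegged} of {len(per_core)} logical cores above 90%")
```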

The trick is to find something that will use all of what the CPU and GPU have, which is unlikely to happen at the same time. Even for a game, as you have four cores and eight threads on hand. You'd have to try one of those "burn-in" tests.
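If nothing else is handy, even a crude sketch like this will peg every core for a minute (one busy Python worker per logical core; a proper burn-in tool stresses the hardware far harder, so treat this as illustrative only):

```python
# Crude CPU "burn" sketch: one busy worker per logical core for 60 seconds.
import multiprocessing as mp
import time

def burn(seconds: float) -> None:
    end = time.time() + seconds
    x = 0.0001
    while time.time() < end:
        x = (x * x + 1.0) % 1e9  # pointless math just to keep the core busy

if __name__ == "__main__":
    n = mp.cpu_count()
    workers = [mp.Process(target=burn, args=(60.0,)) for _ in range(n)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    print(f"kept {n} logical cores busy for 60 seconds")
```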

Also, not at all surprised that blade has a 2,400W TDP as a whole. The GPUs are ducted in a push/pull fan config in what looks like a 2U chassis. It also probably screams at max TDP.

:smiley:

Another vote for a most excellent takedown.

Who’s using liquid cooling other than overclockers, or people that want to look cool?

You cannot compare rack mounted devices designed to go into a data center with carefully designed air handling. Plus, any data center is loud. Apple has to design these things to be quiet and not burn a hand. And many Apple devices get very close to burning hands! As well as roasting the interiors and destroying video cards. Apple has had multiple recalls/warranty extensions due to failing components from heat.

This looks pretty cool.

Anyone running an AMD FX-9590 Vishera 8-Core? :frowning:

I just cannot see how 18 cores is useful for anyone but people who plan to have high load most of the time. Is the idea that you have only a few working at a time, and then switch off? I wouldn’t think that would work for heat throttling, unless the cores aren’t right next to each other.

I do know that the Xbox One X is using vapor-chamber cooling, which is allowing Microsoft to build a much more powerful system in a much smaller form factor than the original. Is Apple using the same tech?

Granted, my CPU is overclocked but my main reason for moving from air to water was to decrease noise from all the fans (by eliminating them, naturally).

18 cores can be useful in shorter bursts. The whole problem with an interactive system is that the time you wait is time you are not productive. An 18-core device has the potential to slash compute times for a whole raft of tasks that might only be relatively short; dropping a five-minute task to one minute is something a lot of people would pay for. Especially if it is a common task.
I do a lot of serious mathematical compute: many FFTs, much matrix work, and a lot of compute-intensive but highly parallelisable algorithms. However, the runs are typically only a few minutes to ten-odd minutes. An 18-core would be really nice.
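To give a flavour of the pattern (a sketch only, with numpy assumed and the sizes picked arbitrarily, not my actual code): the work is embarrassingly parallel batches, so spreading it across every core is close to free, and an 18-core part should cut the wall-clock time nearly 18-fold, heat throttling permitting.

```python
# Sketch: farm a batch of independent FFTs out to every core.
import multiprocessing as mp
import numpy as np

def fft_job(seed: int) -> float:
    rng = np.random.default_rng(seed)
    data = rng.standard_normal(2**20)      # a ~1M-point signal
    spectrum = np.fft.rfft(data)
    return float(np.abs(spectrum).max())   # return something small per job

if __name__ == "__main__":
    with mp.Pool() as pool:                # one worker per core by default
        peaks = pool.map(fft_job, range(256))
    print(f"256 transforms done, largest peak = {max(peaks):.1f}")
```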

But doing things like major rebuilds of the code base can flatten every core for ages, and that will almost certainly get me into heat throttling. I don't know if the make system will actually run 18 parallel compiles; I sort of doubt it. Maybe with some tweaking.

The clock speed boost option of these chips does indeed distribute the load onto cores that are not adjacent. You don't get all that huge a speedup, but it is useful for some. There are applications that are best served by running on a single core and shutting down all the other cores on the chip. There are supercomputer systems filled with high-end Xeon chips that only run a single core for just this reason. (Dedicated access to the entire chip's L3 cache is a big win here as well.) Wasteful in a way, but you do what you need to do.
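(On Linux you can even ask for this explicitly by pinning a process to a single core; macOS has no public affinity API, so the snippet below is purely illustrative of the idea, not something you'd run on the iMac Pro itself.)

```python
# Linux-only sketch: confine the current process to core 0 so it has the
# rest of the chip (and, in effect, the shared L3) largely to itself.
import os

os.sched_setaffinity(0, {0})                    # pid 0 means "this process"
print("now restricted to cores:", os.sched_getaffinity(0))
```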

The 18-core chip has come in for a bit of questioning; many regard it as Intel's vapourware riposte to AMD's 16-core Threadripper, which it almost certainly is. No matter. Apple have claimed a design that will cope with the energy dissipation, and we hope they will deliver.

The task I'm most familiar with that can fully utilize a multi-core CPU for a long time is rendering CG. In CG I've created, even relatively simple-looking frames can take half an hour or more to render, fully utilizing my 4-core CPU thanks to the computation-intensive algorithms needed for realistic lighting. Pixar-level complexity and lighting takes hours per frame on more powerful CPUs than mine. But having to throttle back CPUs already operating at a pathetically low clock speed kind of defeats the purpose of having them in the first place.

Other programs that I use that are embarrassingly parallel, or close enough to it, include an OCR program that OCRs each page of the document in parallel, with as many threads as you have cores, and the ebook management program Calibre, which creates parallel threads for as many cores as you have when you are batch-converting books. But you typically only run those programs for brief periods, and even then you might be better off with a CPU with fewer cores and a higher clock speed. (I'm not asking for much: just a 16-core chip that runs at 5ish GHz and inside of 100 watts!)
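The pattern both of those programs use is roughly one worker process per core, one page (or book) per task. A minimal sketch, with a made-up process_page stand-in rather than the real OCR code:

```python
# Sketch of the one-task-per-page, one-worker-per-core pattern.
# process_page is a placeholder, not the actual OCR program's code.
from concurrent.futures import ProcessPoolExecutor
import os

def process_page(page_number: int) -> str:
    # Stand-in for the real per-page OCR/conversion work.
    return f"page {page_number} handled by process {os.getpid()}"

if __name__ == "__main__":
    pages = range(1, 101)                    # a 100-page document
    with ProcessPoolExecutor() as pool:      # worker count defaults to core count
        for line in pool.map(process_page, pages):
            print(line)
```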

See, that doesn’t make all that much sense to me unless you are doing that same five minute operation over and over. And then you get the same overheating problem.

I also don’t expect what you are describing to be a common use case. The main thing I’d assume an actual consumer would want from a whole bunch of cores would be some sort of video rendering, probably CG-based.

I mean, that is the main market for actual work being done on Macs–artistic stuff.

Guess you’ve never been around a research university? :smack: