Can you use silver as a substrate when plating copper with gold?

Subject says it all really, but a bit of background:

I was reading about copper CPU coolers being plated with nickel to prevent tarnishing and got to thinking about the thermal conductivity issues, with nickel being so much less conductive than copper. So why not use gold instead? Now, you can’t plate gold directly onto copper, as the copper will diffuse through it and tarnish the surface. You need a barrier layer to prevent this, like nickel… However, you can plate silver onto copper, but silver tarnishes. I know you can plate gold onto silver, but can you successfully and usefully plate gold onto silver-plated copper?

I think the answer is yes, but the question doesn’t really matter, insofar as you need to take the thickness of the plating into account when worrying about the conductivity issue. A thin layer of nickel plated onto the heatsink will make almost no difference to its effectiveness. This is especially true when you realise that although nickel has about four times the thermal resistivity of copper, silver or gold, the thermal grease used at the interface is between 5 and 100 times worse than nickel. Which is why it is so important to use as thin a layer of grease as possible, and yet one still sees great gobs of grease on most heatsinks.
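
To put rough numbers on that, here is a back-of-envelope sketch; the thicknesses, conductivities, contact area and power are all assumed, plausible values rather than measurements. The conduction resistance of a flat layer is t / (k × A), so a few microns of nickel plating costs you a temperature rise that is tiny next to that of a typical grease film.

```python
# Back-of-envelope comparison of layer resistances, R = t / (k * A).
# All numbers below are assumed, plausible values, not measurements.

def layer_resistance(thickness_m, k_w_per_m_k, area_m2):
    """Thermal resistance (K/W) of a flat layer in 1-D conduction."""
    return thickness_m / (k_w_per_m_k * area_m2)

area = 0.03 * 0.03  # assumed 30 mm x 30 mm contact patch, in m^2

r_nickel = layer_resistance(10e-6, 90.0, area)  # ~10 um nickel plating, k ~ 90 W/m.K
r_grease = layer_resistance(50e-6, 5.0, area)   # ~50 um film of decent paste, k ~ 5 W/m.K

power = 100.0  # assumed CPU dissipation, W
print(f"nickel plating: {r_nickel:.5f} K/W -> {r_nickel * power:.3f} K rise at {power:.0f} W")
print(f"grease film:    {r_grease:.5f} K/W -> {r_grease * power:.3f} K rise at {power:.0f} W")
```

With those assumptions the plating adds about a hundredth of a kelvin and the grease film about a kelvin, which is why the grease thickness is where the effort belongs.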

The best thing you can do is to lap the mating surfaces dead flat and use the absolute minimum of thermal grease possible.

You will need a surface chemist for the most authoritative answer, but it seems very likely the proposed benefit doesn’t merit the cost. Or, you know, someone would have done it already, at least for something wild, like an aerospace application.

Most high-performance coolers use copper heat pipes swaged into aluminum fins. The only exposed copper is where the cooler contacts the CPU, and this is protected from corrosion by the thermal paste. So, it’s not really an issue.

Because the point of the thermal management system is to keep the IC’s temperature within specification, not as cold as possible.

I think most people would insert ‘with as little noise as possible.’ Better thermal conductivity means a greater transfer of heat, which means the fan doesn’t have to rotate as fast, which means it’s quieter.

Not so much; heat transfer to the surroundings is far more influenced by the surface area of the heat sink and the airflow along the heat-transfer surfaces. A bigger heatsink with a bigger, slower fan can enable lower noise. Worrying about the thermal conductivity of the layer at the CPU-heatsink interface is silly compared with the overall thermal resistance of the heatsink itself.
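
To make that concrete, here is the same sort of sketch one level up, with a guessed convection coefficient and fin area on the sink-to-air side (again, every number is an assumption for illustration): the sink-to-air resistance dominates, and the plating at the interface is lost in the noise.

```python
# Compare the sink-to-air convective resistance, 1 / (h * A_fins),
# with the interface layers. All values are assumed for illustration.

h_air = 30.0     # W/m^2.K, assumed convection coefficient over fan-blown fins
fin_area = 0.5   # m^2, assumed total fin surface area of a tower cooler

r_sink_to_air = 1.0 / (h_air * fin_area)

contact_area = 0.03 * 0.03                 # same 30 mm x 30 mm patch as before
r_grease = 50e-6 / (5.0 * contact_area)    # ~50 um grease film, k ~ 5 W/m.K
r_plating = 10e-6 / (90.0 * contact_area)  # ~10 um nickel plating, k ~ 90 W/m.K

print(f"sink to air : {r_sink_to_air:.4f} K/W")
print(f"grease film : {r_grease:.4f} K/W")
print(f"Ni plating  : {r_plating:.6f} K/W")
```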

Whilst in the abstract this is clearly true, in practice, with a typical semiconductor CPU, you do want to get as much heat out as possible. There is no practical way of getting the system cold enough that it drops below useful operating temperatures, and there is a slight but useful advantage in getting the CPU colder. However, it is only slight, unless you are in the insane overclocking region, where even cryogenics comes within scope. Silicon still works at those temperatures.

In the past there was a general rule that keeping computers as a whole as cold as possible increased their reliability, and I certainly subscribed to that, as did service guys I know. Google published some studies that contradicted this, although there was enough room left to argue. One argument is that it is thermal cycling that causes the trouble. However, these arguments are about whole systems, and it is usually sockets, solder joints and capacitors that cause trouble, not the actual semiconductor integrated circuits. That said, if there is a manufacturing problem you can get silicon that gets into all sorts of trouble with temperature. I’ve been hit by that before, and that one was seriously costly for the vendor.

Getting the heat out is always critical, and proper computers are designed very carefully to do this. A good example of the big slow fans and huge heatsink area was the Apple G5 tower. They were lovely machines in their time.

One argument for gold at the CPU-heatsink interface might be that a sufficiently thick layer of pure gold may be malleable enough that, with enough pressure, no interface grease would be needed. But again, the wafer-thin layer has only a tiny effect relative to the thermal resistance of the entire system.

How much gold would you need? Gold is, of course, fantastically expensive. And it’s dense too, so a 1-2 mm depth would cost about $200.
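
The arithmetic is simple enough, though the answer swings a lot with the assumed contact area and the assumed spot price (both of which are just guesses below): a die-sized pad comes out around the couple-of-hundred-dollar mark, while covering a full heat spreader costs several times that.

```python
# Rough cost of a solid gold pad: mass = area * depth * density.
# Contact sizes and spot prices below are assumptions; the cost scales
# linearly with each of them.
GOLD_DENSITY_G_PER_CM3 = 19.3

def gold_cost(width_mm, height_mm, depth_mm, usd_per_gram):
    volume_cm3 = (width_mm / 10) * (height_mm / 10) * (depth_mm / 10)
    mass_g = volume_cm3 * GOLD_DENSITY_G_PER_CM3
    return mass_g, mass_g * usd_per_gram

cases = [
    ("die-sized pad, 1 mm deep", 15, 15, 1.0, 45.0),    # assumed ~$45/g
    ("full heat spreader, 1.5 mm", 30, 30, 1.5, 65.0),  # assumed ~$65/g
]
for label, w, h, d, price in cases:
    mass_g, cost = gold_cost(w, h, d, price)
    print(f"{label}: {mass_g:.1f} g of gold, roughly ${cost:,.0f}")
```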

You wouldn’t need anything like that thickness. However, gold is probably not the right answer. A bit of a look around suggests indium is a good bet; indeed, you can buy indium foils to use for heat transfer at interfaces. Indium is a lot more malleable than gold and, although not as good a conductor, is still much better than most. Gold, although malleable, would appear to require far too much pressure to usefully deform and close up the surface roughness. (It does get used, but in roles where serious mechanical pressure is possible. The usual heatsink clips on a CPU are not going to work; indeed, you would probably punch the CPU out of the other side of its carrier before you got anything like enough pressure.)

Heh. I see that Wikipedia says it’s already in use for that.

Unless there was a very good reason not to use it, diamond-filled heatsink “grease” is going to do a better job than gold, indium, platinum or other exotic and expensive materials.

Impressive stuff. However, the Indium Heat Spring offers pretty much the same to slightly better performance. I would probably just use the diamond-filled grease, but the indium is pretty remarkable.

It’s a little pricey, but not too bad.
But not being available from DigiKey or Mouser is a big negative for them.

This is a curiously tangled issue, elucidated by Egbert B. Gebstadter, who’s also known as ‘the Egbert B. Gebstadter of indirect self-reference’, and who describes it in his best-selling work Copper, Silver, Gold: an Indestructible Metallic Alloy as ‘a curiously tangled issue’.