Are the microchips in cars fundamentally different than the microchips in smartphones and game consoles?

A debate started in this thread about the viability of using recycled chips from PCs, smartphones and other advanced electronics to reduce car manufacturers' dependence on custom-built silicon.

So I figured I’d pose the question here.

Recently it’s made headlines that there’s a growing shortage of computer chips for the automotive industry. The big manufacturers have shifted almost all of their production lines to deliver chips to Apple, Samsung and the other major electronics companies at the expense of the auto makers who don’t represent a particularly important market segment in relative terms.

What specifically makes automotive chips so different and so much more demanding than smartphone or PC chips? Setting aside the software and firmware, which can be rewritten and recompiled, why couldn’t an automotive engineer with sufficient time and money re-engineer their platform to use a bunch of last-generation smartphone chips to power their ECUs, PCMs and infotainment systems?

Calculations are calculations. Bandwidth is bandwidth. Silicon is silicon. This stuff comes off the same fabs, so what’s so special about car chips?

Is that the reason? I would have thought that a car computer chip is much less complex than a chip for a laptop or a cellphone, so maybe it’s just a less profitable product when producers are facing excess demand?

Cars are designed 3 years before they go into production. The parts are validated and tested to withstand extreme environments, from the heat of Death Valley to Canada’s arctic zones. Crash testing is involved, not a minor thing. They must be reliable because people’s lives are at risk. Software development takes years and is often built on existing software. Nothing ports over cleanly, ever. A cell phone crashes and no one is hurt. A car crashes and people die and lawsuits are filed.

And that is just the beginning. Don’t get me going about government regulations.

Right. Your average PC chip doesn’t sit in a hot engine. Plus, if it fails after five years you are probably happy that you have to get a new PC to replace your antiquated one - but other things will fail first. Cars need to last much longer.

But there is a fundamental misunderstanding here of how silicon is designed. Chips in cars have to fail over smoothly, and they can’t have the kind of latency we see in PCs. They are real-time chips, while most PC chips (and phone chips) don’t have to be.
Plus, in a car every penny counts and you need your chips to do exactly what you need them to do, nothing more and nothing less. Standard chips might suck up too much power.
ASICs (Application Specific Integrated Circuits) are chips designed for a specific use. I’d suspect most car chips are ASICs. You can’t buy anyone else’s ASIC and expect it to do what you want - not that anyone would sell it to you.
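To make the real-time point above concrete, here is a rough sketch of the kind of fixed-cycle control loop an engine controller runs. Everything in it (the names, the numbers, the toy control law, the simulated timer) is invented for illustration - the point is the shape: sense, compute, actuate, kick the watchdog, all inside a hard deadline every cycle, with a safe fallback if the deadline is ever missed.

```c
/* Illustration only: a fixed-cycle "hard real time" control loop of the kind an
 * engine controller runs. All names and numbers are made up for this sketch; a
 * real ECU reads hardware timers and sensor registers instead of these stubs. */
#include <stdint.h>
#include <stdio.h>

#define CYCLE_TICKS 1000u   /* hard deadline: one control cycle per 1000 timer ticks */

static uint32_t fake_clock;                           /* stands in for a hardware timer */
static uint32_t timer_now(void)             { return fake_clock += 50; }
static uint16_t read_crank_sensor(void)     { return 3200; }   /* pretend RPM reading */
static void set_injector_pulse(uint16_t us) { (void)us; }
static void watchdog_kick(void)             { /* a real watchdog resets the chip if starved */ }
static void enter_limp_home(void)           { puts("deadline missed: limp-home mode"); }

int main(void)
{
    uint32_t deadline = timer_now() + CYCLE_TICKS;

    for (int cycle = 0; cycle < 5; cycle++) {         /* a real loop never exits */
        uint16_t rpm = read_crank_sensor();
        set_injector_pulse((uint16_t)(rpm / 4u + 100u));   /* toy control law */

        /* A late answer is a wrong answer: fail over rather than carry on. */
        if ((int32_t)(timer_now() - deadline) > 0)
            enter_limp_home();

        watchdog_kick();                               /* prove to the watchdog we're alive */
        while ((int32_t)(deadline - timer_now()) > 0) { }   /* spin to the cycle edge */
        deadline += CYCLE_TICKS;
    }
    return 0;
}
```

A PC or phone scheduler, by contrast, only promises best effort: a late frame or a dropped audio buffer is an annoyance, not a safety event.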
The conference I’m involved in has an Automotive Chip track, mostly about testing them, driven by semiconductor companies supporting the automotive market and the EDA companies supporting them. I don’t go to those talks since I’ve never worked in a company selling to car companies, but I can look up some of the papers.
The problem is partially a supply shortage and partially a screwup by the car companies, who cut their chip orders expecting the market to crash and then found they were wrong. You can’t fix that right away. When we were pushing a new processor through TSMC, it would take 2 - 3 months from the wafer start to first silicon. These chips are simpler and might take a bit less time, but not much less.

I work for a semiconductor company. Any chip destined for the auto industry gets special treatment on the manufacturing side.

It’s the only industry that has a 0% defects policy.

Which is insane but necessary.

I obviously should have waited for people who know what they are talking about to come along!

Can you elaborate? What special treatments? Chips are made of cobalt, copper and silicon, what pixie dust do they use to make them more durable?

My understanding is that the car manufacturers stopped buying chips due to the shutdowns, and the increased demand for computers and electronics of all kinds meant that their usual allotted chip fabrication time was bought up. And since every fab is running at basically 100% capacity right now, with no wiggle room, they can’t easily buy time to make more now that demand is going back up.

I’m not saying there are no differences, but I don’t think those differences are the problem. They just lost their slot.

Info from Linus Tech Tips, who asked the people directly involved in the manufacturing side:

I’m not an engineer so I can’t speak to that, but I can tell you that your description of what a chip is amounts to a gross oversimplification.

As far as the manufacturing goes, auto chips go through way more testing and they can only be run on select tools.

And again, 0% defects. With any other industry, say a printer company, it’s a given that about 1 to 2% of their chips are going to be defective.

Not so with auto chips.
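To put rough numbers on why a “consumer grade” defect rate doesn’t fly for cars: a modern vehicle carries a lot of chips (the thousand-chip figure below is just an assumed order of magnitude, not a quoted number), so even a small per-chip defect rate means nearly every car would ship with at least one bad part. A quick back-of-envelope calculation:

```c
/* Back-of-envelope only: the chip count and defect rates below are illustrative,
 * not figures from any manufacturer. Shows how per-chip defect rates translate
 * into the fraction of cars containing at least one bad chip. */
#include <math.h>
#include <stdio.h>

int main(void)
{
    const double chips_per_car = 1000.0;   /* assumed order of magnitude */
    const double defect_rates[] = { 0.01, 0.001, 10e-6, 1e-6 };   /* 1%, 0.1%, 10 ppm, 1 ppm */

    for (size_t i = 0; i < sizeof defect_rates / sizeof defect_rates[0]; i++) {
        double p = defect_rates[i];
        /* chance that at least one of the car's chips is defective */
        double p_bad_car = 1.0 - pow(1.0 - p, chips_per_car);
        printf("per-chip defect rate %.6f -> %5.1f%% of cars get at least one bad chip\n",
               p, 100.0 * p_bad_car);
    }
    return 0;
}
```

At a 1% defect rate essentially every car has a bad chip; you have to get down into the parts-per-million range before the numbers become tolerable, which is why the auto guys push for zero.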

One key point is that consumer electronic microchips and automotive microchips come out of the same fabs. There is a shortage of fabs. IIRC, rule of thumb is that it takes 2 years and a couple billion dollars to bring a new fab on line.

On the demand side, consumer electronics went up with covid. Work from home and remote learning have meant that PC demand has been at “high season” levels for the past 15 months and won’t abate for some time yet. And, mirroring what was seen with SARS, there are net new car buyers who are no longer taking public transport, plus everyone who was quarantined but now needs a car to get to work.

Yes, this is the confusing part. The fabs making the chips are the same. They aren’t fundamentally different products, so the question is why they aren’t more interchangeable.

The cause of the shortage, while interesting, isn’t the question.

I have no skin in this game, but it seems like @Voyager addressed this upthread:

Made in the same fabrication plant may not necessarily mean made to the same design or the same tolerances, which, if I’m understanding it correctly, is why they aren’t as interchangeable as you think they should be.

Using an ASIC is a choice. They could spec an ASSP or a standard IC. Obviously they won’t be interchangeable on the assembly line, but you can design for standard components, which adds a lot of flexibility. Is there a reason why choosing an ASIC is essential here?

Any application that does any serious volume is going to use an ASIC for cost reasons. You get exactly the capabilities you need without over-speccing a bunch of stuff.

A car could certainly be designed to use some off-the-shelf part (although probably an off-the-shelf part intended for the automotive market, not something that would run a smartphone), but you’d pay extra for those chips. Car manufacturing is an efficient and relatively low-margin business.

The internal architecture of automotive chips tends to be fundamentally different from what you might call common consumer gear.
Automotive applications are hard real time and safety critical. This means that designs for automotive use contain additional architectural features you don’t see in less critical applications: lots of hardware synchronising features, lots of communicating processors coded with interrupt-driven co-routining, hardware mailboxes and the like. On chip you get bus interfaces for a mix of CAN, CAN-FD, FlexRay, LIN and SENT, and again the communication and programming is hard real time.
The software development task is very exacting, and in general it is expected that operation can be described and validated down to the clock cycle.
The actual ISA is often a proprietary design as well. There is no point adding transistors for general purpose computing tasks.
Also there are many very small processors scattered about the vehicle. Body control (all the mundane stuff like door locks, mirrors etc) is managed by a host of tiny processors on various CAN busses.
The higher level car entertainment systems could probably be more general purpose. Indeed, by the time you add a web browser interface and GUI control, ARM, MIPS or x86 would all make sense. But these architectures are pretty much not seen inside the ECU, ABS, drivetrain, traction control, active safety and other safety-critical parts.
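For a taste of what talking on one of those buses looks like from software, here’s a minimal Linux SocketCAN sketch that sends a single classic CAN frame. It’s a userspace illustration only - a production ECU drives an on-chip CAN controller directly, under hard real-time constraints, rather than going through an OS socket - and the interface name and CAN ID are arbitrary (you’d typically test against a virtual vcan0 interface).

```c
/* Minimal SocketCAN example (Linux userspace), shown only to illustrate what a
 * raw classic CAN frame looks like. Interface name and CAN ID are arbitrary. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <net/if.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <linux/can.h>
#include <linux/can/raw.h>

int main(void)
{
    int s = socket(PF_CAN, SOCK_RAW, CAN_RAW);        /* raw CAN socket */
    if (s < 0) { perror("socket"); return 1; }

    struct ifreq ifr = {0};
    strncpy(ifr.ifr_name, "can0", sizeof ifr.ifr_name - 1);   /* e.g. vcan0 when testing */
    if (ioctl(s, SIOCGIFINDEX, &ifr) < 0) { perror("ioctl"); return 1; }

    struct sockaddr_can addr = { .can_family = AF_CAN, .can_ifindex = ifr.ifr_ifindex };
    if (bind(s, (struct sockaddr *)&addr, sizeof addr) < 0) { perror("bind"); return 1; }

    /* A classic CAN frame: 11-bit identifier, up to 8 data bytes. The ID also
     * acts as the priority during bus arbitration - lower wins. */
    struct can_frame frame = { .can_id = 0x123, .can_dlc = 8 };
    memcpy(frame.data, "\x01\x02\x03\x04\x05\x06\x07\x08", 8);

    if (write(s, &frame, sizeof frame) != sizeof frame) { perror("write"); return 1; }

    close(s);
    return 0;
}
```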

I don’t think this is as big a variable as suggested. People regularly overclock their PCs and run at 80-90C without issue. This is higher than what you’ll see in an engine compartment and is close to the oil temperature in a typical car.

You are not going to be able to get the functions needed in cars into standard parts without blowing up the space required. There are a few other factors. Chips are a lot more reliable than boards, so if you put a bunch of standard parts on boards you will have problems. @Grrr knows what they’re talking about in terms of reliability. Plus, you slow things down when you have to interface off a chip.
Not to mention that there is probably lots and lots of IP in these chips.
By ASICs I mean anything application specific. Most things these days are SoCs (systems on a chip) with a processor and a bunch of logic on one chip. I moved from ASICs to processors about 15 years ago so I haven’t kept up.

The heatsink on your PC chip radiates into a relatively cool environment, not that of a car engine.
The chips I worked on were way bigger and hotter than most PC chips. You did burn-in by reducing the amount of cooling, not by applying heat.

MIPS would be an odd choice for infotainment but thanks for the excellent post.

Hey, we tested our stuff a lot!
Do auto chips get fabbed using less aggressive process nodes for reliability? I worked with people at a national lab making satellites (read: nuclear weapons), and their fab produced rad-hardened chips a few generations behind the state of the art.