One trick I haven't mentioned yet: quality binning. The supplier makes a bunch of parts using its standard process, tests them, and sorts them by how well they test. The easiest example is resistors. Build a bunch of nominally 100 ohm resistors and their exact values will vary, because statistics. When you test them, resistors between 99.9 and 100.1 ohms get sold as 0.1% tolerance, 99 to 101 ohms as 1%, and 95 to 105 as 5%. So the auto industry could be using the same basic part as everyone else, but they need, and pay for, the 0.1% bin. Hard to scale that up if for every 0.1% part you need someone else to buy fifty of the 5% parts. On one project I got to compare a supplier's product sample against what they sold us once we started buying in bulk. The samples were about three times better, but it didn't matter, since we didn't need or use the extra quality.
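Binning is really just threshold sorting on the measured value. Here's a minimal C sketch using the bin limits from the example above; it's an illustration only, since real production test floors add guard bands, temperature control, and so on:

```c
#include <stdio.h>
#include <math.h>

/* Tolerance bins for a 100-ohm nominal part, tightest first.
 * Limits match the example above; not anyone's real test spec. */
static const char *bin_for(double ohms)
{
    double nominal = 100.0;
    double dev = fabs(ohms - nominal) / nominal;  /* fractional deviation */

    if (dev <= 0.001) return "0.1%";
    if (dev <= 0.01)  return "1%";
    if (dev <= 0.05)  return "5%";
    return "scrap";   /* outside even the loosest bin */
}

int main(void)
{
    double measured[] = { 99.95, 100.8, 103.2, 94.1 };
    for (int i = 0; i < 4; i++)
        printf("%.2f ohms -> %s bin\n", measured[i], bin_for(measured[i]));
    return 0;
}
```

The economics follow directly from the code: each part lands in exactly one bin, so every 0.1% part sold implies a pile of looser parts that someone else has to buy.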
Different project: test equipment for a company in the auto supply chain. Not a car manufacturer; we sold to a sub- or sub-sub-contractor of one. Somewhat custom equipment, teeny tiny volume, and each piece had to be customized identically. For a product with lots of circuitry that would be used for decades. We only convinced the customer to upgrade sometime after the "last time buy" of important chips. Three to five years can be FAST for some auto-supply-chain changes. Oh yeah, some CPU chips have write-once memory sections for bootloaders or security code. Bought a used one of those? No good, can't reprogram it.
One last "case study". The military had a preference at one time for COTS (commercial off-the-shelf). It kinda isn't COTS, though. It might look the same, but I've seen the aftermath of a few of those contracts: extra manual sections for repair, possibly down to the component-on-circuit-board level. COTS, but "oh, make it pass a few more tests". Etc. If auto companies have any similar "almost COTS but with extra requirements" rules, recycled parts would get downchecked there, and so would new parts that look the same but weren't tested as hard.
But isn't that the point? That due to safety, the chips for the auto industry need a more or less zero percent fail rate? And isn't that part and parcel of what is ailing the auto industry in the allocation of said chips?
Extra testing, near-zero fail rates, less money per unit for the mercenary foundries to produce these in anything resembling a hurry, especially given the lead times, and given that the ultimate fault lies with the auto manufacturers, who tried to recover their chip orders "just in time" without correctly assessing demand coming out of the pandemic?
I mean, it is what it is, but don’t cry for me, Argentina, when it comes to global auto manufacturers.
That won't happen for foundries. Foundries are very expensive. Even car companies have reduced the number of assembly locations to cut costs (all the lines in Australia closed, with production moving to larger plants elsewhere), and foundries are far more expensive than vehicle production lines.
The whole car industry put together wouldn’t be big enough to justify the construction of one extra foundry.
You mean it’s the only industry you work with that has a zero-tolerance policy for defects? Because if the auto industry has such a policy, then I’d be surprised if the aviation industry doesn’t also.
You’d think so. But some years back we were selling some EDA software, and I got a test netlist from a pacemaker manufacturer. It was a total mess, and I swore that I had better never have a heart attack.
Zero defects is a nice slogan but is never going to happen. Giving up performance and area the way they do for chips built for space probes (and nuclear weapons) is one way. The best way is to build a system that can tolerate a failing chip without the system itself failing.
The chips I once worked on wouldn't kill anyone if they failed, but they could cost our customers millions of dollars an hour. Things failed, but the system saved us.
AIUI, the space industry uses both approaches: the chips are more robust (rad-hardened), and they're also typically redundant, so that one failed chip can be outvoted by two or more healthy ones.
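That "outvoted" part is simple to show. Here's a minimal C sketch of a triple-modular-redundancy bitwise majority voter, the textbook form of the technique rather than any particular flight system's implementation:

```c
#include <stdio.h>
#include <stdint.h>

/* TMR voter: three copies of the same computation run independently,
 * and each output bit is decided by majority vote, so one flipped or
 * failed copy is outvoted by the other two. */
static uint32_t vote3(uint32_t a, uint32_t b, uint32_t c)
{
    /* Bitwise majority: a bit is 1 iff at least two inputs agree. */
    return (a & b) | (a & c) | (b & c);
}

int main(void)
{
    uint32_t good = 0xCAFE0042;
    uint32_t corrupted = good ^ (1u << 7);  /* one unit takes a bit flip */

    printf("voted = 0x%08X\n", vote3(good, corrupted, good));  /* 0xCAFE0042 */
    return 0;
}
```

Note the voter itself becomes the single point of failure, which is why real designs make it as small and robust as possible, or vote in multiple places.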
Good video here on random bit flips induced by radiation (typically cosmic rays):
You don't have to be in space. The CPU boards on our servers had a PROM loaded with, among other things, the altitude of the system. When we had a problem with some memory chips, there was a correlation between fail rate and altitude.
We’d also bring test chips that represented new processes to Los Alamos where you can buy time to zap them with a beam of some kind of radiation. We measured fail rates during and after the zapping.
The idea isn't 100% crazy, but you had better write extremely robust software if you're going to do that. Which they clearly didn't do. Probably the best idea is to have firmware interlocks instead, i.e., on a microcontroller. Then they can implement the required (potentially complicated) logic without exposing themselves to dumb bugs in the high-level software.
As a less consequential example, my graphics board has a software fan controller, but it’s implemented on a microcontroller. Even if the host system is totally bricked, the board can’t destroy itself by overheating, since the controller keeps functioning.
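For flavor, here's a toy C sketch of that kind of independent thermal interlock loop. The read_temp_c, set_fan_pct, and cut_power functions are hypothetical stand-ins for real sensor and PWM access (stubbed here so the sketch runs standalone); the point is that both the fan ramp and the hard shutdown live on the MCU, not in host software:

```c
#include <stdio.h>

/* Hypothetical hardware hooks -- stand-ins for whatever sensor and
 * PWM registers a real board's firmware would touch. The fake sensor
 * just reports a steadily rising temperature. */
static int  read_temp_c(void)    { static int t = 40; return t += 7; }
static void set_fan_pct(int pct) { printf("fan -> %d%%\n", pct); }
static void cut_power(void)      { printf("thermal trip: shutting down\n"); }

int main(void)
{
    /* Control loop that runs on the MCU regardless of host state:
     * proportional fan response plus a last-resort shutdown interlock. */
    for (int tick = 0; tick < 10; tick++) {
        int t = read_temp_c();

        if (t >= 95) {                 /* hard interlock, host can't override */
            cut_power();
            break;
        }
        if (t <= 45)      set_fan_pct(20);                  /* idle */
        else if (t < 80)  set_fan_pct(20 + (t - 45) * 2);   /* ramp */
        else              set_fan_pct(100);                 /* flat out */
    }
    return 0;
}
```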
Ford uses the Freescale (formerly part of Motorola) PowerPC because it's a CPU with lots of serial I/O support built in… oh well, I guess they could get an ARM CPU running as the EEC, but they used PowerPC first and stuck with it?
You are right about Ford. I actually remember exactly when they announced that tie-up. In the early 90s we were in the market for some new servers and were talking to Data General (remember them?) about machines using the Motorola 88k. Just at the time when there was some question about whether the 88k would succeed in the market, Ford announced they were going to use it for EECs. The 88k was a progenitor of the PowerPC (along with IBM's RISC), and the tie-in with Motorola flowed on from there.
However, in general, the car companies rely on the auto electronics companies for controllers.
Aside: Freescale is now part of NXP, which was itself spun off from Philips' semiconductor business.
“The Soul of a New Machine” was essentially required reading if you were in the tech industry in the early 80s. Also, our old Calma IC layout stations ran on a DG machine - lots of wire wrap under the hood.
I got one of the DG microprogramming guys on a panel I ran at a microprogramming workshop on Cape Cod. A bunch of others came also. They were distinguished by being the ones looking under Coke bottlecaps for prizes.
Great book. I was in grad school doing microcode research at the time, and seeing my field on the front page of the NY Times Book Review was awesome.