Did this happen (80's computers)

I’m a manufacturing engineer and I have worked in a few industries. This sort of thing is very common, and not just for chips. As others have said, the cheaper version of a product is commonly a premium unit with a defect that still leaves it useful to someone. And if there is a big order for the cheaper version that has to be met, better to ship some fully working premium units than miss a shipment.

Sometimes, thanks to us manufacturing engineers, yields get so good that we don’t have enough of the crappy ones to go around and almost everyone gets the good version. To prevent shenanigans from some of our more clever clients buying the low-grade version, and the ire of the other customers paying for the premium version, we have to intentionally cripple some units.

I used to work in the disk drive industry. In my day, drives commonly had three disks and therefore six heads. Lower capacity drives would be ones where testing showed bad heads or disks. Mechanically, they were identical. Some of the lower capacity drives had some clipped wires.

Anecdotal story of intentionally disabled features:

Back in the VCR days, companies sold a variety of models with varying levels of features. After our high-end (JC Penney) VCR failed we replaced it with a basic model from the same store, but kept the old remote control. We discovered the cheapo version had all the features of the high-end model when we used the older remote. The cheapo version was simply sold with a different (more limited) remote control.

I’ve seen a similar thing when disassembling consumer electronics - the circuit board behind the front fascia often has additional buttons that aren’t exposed through the front plastic.

I just wanted to highlight something that is often overlooked in these discussions: yield rates on CPUs are often horrible. It is not uncommon to have a 50% or higher failure rate off the wafer. It is amazing how much waste is created in the process of providing a stable end product.

This has been done for decades and is still done today. An example is CPUs that are tested at different speeds. The ones that fail at 3.4 GHz but pass at 3.0 GHz are sold as 3.0 GHz CPUs at a lower price. But after a few months of production, yields improve to the point that nearly all CPUs pass at 3.4 GHz. So some 3.4 GHz CPUs are locked at 3.0 GHz (by burning out the higher multiplier) and sold at the lower price.
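
To make the binning concrete, here’s a toy sketch (the names and thresholds are invented for illustration, not any fab’s real test flow):

```python
# Toy binning sketch: test each die at the top speed first, demote it on failure,
# and deliberately lock some good dies to the lower grade when orders for the
# cheap part outstrip the supply of genuinely slow dies.
def bin_cpu(passes_3_4_ghz, passes_3_0_ghz, need_more_cheap_parts=False):
    if passes_3_4_ghz:
        if need_more_cheap_parts:
            return "3.0 GHz (good die, multiplier locked)"
        return "3.4 GHz"
    if passes_3_0_ghz:
        return "3.0 GHz"
    return "scrap"

print(bin_cpu(True, True))                              # 3.4 GHz
print(bin_cpu(False, True))                             # 3.0 GHz
print(bin_cpu(True, True, need_more_cheap_parts=True))  # 3.0 GHz (good die, multiplier locked)
```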

Another example was a few years back when NVidia needed to compete in a specific price class that ATI was dominating. They didn’t have a product at that price, so they disabled one of the four graphics cores in software and sold the card at a lower price. Internet forums had step-by-step instructions on how to re-flash the graphics card with an unlocked software version and get the disabled core back.

Downgrading is all the rage. Many of the products found at big box stores are downgraded in some way. For many items it’s an actual cost saving for the manufacturer: for example, if you buy a lawn mower at Home Depot there may be no oil drain plug on the bottom. You’ll see the location molded into the pan, but it hasn’t been drilled or tapped. In other cases, where they can, they’ll retrofit the product to downgrade it just so they aren’t giving away something for nothing. I assume design engineers now consider the downgrade options in all products.

As for the floppies, it was common for people to punch the double-sided disks so they could be flipped over and used on both sides in a single-sided drive. Then the users would find a high error rate because, instead of a head pressing on the other side of the disk, there was a felt pressure pad rubbing dirt and dust against the media and chewing up the data. I do recall one manufacturer selling a double-sided drive in a cheaper version with just one side enabled via a jumper on the controller.

I’m reminded of a movie, I think called Moonlighting, about some immigrant workers in England trying to survive when their source of money had been cut off. Their leader managed to buy a used TV, but after he argued the seller down to what little he could afford, the seller pulled off the cord. Some people just won’t ever settle for what they consider a bad deal: if you want a lower price, you have to lose something from the product.

The company I dealt with often in the early 80’s scared the piss out of IBM by leasing an Amdahl mainframe to replace their IBM 370. (When they came back to the fold, years later, apparently they got a much sweeter deal from IBM).

Amdahl was an engineer who designed mainframes at IBM and started his own company to compete. Leasing mainframes was an incredibly lucrative business for IBM, especially the horrendous support charges. Amdahl’s company could not produce the full range of models and speeds that IBM could, so - according to that company’s senior systems analyst - the models were all essentially the same machine. If you wanted a cheaper machine, they actually inserted a board in the CPU that stole some extra clock cycles to do nothing, thus slowing the machine down. After all, the main cost wasn’t in producing the machine; it was the support and research costs associated with the whole thing.


Yeah, there were plenty of instances where the processor chips in PCs were identical but labelled differently. As mentioned, people would overclock earlier PCs: an 8088, 80286, 386, or 486 might be labelled as 16 MHz but actually able to run much faster. The warning was that some were simply labelled as slower because they failed when boosted to the higher speeds in testing. But sometimes the demand was higher for the cheaper, slower chips and they just labelled the good ones as slower.


My favourite story was the “software upgrade”. If you had a 386 without the 387, the floating-point math was done in software and was significantly slower. My friend actually had a copy of a fax ad: “turn your 386 into a 486”. It supposedly sped up games immensely. Reading between the lines, what it did was replace the math libraries (.DLL files) with ones that ran faster by calculating far fewer significant digits. For most people playing video games at the time (Doom?), floating point accurate to 4 digits on an 800x600 display was more than accurate enough. Since the cost of most of those algorithms grows faster than linearly in the number of digits, usually NlogN or N^2, half the digits meant roughly a quarter of the time per calculation. With screen displays requiring millions of such calculations per second, this made a big difference. Just hope you weren’t using it to calculate your bridge or building design or a large budget.
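
To put rough numbers on that claim (my own back-of-the-envelope, not anything from the ad itself): schoolbook multiplication of two N-digit numbers costs about N^2 digit operations, so halving the precision roughly quarters the work.

```python
# Back-of-the-envelope: schoolbook multiplication of two n-digit numbers takes
# roughly n * n single-digit multiplies, so half the digits ~= a quarter the work.
def schoolbook_multiplies(n_digits):
    return n_digits * n_digits  # one partial product per pair of digits

for n in (8, 4):
    print(f"{n}-digit operands -> ~{schoolbook_multiplies(n)} digit multiplies")
# 8 digits -> ~64, 4 digits -> ~16: about a 4x speedup for O(n^2) arithmetic.
```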

From a reliable source, I heard the following story. His lab had an IBM mainframe (this was in the 60s) with a card reader that read 60 cards/minute. They decided to upgrade to 90. The IBM technician came in and removed a cam that had been disabling one read in three. When they expressed surprise, he pointed out that the machine wore out faster if every cycle was enabled. All IBM equipment was rented in those days, so this made some sense.

I don’t think there was any difference between the 8086 and 8088 chips except that the latter had an 8-bit data bus (8 data pins) instead of a 16-bit one.

When the IBM AT came out (1984) it ran at 6 MHz, although the 80286 chip was rated for 8. IBM claimed they were being conservative and this would be more reliable. Whether or not that was true, a small cottage industry grew up replacing the crystal with one that would run the machine at 8 MHz. People did experiment with 10, 12 (and I think there was one magazine report of 14) MHz and it still ran fine.

I’m pretty sure this is the case the OP is talking about. It wasn’t in the 80s but the very early 90s. I wrote a short college paper about it for my Information Systems 101 class. Intel claimed that it was mostly because of low yields of chips with properly functioning co-processors, but everyone in the industry (this was pre-internet: columnists, magazines, etc.) knew this was total bullshit. PCs had suddenly become a consumer product and Intel was just using it as a profitable marketing strategy. It only lasted a year or two before the next gen of chips came out.

The only cases I know of from “the 80s” are the Intel SX/DX chips, which, as others noted, started out as legitimate repurposing of chips that failed coprocessor testing and later became deliberate downgrading, and an IBM laser printer. I want to say it was the 1020, maybe the 1080. In full-speed form it printed around 10 pages per minute and cost $2000; the 1020E printed five pages per minute and was just over $1k. The sole difference was the control PROMs, one set of which had wait states inserted to slow the print process while the other didn’t. There was a minor industry in cloned “non-E” PROMs at the time.

I used to work for a company that designed and built burn-in and test equipment for Intel to test their Pentium and Pentium Pro chips on. They did exactly what you described above: all the chips were identical, so they’d test them and basically label them in the speed category where they actually worked, according to our engineers.

What’s more, apparently as their processes got better, they’d intentionally label chips that could handle, say, 200 MHz as 166 MHz chips in order to have the right numbers to sell, if there weren’t enough chips failing the 200 MHz tests.

On a somewhat related note, the presenters on Top Gear sometimes report that a car manufacturer (Porsche, Jaguar, etc.) will sell you a stripped-down car for racing, and charge you a premium for it.

They’ll put less time, effort, and material into a product…and charge you more for it.

In many cases it’s most certainly not less time and effort to make a much lower-volume production run than the standard high-volume product. If a line is set up and tooled a certain way, a special will require more labor and unique attention. This is counterintuitive but true.

If the presenters on Top Gear made the definitive claim that the special racing version cost the car company less money, they were speaking from ignorance.

Yep, that’s what we did in the mid-80s to early 90s. Either that, or I just took scissors to them. You didn’t have to be particularly accurate with the cut; just make sure not to cut into the magnetic disk itself, and make sure the notch is big enough. I don’t recall any of us having issues with disk errors and the like using this method at the time.

I heard this warning at the time, but I used hundreds of “single” sided disks as double sided by punching out a notch on the other side and never had data errors on either side of any of them. The concerns about data corruption were very much overblown.

The only reason the 486SX existed was that Intel found a way to sell off defective chips instead of scrapping them. That’s not total bullshit in my book. Sure, they intentionally crippled some perfectly fine 486DX chips to turn them into 486SX chips once the SX started selling like hotcakes, but that’s just business.

Semiconductor yields really are abysmal, especially when you are pushing the limits of manufacturing technology.

I used to repair washing machines in the early 90s, and one particular range had 3 models in it - 800, 1100 and 1300 spin speeds. The sole difference between the 1100 and 1300 spin-speed models was a jumper on the speed control module that got cut to slow it down. Oh, and about £50.

We would explain this to customers when replacing that module on 1100 machines, and offer to leave it uncut.

I saw cases where that was the probable cause of read failures. It didn’t happen to me though. I imagine it wasn’t a problem if you didn’t let dirt and dust get all over your floppies in the first place, and in those cases the double-sided drive probably had just as many problems.

Low yields come when you are making very large parts at very aggressive process nodes. Big new processors have terrible yields, moderate-sized ASICs at older nodes have good yields, and greeting-card chips probably have great yields.
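
A standard back-of-the-envelope model (the textbook Poisson approximation, not something from this thread, with made-up numbers) shows why: yield falls off exponentially with die area times defect density.

```python
import math

# Poisson yield approximation: yield ~= exp(-D * A), with D = defects per cm^2
# and A = die area in cm^2. The numbers below are invented for illustration.
def poisson_yield(defects_per_cm2, die_area_cm2):
    return math.exp(-defects_per_cm2 * die_area_cm2)

print(f"Big die, immature process: {poisson_yield(0.5, 4.0):.0%}")   # ~14%
print(f"Small die, mature process: {poisson_yield(0.1, 0.5):.0%}")   # ~95%
```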

Most of the examples in this thread are variations on the basic idea of selling the same product to two different sets of customers: those who are willing to pay more, you sell to at a high price, and the price-sensitive customers you sell to at a lower price.

You can still make a profit on the lower-priced products, but if you sold to everyone at that lower price, you’d be missing out on a bunch of revenue from potential customers who would be willing to pay more.

Lots of industries have developed clever ways to deal with this. Airlines want to get the most dollars from business travelers who are not very price-sensitive, but they can still make money on personal travelers at cheaper rates. They separate the two with their “Saturday night stay” rule - if your round trip includes a Saturday night, the fares are much cheaper. It’s the same seat on the same plane, but the booking rule separates the customers.

Grocery stores do the same thing with coupons. Price-sensitive customers will go to the trouble of getting coupons together, but those who are not price-sensitive won’t go to the trouble.

All these strategies are the same idea: getting the biggest revenue from those willing to pay, while selling cheaper to those who aren’t.
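
To put made-up numbers on it: say 100 buyers would pay up to $300 and 300 would only pay $150. One price for everyone caps your revenue; two tiers capture both groups.

```python
# Invented numbers, just to show the revenue gap that price segmentation closes.
premium_buyers, premium_price = 100, 300
budget_buyers, budget_price = 300, 150

single_price_revenue = (premium_buyers + budget_buyers) * budget_price  # everyone pays $150
segmented_revenue = premium_buyers * premium_price + budget_buyers * budget_price

print(single_price_revenue)  # 60000
print(segmented_revenue)     # 75000 -- segmentation captures the extra $15,000
```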

This neglects the fact that we inherently produce imperfect parts. If you can find a market for parts that are slower than optimal, have a few broken cores, or need software support for floating point, you can turn parts that would otherwise be scrapped into cash.
For grocery stores the analogy isn’t coupons, it’s day-old bread or meat just about to expire. For airlines, the analogy would be if some of the fleet were degraded and took 20% longer to get to the destination; you’d sell tickets on those planes at a discount. And if those discount flights were in high demand, you might tell the pilot of a perfectly good plane to fly more slowly to meet it.