Did this happen (’80s computers)

I heard an apocryphal story about odd business practices during the ’80s and the first wave of PCs. Can anyone tell me 1) if this story is true, and 2) if so, who it’s about?

Basically, a company was producing nice computers and selling them at luxury prices with huge markups. They decided there was a need for a “budget” model to compete. They didn’t want to lower the margins on their existing product, but designing and producing an entirely new budget chip was deemed too much fixed cost. In the end, they came up with an ingenious solution: they doubled production of their chip, took half the run, burned out a big chunk of the circuits after production, and sold the damaged units as the “budget” model alongside the so-called “high-end” model with the original chip.

It wouldn’t surprise me. In the semiconductor industry, a company will sell a chip that can do A, B, and C, and sometimes a customer will call and say, “Hey, we only need a chip that can do A and C. Can you sell us some chips like that at a discounted price?”

“Sure!” says the company. What they’ll do is take those same chips, disable the “B” part, and sell them at the discounted price. (Even though leaving the B part in would have no effect on the customer.)

It actually costs the semiconductor company MORE to produce those discounted chips.

I don’t know anything about that particular story, but the general gist of it is commonplace.

For a while, Intel was actually offering software upgrades for the cache on some of their processors. That’s right–a big hunk of cache was sitting there idle until you coughed up the $40 or so for an unlock code.

Where I work, we sell the same chip at various price points with some functional units disabled at the lower end. Ostensibly, the disabled units are defective–we increase yield by selling partially defective chips for lower cost. However, the defective unit rate is not perfectly matched with the demand for the chips at their different price points. As such, we sometimes sell chips with perfectly good functional units that have been disabled via fuse.

To be fair, the lower end chips are also put on cheaper PCBs with less thermal and power capacity, so it’s not like you could use the units anyway if you could somehow bypass the fuse. Still, it’s annoying to have to throw away perfectly good hardware.
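
To put rough numbers on the supply/demand mismatch, here’s a toy sketch in Python (all figures invented, nothing from our actual product line):

[code]
# Toy model of down-binning. All numbers are hypothetical.
wafer_output   = 10_000  # chips produced per week
defect_rate    = 0.12    # fraction with one bad (but fusable) functional unit
demand_low_bin = 3_000   # weekly orders for the cheap, units-disabled part

defective = int(wafer_output * defect_rate)      # natural low-bin supply
shortfall = max(0, demand_low_bin - defective)   # filled with good chips

print(f"defective chips sold as low bin : {defective}")   # 1200
print(f"good chips fused down to low bin: {shortfall}")   # 1800
[/code]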

Well, the Intel Celeron and Pentium 4 CPUs were very similar. The cheaper Celeron had less cache and ran at slower clock speeds. They cost about the same to make; however, Intel could price the Pentium 4 at a premium.

Also, CPU makers tend to be conservative with clock speeds. That’s why some people overclock their computers.

I’ve heard that batches of computer memory are tested, and each batch is assigned a speed rating depending on how well the test goes.
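
That’s speed binning. The sorting logic is roughly like this Python sketch (the grade labels are real SDRAM speed grades, but the thresholds and test flow are simplified guesses):

[code]
# Simplified speed binning: label each part with the fastest grade it passes.
GRADES = [(133, "PC133"), (100, "PC100"), (66, "PC66")]  # MHz, label

def bin_part(max_passing_mhz):
    for speed, label in GRADES:
        if max_passing_mhz >= speed:
            return label
    return "reject"

print(bin_part(110))  # passes 100 MHz but not 133 -> "PC100"
[/code]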

I doubt that any manufacturer would go through the trouble of disabling a product so as to sell it cheaper. The disabling would cost more money.

Closest verified example I can think of is the Intel 80486SX processor, which was an 80486DX with the floating point unit (FPU) disabled, and sold at a lower price. Supposedly they were chips with defective FPUs, however.

Yes, it is true. It still happens today on both the hardware and software side. I am dealing with an example on the software side at work right now, but it would be too obscure to be of any meaningful relevance. The reasons for it are fairly obvious. It is much cheaper to manufacture a chip or piece of software that can do everything and then selectively disable some features based on pricing than it is to build completely separate versions based on price. The cost to manufacture individual units is extremely low once the infrastructure is set up (in the case of a software disable, it is essentially zero after the first one). It is building that infrastructure in the first place that is extremely expensive.

It certainly happens. The disabling part is almost free–chips these days have burnable fuses that can totally disable a subunit. If the bits weren’t disabled, people would just buy the cheaper units instead of the higher end ones.
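
Conceptually it’s just a word of one-time-programmable bits that the rest of the chip checks. A minimal sketch in Python (the fuse layout here is invented; real parts differ):

[code]
# Invented fuse-word layout: each set bit permanently disables a subunit.
FUSE_FPU    = 1 << 0
FUSE_CACHE  = 1 << 1
FUSE_CORE_3 = 1 << 2

def enabled_units(fuse_word):
    units = []
    if not fuse_word & FUSE_FPU:    units.append("FPU")
    if not fuse_word & FUSE_CACHE:  units.append("half the L2 cache")
    if not fuse_word & FUSE_CORE_3: units.append("core 3")
    return units

# Budget SKU: same die, FPU fuse blown at the factory.
print(enabled_units(FUSE_FPU))  # ['half the L2 cache', 'core 3']
[/code]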

At one point in the past, my company used firmware instead of fuses to disable units. There was the usual batch of hacker types who figured out how to re-enable the units so they could save a few bucks on HW. But they weren’t the real problem; that was the Chinese businesses who would unlock chips and then sell them as the more expensive versions. Never mind that some of these chips were defective, and obviously there was no quality control on the part of the Chinese companies. So now we use hardware fuses that can’t be reversed.

This is true, and it even predates the strategies used with integrated circuits. Back in the ’60s, IBM produced a line printer (I can no longer recall the model number) that could print either 3 lines per second (the premium model) or 2 lines per second (the economy model). The only difference was that the “economy” model had an extra logic module, installed at extra cost to IBM, whose sole purpose was to cause one of the three print cycles to be a null cycle. So the top-end printer, in printing three lines, would go “print-print-print.” The cheaper, otherwise identical one would go “print-print…wait…print.”
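
The throughput math works out to exactly the 3:2 ratio described; here’s a quick Python sketch of the two timing patterns (numbers taken from the story above):

[code]
# Premium model prints every cycle; economy model nulls every third cycle.
def lines_printed(cycles, economy=False):
    return sum(0 if economy and i % 3 == 2 else 1 for i in range(cycles))

print(lines_printed(30))                # 30 lines: print-print-print...
print(lines_printed(30, economy=True))  # 20 lines: print-print-wait...
[/code]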

The first PC I ever actually owned was a 25 MHz 486. I turned it into a 33 MHz machine by changing the clock setting on the motherboard. Overclocking doesn’t always work, but at that time virtually all so-called “25 MHz” chips were capable of at least 33 MHz.

During the late 1970s, Exsysco (later National Advanced Systems) manufactured a clone of the IBM 370/158 mainframe and sold them for half a million dollars or so. Two models were offered, with a large price difference. I was assured by FEs (field engineers) that the expensive upgrade from one model to the other consisted of removing a single wire.

I imagine some of the FEs went into business for themselves, removing the wire for a much lower fee. :wink:

IIRC, IBM did the same with some of their mainframe computers of the era, like the IBM 360. That machine was available in many models, with various speeds and various functions enabled or not. I think they did the same thing for at least some of those, selling more expensive hardware that simply had some functions present but disabled.

Control Data Corporation (CDC) did likewise. The CDC 6400 has an instruction called “Exchange Jump”, with roughly the same purpose as the INT instruction of the 80x86 chips: It enabled user-mode programs to send messages to the operating system, to request various system services. A single wire on the backplane, by its presence or absence, enabled or disabled this. Without it, system requests were done by a different method that was rather inefficient and tended to slow things down. Customers paid a premium to get the model with that single extra wire installed.

Even landlords did similar things with the apartments they rented. If a landlord had a two-bedroom apartment available and a prospective tenant came looking for a one-bedroom, the landlord might nail one bedroom door shut and rent the rest of the unit as a one-bedroom. I was offered such a unit once, but I didn’t take it, for unrelated reasons.

At one of my jobs, we purchased chips that were technically specced to run at a higher speed than we needed. By contract, we agreed never to run them at the top speed. This allowed the vendor both to use a looser verification process and to sell off their corner cases that failed at the top speed but worked the rest of the time.

Originally the 8086 was the CPU and the 8087 was the floating point coprocessor. Intel followed this numbering pattern for later chips, so the 286 (80286) was the CPU and the 287 (80287) was the coprocessor, and the 386 was the CPU and the 387 was the coprocessor. The 386 came in an SX and a DX version. The SX version was 32 bits internally but used a 16 bit external bus, which made it slower but could be used on cheaper systems.
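
The cost of the narrower bus is easy to see in bus cycles. A back-of-the-envelope sketch in Python (deliberately simplified: it ignores wait states, prefetch, and alignment):

[code]
# A 32-bit transfer takes 1 bus cycle on a 32-bit bus, 2 on a 16-bit bus.
def bus_cycles(num_bytes, bus_width_bits):
    bytes_per_cycle = bus_width_bits // 8
    return -(-num_bytes // bytes_per_cycle)  # ceiling division

print(bus_cycles(4, 32))  # 386DX: 1 cycle to move a 32-bit word
print(bus_cycles(4, 16))  # 386SX: 2 cycles for the same word
[/code]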

With the 486, Intel also named them SX and DX, but instead of a data bus difference, the performance difference was whether or not the chip had a working floating point coprocessor in it. The SX was the cheaper and less powerful version without the coprocessor. Originally, the SX was just a DX whose floating point processor failed its test during manufacturing. They would sever the connections to the floating point processor and label it as an SX. You could then buy a separate 487 floating point processor if you wanted to upgrade your system later.

While they called it a 487 coprocessor, and sold it with the 486SX as if it were another x86/x87 pair, the 487 was actually a fully functional 486DX. It had an extra signal in it that disabled the existing 486SX when you installed it. So instead of it being a coprocessor it was actually the complete processor.

Later, Intel sometimes crippled perfectly functional DX chips because they had more orders for SX chips than they could fill.

The same sort of thing is still done today. One of my PCs here has a 3 core processor which is actually a 4 core processor with one dead core. When they made the 486SX they physically severed the coprocessor’s connections to the outside world, but with modern multi-core processors they just disable it by setting bits inside the processor. The thing is, while this allows the chip maker to sell off a lot of “bad” chips, they also often end up with more orders for the cheaper chips than they have available to deliver. So what they sometimes do is intentionally disable a perfectly functional core and sell it as a cheaper version. People have gone in and re-enabled the supposedly dead cores and have found that sometimes they still work, which gives them a better chip than what they paid for.

Well, my AMD CPU has four cores, but it was sold as a one- or two-core part, cheaper than a guaranteed four-core, with the implicit promise that it could be unlocked at your own risk.
Unfortunately, in this case my Linux can’t see more than one core. That is because Microsoft extended the open-standard ACPI routines, as was proper and customary for Microsoft, to make them Microsoft-friendly*, and although the BIOS unlocked the cores fine, the openSUSE installation (even in text mode) stalls unless one disables certain options in the BIOS. Which is down not to Linux or Microsoft, but to ASUS, the motherboard manufacturer.
Anyway, AMD sold the CPU with parts disabled. Which is actually a good thing; better than only having one premium unit for sale.

*One thing I find myself wondering about is whether we shouldn’t try and make the “ACPI” extensions somehow Windows specific.

It seems unfortunate if we do this work and get our partners to do the work and the result is that Linux works great without having to do the work.

Maybe there is no way to avoid this problem but it does bother me.

Maybe we could define the APIs so that they work well with NT and not the others even if they are open.

Or maybe we could patent something related to this*
William Gates III — Comes Trial

Interesting thread, conceptually, in economics in general. About the 486 (and nice post, ecg), I too remember that brouhaha over a defective release; it was significant culturally for being, if anything, one of the first complex technical issues about computers that people like me, who had no idea of its significance, were nonetheless supposed to have an opinion about.

Now those things come far more often.

[Perhaps that’s not clear,* so I’ll give a similar case, as an anecdote: I remember beginning to negotiate with a Steinway salesman and asking him if the problem with the new bushing material had been attended to. Bushings are the little pieces of material that circle each hammered-down tuning pin holding each string. I had no idea what they did one way or the other, but it was, supposedly, a thing of concern at the time. (He immediately saw I had no idea what I was talking about, and assured me everything was a-ok.)]

ETA: *The wretched grammar in my first sentence is one for the books, I must say.

Single-sided and double-sided 5 1/4 inch floppy disks were identical; the only difference was that the double-sided ones had both sides tested before leaving the factory and had a square cutout to flag that to the disk drive. You could buy a device like a nail clipper to convert a single to a double.

That wasn’t true initially, but once double sided disks became popular it became cheaper for them to have just one assembly line and they did it exactly like you said.

I only ever saw a disk hole punching tool once. Most of us just used a regular hole puncher like this:
http://everydaylife.globalpost.com/DM-Resize/photos.demandstudios.com/getty/article/77/204/87748289.jpg?w=600&h=600&keep_ratio=1&webp=1

The nice thing about the professional tool was that it automatically punched the notch in exactly the right place with no measuring and no fuss. If you used a hole punch then you had to figure out exactly where to put the notch yourself, but that was easy enough to do. You could eyeball it just by looking at the disk label.

Ahhh 5 1/4 inch floppies.
[/nostalgia]

Yes, scr4 has it right. And it was sold as such. Few needed the FPU in those days. Most software did not take advantage of it.

I remember doing that at college. A quick and dirty way to get double-sided without punching an extra hole was to open up the end of the floppy casing, take the disk out and put it back in the other way up.

We got away with doing this with our bare hands - I guess data densities were low enough that the odd slight fingermark wasn’t fatal.

There were some AMD Duron chips that were the same as the equivalent Athlon. They had a collection of little gold pads on the top surface; on the Athlon, some of these were linked by lines of conductive paint, and on the Duron they weren’t. You could upgrade your processor by drawing a pencil line on the chip (graphite is conductive).

Except… it didn’t always work, because (as others have said upthread) the chips are often tested and binned at production: those that are capable of performing as Athlons get the bridge painted on them; those that are not are sold as Durons.