Do Companies Sometimes Sabotage Their Products Before Selling Them?

I remember reading somewhere about a computer chip manufacturer that sold both high-end and low-end processors.

The interesting thing, however, was in how it made the low-end processors. The company had only one manufacturing line, which produced the high-end processors, but some of these would be run through another machine that destroyed parts of the chip, rendering it less powerful, and the company would then sell these chips at a lower price.

Am I recalling this accurately? Are there other examples of manufacturers doing this?

If so, what is the economic explanation for why a company would deliberately damage a product in order to sell it at a lower price? Why not split the difference in price and sell only the high-end product?

Thanks.

Have you ever purchased a car? They are available with different trim levels, from econobox to luxury. In the case of a car, it’s probably simpler and cheaper to leave the air conditioning system off of the econobox trim level, but for a computer chip, it may be simpler and cheaper to start with the same processor and then disable part of the functionality.

As to why not just sell the middle range, presumably someone calculated that the profits would be maximized this way.

I’m guessing that what you’re thinking of in this case is a little bit different…

Chipmakers do, indeed, sell chips from the same batch under different specs at different prices. But this is only because of unpredictable defects that occur during manufacturing.

When a wafer full of chips is separated and tested, some of them will pass the test at the highest clock speeds the chips were designed for, and these can be sold at full price. However, some of those that fail the high-speed test will still pass at lower clock speeds, and these are sold as lower-speed processors (at lower prices). A quick look at a product list, however, shows only two differently priced processors, and a customer may be surprised to find out that both versions are “identical”.
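That binning flow can be sketched in a few lines of Python. The speed grades, test order, and die speeds below are invented for illustration; real test flows are far more involved:

```python
# Hypothetical sketch of speed binning: each die is tested at the
# highest target clock first, then at progressively lower clocks.

def passes_at(die_max_mhz, test_mhz):
    """A die passes a speed test if its actual ceiling meets the target."""
    return die_max_mhz >= test_mhz

def bin_die(die_max_mhz, speed_grades=(3000, 2400, 2000)):
    """Return the fastest grade the die passes, or None (scrap)."""
    for grade in speed_grades:          # highest grade first
        if passes_at(die_max_mhz, grade):
            return grade
    return None

# A wafer's dice have a natural spread of maximum speeds:
wafer = [3100, 2950, 2500, 2450, 2100, 1900]
bins = [bin_die(d) for d in wafer]
# dice that miss the top grade are still sellable at a lower grade
```

The point of the sketch is that the same test program sorts one production run into several sellable products, with only the slowest outliers scrapped.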

This seems very much like what you’re remembering. And to the more general question, I seriously doubt that any manufacturer damages their product before shipping it - after all the trouble it took to make it in the first place. They’d be much better off selling a few more high-quality products.

When it comes to chip fab, what I think you are referring to is not deliberate sabotage. The manufacturing processes for CPUs don’t produce the high end processors anywhere near 100% of the time. Some of them which won’t pass tests at higher clock speeds may work at lower clock speeds and are sold as such. Sometimes, they can mask off non-functioning parts of the chip and sell it as a lower end processor which doesn’t have the features they turned off. Cache generally takes a very large percentage of a modern CPU’s die space, for instance - if the cache fails, they may be able to mask it off and sell it as a lower-end chip with less cache.

People who know more about chip fab than I do will probably be along to expand on this.

They are not damaging the product to sell it at a lower price. They are removing value from it in order to encourage sales of the fully featured version at a higher price.

This isn’t an unusual manufacturer’s technique; it is often cheaper to make a single version of a chip, or related software, and then disable parts for the “entry level” product. You’ll also often find hacks on the web that allow you, if you have the resources and the nerve, to re-enable these parts.

Why would a manufacturer do this? You always need to remember that a company’s aim is not to make the best product, and not to sell the most product. Its aim is to maximize its total profit, and if selling a ‘sabotaged’ version increases the perceived value of the “non-sabotaged” version, then it may well maximize its profits this way. The company sells the full version at a higher price than the “entry level”, but the production costs are practically identical! This is a capitalist’s dream. It’s money for nothing!
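A toy calculation makes the logic concrete. All prices, costs, and segment sizes below are invented; the point is only that a cut-down SKU can beat selling the full version alone:

```python
# Toy price-discrimination model: identical unit cost for both SKUs,
# two customer segments with different willingness to pay.

unit_cost = 50
high_payers, low_payers = 1000, 3000   # willing to pay 300 and 120

# Option 1: sell only the full version at the high price --
# the low-paying segment buys nothing.
profit_single = high_payers * (300 - unit_cost)

# Option 2: same silicon, two SKUs; production cost is unchanged,
# and each segment buys the version priced for it.
profit_two_tier = (high_payers * (300 - unit_cost)
                   + low_payers * (120 - unit_cost))
# profit_two_tier exceeds profit_single by the low segment's margin
```

Under these made-up numbers the two-tier strategy adds the entire low segment’s margin on top of the single-price profit, which is why “splitting the difference” with one mid-priced product can be the worse move.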

Sadly I have no cite for this, but I’ve read that on some of the chips that didn’t pass testing, there are bits that are disabled or deactivated.

An example: Say my company makes a super-duper chip which is ultra-fast and has functionality A, B and C. We also make a slower chip which does just A. Now it turns out that some chips off the super-duper production line don’t quite come up to scratch speed-wise due to the unpredictable defects mentioned earlier. It’s easier to kill the B and C functionality (by removing pins from the chip, if it’s been designed well enough, or by some other easy process) and sell it as the slower version.

Why not simply sell a slower A, B and C chip? Probably marketing reasons; we want to differentiate the two product lines. Why not leave the B and C functionality active? ’Cos someone will notice and use it.

I could see companies doing the same thing even with non-defective chips. If demand isn’t enough to run a production line full time for the super-duper chips, it’s probably still cheaper to run it full time and just downgrade half the chips. The investment was in the design and the machinery overall, not really in raw materials per chip.

This is how I understand it, but I’m not really a hardware guy, so I could be waaaay off.

SD

How about printers? It’s pretty clear that your average printer today is not designed to last, unlike some of the older office printers that are still going. Is planned obsolescence the same thing as sabotage?

Actually, I think what the original poster might be thinking of is the 486SX processors, where they really did just intentionally cripple the chips.

It’s not helpful to mix discussion of hardware and software in this discussion, because software is a different case altogether.

With software, it’s not that they’re damaging the product once it’s done - it’s that they are offering different licensing options and deliver a package that implements the license you’ve paid for. A totally different case.

If you wanted to extend this to software, you’d have to come up with a scenario where a company went through the trouble of writing perfect code, then had a randomizer edit the code so as to introduce bugs, and then sell the “buggy” version for less money.

I don’t think this is going to happen, just as I don’t think that you can come up with a hardware example of the “manufacturer’s technique” that you claim to have knowledge of.

I think most board designers would disagree. An A-type processor that does “extra” things not in the spec would be almost as defective as one that didn’t do everything it was supposed to.

Besides, if only some of the processors that you’ll get in a batch support B and C, then you can’t very well design your board to support B and C, anyway. The complexity of circuitry to “notice” that B and C work on some specific chip and turn on the functionality is too mind-boggling to contemplate. There’s no value in leaving B and C turned on, and it would just make your A chips be out-of-spec.

I don’t know about the OP, but I was thinking Celerons.

I have heard of cases of products having a built-in lifetime, meaning that they are designed to break after a certain period of time so the customer is forced to buy a new one. Perhaps the most notable is the mobile phone industry; I’m sure if you do a quick search on Google you will come up with countless sites claiming intentional sabotage to shorten the lifetime.

Another case is that of the iPod: apparently the battery is designed to last for approximately 18 months (of course, how much you use and recharge it will affect this figure). The manufacturer will replace the battery for you but charges nearly as much as it costs to buy one in the first place. Of course it is nearly impossible to prove any of what I have said, but I thought I would share what I have heard.

There is some buzz about HP putting expiration dates on some of their printer cartridges, so that they stop working no matter how much ink is left in them.

Hmmm, I’m trying to say that the chips are completely different and used for two different things.

I’ll try again: let’s say I have two product lines, call them SuperChip and CheapoChip. One is high-end and used by one set of boards; the other is entry-level and used by a separate set of boards. The specs for SuperChip say that it fits a standard XYZ socket and provides functionality A, B and C, rated for speeds up to X.

The specs for CheapoChip say it also fits a socket XYZ and provides functionality A for speeds up to Y.

Boards that can take CheapoChip can also take SuperChip but shouldn’t use functionality B or C as they’ll not work with CheapoChip. You could plug a CheapoChip into a SuperChip board but it’d not work due to the lack of functionality.

What the world-in-general doesn’t know is that CheapoChips are just SuperChips that failed the speed testing and then had the B and C functionality deactivated, because B and C are something you only get on the high-end SuperChips, not something that everyone has.

How to disable the functionality? Well, if you designed this in from the start while designing the chips, you could arrange for the power to the B and C functionality to be separate from the rest, and provided via a separate pin. Removing this pin would cut power to those sections. I’ve no idea if that’s possible at the chip level, but there are other ways it could be done.

Does this happen? I dunno. Is it feasible? I don’t really see why not. Once again, I could be waaaay off, but I’m sure I’ve heard that this is done (486SXs being one case, as mentioned above).

Wow, I’m an old fart now, since I lived this stuff. The first 486SX chips were DX models that failed testing for the built-in math coprocessor. They would just laser out the connections between the portions of the chip. When the SX got popular they didn’t bother with the testing first; they just chopped out the coprocessor and sold them. When the Pentium II was selling, there wasn’t a good choice for a lower-model chip available. The Pentium was long in the tooth by that time, and the Pentium Pro was always the slightly incompatible bastard child of the product line. The solution was to laser out the cache on the Pentium II chips (again, first on the ones that failed cache testing, later on any they needed) and sell them as Celerons, keeping the price premium they were getting for the Pentium II name. To this day Intel keeps the details of the Celeron quiet so as not to cut into the top-of-the-line prices.*

Xizor’s example is also a good one.

*All Intel info was from rumours and trade rags at the time. Since I wrote it from memory it may or may not be entirely accurate. If you need to trust this information for any reason then please verify it from another source.

The firmware for the Canon Digital Rebel camera is essentially the same firmware as the more expensive Canon 10D…only some of the features have been “crippled”. (of course there are also hardware differences between the two)

A Russian programmer found that by switching a single byte in the firmware image, he was able to enable some of the 10D features…he did so and many people are using the hack.
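Mechanically, a single-byte patch like that is trivial to apply. The sketch below uses an invented dummy image and a made-up offset, and it ignores the checksum/signature validation that a real firmware hack would also have to deal with:

```python
# Illustration only: how a one-byte firmware patch works mechanically.
# The image contents, offset, and value here are all invented.

FEATURE_FLAG_OFFSET = 0x10   # hypothetical location of a model-ID byte

def patch_byte(image: bytes, offset: int, new_value: int) -> bytes:
    """Return a copy of the firmware image with exactly one byte replaced."""
    if not 0 <= offset < len(image):
        raise IndexError("offset outside image")
    return image[:offset] + bytes([new_value]) + image[offset + 1:]

# A dummy 32-byte "firmware image" standing in for the real file:
image = bytes(32)
patched = patch_byte(image, FEATURE_FLAG_OFFSET, 0x01)
# only the one byte at FEATURE_FLAG_OFFSET differs between the images
```

The hard part of such hacks is never the edit itself; it is finding which byte to change and getting the modified image accepted by the device.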

Not going to supply the links…but Google will provide more specific info for the curious.

Both Intel and AMD to this day cut down versions of their top chips to sell as budget CPUs. Celerons today are Pentium 4s with less L2 cache, a slower front side bus, and no hyperthreading. Celeron M chips are Pentium M with less L2 cache, and no speed step power saving tech. AMD does a similar thing with their Socket 754 Sempron chips - they are Athlon 64’s with less L2 cache and no 64 bit support. This lets the companies sell the chips that have some L2 cache that has gone bad.

You see a similar thing in video cards as well. Nvidia’s Geforce 6800 GT has 16 pixel pipelines, while its cheaper Geforce 6800 cousin uses the exact same chip, but with 4 of the pixel pipelines disabled. Using certain programs, you can sometimes unlock those 4 pixel pipelines, but sometimes they really are bad, and not just disabled for marketing reasons. You see the exact same thing with ATI’s X800 XT and X800 Pro cards; the X800 XT also has 16 pixel pipes, and the Pro version has 4 of them disabled as well.

Foible’s recollection matches mine. It started out as a way to improve yield, and became a way of selling processors at a lower price point.

I’m not aware of chips that are intentionally disabled for speed, but I believe that mainframes and microprocessors had switches to turn on higher speeds at one time. For ICs, it is all a matter of yield. Just like filling airplane seats, being able to sell extra chips on a wafer is like found money.

Since there is always a speed distribution, it is okay to sell lower speed chips from the distribution at lower prices, assuming there is a market for it. Many of the higher speed versions of a processor come automatically from process improvements, and some come from very minor changes to the logic that fix the slow paths between memory elements that cause the processor not to run faster. Often extra gates are put into a design so this can be done without a major mask change, which is expensive. Speed binning is not a matter of ignoring defects, but dealing with natural process distributions.

Now we do deal with defects - almost all large on-chip memories have extra rows and columns. The memory is tested, and bad rows and columns are replaced by spares through blowing fuses either by laser or electronically. There is even technology to do this in the field, every time the processor is reset, but that is used less often.
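A simplified software model of that row-repair scheme is below. All of the details are invented for illustration; real parts do the steering with fuse-programmed match circuits in hardware, not lookup tables:

```python
# Sketch of on-chip memory row repair: after test, the addresses of
# bad rows are recorded (by blowing fuses), and any access to a bad
# row is silently steered to one of the spare rows instead.

class RepairedMemory:
    def __init__(self, rows, spares, bad_rows):
        if len(bad_rows) > spares:
            raise ValueError("not enough spare rows -- chip is scrap")
        # "blow fuses": map each bad row address to a spare row index
        self.remap = {bad: rows + i for i, bad in enumerate(bad_rows)}
        self.storage = {}

    def _resolve(self, row):
        return self.remap.get(row, row)   # steer bad rows to spares

    def write(self, row, value):
        self.storage[self._resolve(row)] = value

    def read(self, row):
        return self.storage.get(self._resolve(row))

mem = RepairedMemory(rows=1024, spares=4, bad_rows=[7, 300])
mem.write(7, "data")   # physically lands in a spare row
# mem.read(7) still works even though row 7 itself is defective
```

A die whose bad-row count exceeds its spare count can’t be repaired and is scrapped, which is exactly the yield calculation the spares exist to improve.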

I’m in an R&D section in a company developing for cellphones. I haven’t heard of any sort of sabotage though (in Japan) it isn’t needed as new functionality is coming into the phones so fast, you have to upgrade once a year (and the phones are free anyhow.)

The people at Motorola that I have met all seemed nice, but I must admit I wouldn’t trust a Motorola phone to be properly put together in the first place, let alone be worth sabotaging. True, the buggiest phone I ever saw wasn’t a Motorola, but from #2 on they pwn bugginess.* NEC was quite solid. Siemens, Sony Ericsson, etc. all seemed to be not amazing, but not sabotaged.

My guess would be that the people developing cellphones for the US are just trying to achieve the same functionality as Japanese phones, but without the technical background or R&D to do it. Until the last two or three years, their biggest technical challenges were making sure a phone could always dial 911 and pondering the eventuality that someone might force them to make porting phone numbers to other carriers a reality. Then suddenly they had to fit color screens, Java VMs, Bluetooth, and MP3 into the same amount of space they had already filled completely just making a phone call.
So pretty much, US phones just aren’t made well and will die merrily on their own.

  • As of last year, before I moved to Tokyo. But they can’t have totally replaced their dev team since then, and probably RAZR works well (if it does) just because all the software devs coded around the bugs in the OS. …I am impressed they got something thin though.

Having been in the cellular industry at its inception, I can assure you that Motorola phones stunk even back when only American companies did cellular. It has nothing to do with their technology, and everything to do with the culture of quality.