Did this happen? (’80s computers)

But it’s not a “trick”, it’s just pragmatic business and very logical.

Some number of customers can’t afford the full power. This method is the simplest, most cost-effective way of not only getting them the power they can afford, but also giving them a simple mechanism to upgrade to more power in the future when they can afford it.

No mfg changes required. This happens naturally as yield limiters are eliminated.
However, I certainly buy that, knowing this would happen, they designed in features to allow them to sell parts at a slower speed/lower price point. And I definitely buy designing in upgrade paths. Our machines can be upgraded without even being shut down.

What I’m not seeing is a machine that is inherently at speed N from the beginning being sold at speed N-X from the start. By inherently speed N, I mean that almost all parts passing tests go into that speed bin or better.

I’m not clear about what your argument is, but I learned about price discrimination in introductory economics. More generally, the concepts of oligopoly and natural monopoly are core to the discipline.

Econ 101 != Market fundamentalism

I suppose I was using “Econ 101” as metonymous shorthand for “market fundamentalism”; of course the field of economics is capable of taking a more sophisticated view than the assumption that all industries act at all times as perfectly competitive.

IBM absolutely did and does this. I ordered processing capacity on AS/400s multiple times that involved no hardware changes to the system, just an SE performing some action.

Here are some current “capacity on demand” details (unlock capacity with an encrypted key):

Here’s another example, from 1999:
https://www.gartner.com/doc/304500/gets-expensive-performance-governor

Thanks for the IBM examples in response to Voyager; I also provided an example earlier that had nothing whatsoever to do with ICs and simply involved slowing down a printer with a module that introduced a null cycle.

I quite disagree, though, that this isn’t a “trick” but simply good business. It depends on the specific case and, of course, which side of the fence you’re on, but I tend to think of it as not just a “trick” but downright unethical to artificially cripple a product if such product is still profitable at the lower price, merely to provide an opportunity to charge a premium for the unaltered product. It makes sense when there are intrinsic differences, like chips that fail some performance or functionality tests, but otherwise it’s just exploitative, done in the interest of “not leaving money on the table”. It skews the balance of priorities from making the best products to making the best profits. No surprise that such practices were commonly associated with companies that had either near-monopolies in the general marketplace or were dominant in their niche markets.

There are other factors, though. In my business, almost all the chips we sell are crippled, except for a premium line with a handful of extra features that stay enabled.

Taken by itself, this looks pretty bad. But those features cost real money to develop, and the premium pricing allows us to pass those costs onto specific customers instead of the general public. The actual HW cost is trivial; the development is the expensive part.

Furthermore, these premium parts have extra software features. The HW acts as a kind of security token (“dongle”) for those SW features, which also have a high development cost. No one complains that software pricing is totally disconnected from the distribution cost (distribution is basically free whether you’re talking a $0.99 smartphone app or a $20k workstation product). It shouldn’t really be too surprising that the same could be true for hardware with a large software component under some conditions.

I personally tend to follow your philosophy more than the IBM philosophy. I tend to put myself in my customers’ shoes and create something I would like to use, with very good efficiency and utility. It feels good to create a good product (IMO).

And I do get frustrated when I have to deal with IBM or MS making these types of choices and having to deal with an inferior product.

But having said all that, I still wouldn’t call it a trick, just the nature of business.

I disagree with the latter claim, on the basis of college economics. I’m not saying that chip companies are never exploitative, merely that this doesn’t follow from what we know.

Consider an extreme case. Say it costs $1 billion to make a chip fabrication plant and another $1 billion in research. That’s $2 billion in fixed costs. Once that is paid, making a chip costs $5 in materials and labor. The company can charge one price for its product and attempt to recoup its fixed costs that way. Or it can charge more money for customers who value the product more and less money for customers who value the product less. If Intel limited the production of SX chips to those that were defective, they would end up charging more for them over time; the result would be fewer and more expensive low-end computers.

Essentially, you can spread fixed costs over a wider market or you can drop the costs of the chip for businesses and not serve the bargain market at all. I don’t see that as an attractive development.
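To put some rough arithmetic behind that, here’s a back-of-the-envelope sketch in Python. Only the $2 billion fixed cost and $5 marginal cost come from the argument above; every price and volume in it is an assumption of mine, purely for illustration:

    # Hypothetical figures from the argument above: $2B fixed costs, $5 marginal cost.
    FIXED_COSTS = 2_000_000_000
    MARGINAL_COST = 5

    def breakeven_units(price):
        """Chips that must be sold at a single uniform price to recover fixed costs."""
        return FIXED_COSTS / (price - MARGINAL_COST)

    # Uniform pricing: every buyer pays the same (assumed) $205.
    print(breakeven_units(205))  # 10,000,000.0 chips

    # Price discrimination: a premium (DX-like) tier and a bargain (SX-like) tier.
    # Assume 6M premium buyers at $300; the bargain tier sells at $105.
    premium_contribution = 6_000_000 * (300 - MARGINAL_COST)
    remaining_fixed = FIXED_COSTS - premium_contribution
    print(remaining_fixed / (105 - MARGINAL_COST))  # ~2,300,000 bargain chips cover the rest

The point is just that once the premium tier has absorbed most of the fixed costs, the bargain tier can be served at a price far below the single uniform price, which is what makes the low-end market viable at all.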

I can see how that can be true in specific cases. If it costs a lot of development money to create certain advanced features, but essentially zero incremental cost to build them into every product, then the strategy of disabling them for those buying the non-advanced version makes sense and is fair. But accusations of exploitation are far more credible when those dynamics aren’t there, such as any complex product with a lot of parts where manufacturing costs are significant – like traditional mainframe processors or printers, two of the examples mentioned.

The idea of a hardware feature as a security token for software features is an interesting angle I hadn’t thought of – but it sounds like a fairly special case.

Your “security token” comment reminded me of something that DEC did once, but note the fundamental difference. They developed a full-fledged FORTRAN IV compiler for the PDP-8 minicomputer, which was an amazing thing for a tiny little minicomputer to have. The hitch was that it would only work if you had the optional Floating Point Processor. But this wasn’t an artificial marketing restriction, it was a technical one. The FPP added a vast new instruction repertoire to the PDP-8, not just floating-point instructions but a big suite of powerful general-purpose instructions. It cost a good deal more than the PDP-8 itself and in effect turned it into a far more powerful machine with a completely new architecture.

Yet even there, despite the technical necessity for the FPP, DEC adopted a customer-friendly policy. They soon developed an FPP emulator which was shipped free with the FORTRAN IV system, removing the FPP hardware restriction. The compiler still generated FPP code, but it could now run under the emulator.

It all depends on how and why those pricing decisions are made. Some of the examples given suggest that the price charged for the “crippled” product still covers its development and manufacturing costs, and that the premium version is differentiated purely as a cash grab from price-insensitive market sectors. This is a classic behavior of oligopolies.

Well, yes. What they are doing is capturing a bigger proportion of the area under the demand curve. Consumers are benefiting from this state of affairs since they purchase the products voluntarily. But the producer is capturing a bigger slice of the pie.

Still, I’m not reaching for pitchforks. The fact is that this is a high fixed cost/low marginal cost industry. That creates barriers to entry and oligopoly, but it also implies that the sorts of prices that occur under perfect competition (P = short-run MC) are not sustainable, as they won’t cover the costs of building the fab plant. Furthermore, Intel does have meaningful competitors, unlike some companies during some spans of time.

It might be interesting to calculate what share of Intel’s revenue was bled off in the form of excessive executive compensation and supernormal profit, however defined. I suspect a lot of it was captured by researchers and executives outside of the board room though. There’s a lot about corporate America that irks me, but this isn’t one of those issues. Maybe it should be.

I don’t remember the company’s name (they’re undoubtedly not around anymore), so I can’t find a link just yet, but I remember reading about a graphics card company in the ’90s that was involved in a scandal of sorts. Long story short, back then all PCs had separate video cards, and 2D and 3D capabilities were becoming all the rage. A third party created an industry-standard benchmark program you could run on a PC that would give you some idea of the video card’s performance. Well, somebody got the idea to hard-code elements of the test program into their company’s video card. IOW, they made it so the specific industry test app would run really fast, but otherwise it wouldn’t perform any better in the real world than any other comparable video card.

And they got caught, by I believe a fairly well known (at the time) PC columnist/programmer.

There were similar shenanigans with some disk drive companies a couple of decades ago.

I won’t name names, but that story is definitely true (the columnist in question worked on my team). Still, it was pretty much par for the course in those days, and I’ve heard worse stories. Like the one that “optimized” a benchmark by only rendering every other frame. The benchmark had a frame counter that might have caught this, but the company cleverly did not skip rendering on just that part of the screen covered by the counter.

Standards for behavior are much better these days (in our segment), but plenty of shenanigans still occur in other areas.

Within the past year, one mobile device manufacturer (who shall remain nameless) was caught overclocking their chips based on an executable name matching a particular benchmark. The device would have overheated at these clock rates (not to mention draining the battery in short order), but the vendor knew the benchmark didn’t run long enough for this to be a problem.

Well, the silicon costs a few cents, so it doesn’t much matter if they give away extra copies of a 486 CPU… it’s just an upgrade fee.
The simplest way to explain these things is that they are selling the license to the feature. This pays for the R&D of the feature.

But the Intel idea was that the old CPU remained in place, so that they sold more CPUs… the upgrade to the 486DX didn’t result in a 486 being freed.
Watchguard were selling firewalls with crippled ports. The ports were there but unusable until you paid for more. They seem to have stopped that, perhaps because it seems too obviously scammy (even if it isn’t). They still sell performance/spec upgrades with no hardware change.

It happened again just last year with mobile phone benchmarks.

septimus and wolfpup: for those of us who grew up on DOS, can you explain your stories?

You mean about the PDP-10 UUO? Not much to explain. The PDP-10 had 64 unimplemented opcodes, all beginning with “0” (opcodes 000 through 077 octal), which the hardware trapped to specific locations, also saving the address where the UUO had been executed. In that sense it was like a subroutine call – it performed a specified operation and then returned control to the caller – except that the upper 32 UUOs trapped to executive space, i.e., the protected address space in which the OS was running (the PDP-10 OS was referred to as “the monitor”). The address part of the UUO pointed to an argument list whose contents depended on the UUO’s function.

So what you had was a structured way for a user program to transfer control to the monitor, request system functions, and pass and receive arguments – i.e., an implementation of system calls. For instance, in my examples, “LOOKUP” was the mnemonic for a system call that opened a file and made it available for reading or writing. “TTCALL” was a generic UUO class for performing terminal I/O functions – things like read a character from a terminal, read a line, output a character, output a string, etc.
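In case it helps to picture it, here’s a loose Python sketch of that trap-and-dispatch idea. It’s purely illustrative: the opcode numbers, handler behavior, and argument block are made up, not the real PDP-10 assignments.

    # Toy model of the UUO mechanism: an "unimplemented" opcode traps, the
    # monitor dispatches on it, and control returns to the caller.
    MONITOR_UUOS = {
        0o040: "LOOKUP",   # stand-in for "open a file for reading/writing"
        0o051: "TTCALL",   # stand-in for the terminal I/O class of calls
    }

    def monitor_dispatch(name, args):
        # In the real monitor each call had its own semantics; this just echoes.
        return f"{name} handled with args {args}"

    def execute_uuo(opcode, effective_address, memory, pc):
        """Simulate the trap: save the caller's PC, run the monitor routine,
        then 'return' to the instruction after the UUO."""
        if opcode not in MONITOR_UUOS:
            raise NotImplementedError("ordinary instructions not modeled here")
        saved_pc = pc                                # where the UUO was executed
        args = memory.get(effective_address, [])     # address part -> argument list
        result = monitor_dispatch(MONITOR_UUOS[opcode], args)
        return saved_pc + 1, result                  # control goes back to the caller

    # A LOOKUP-style call whose address field points at an argument block.
    memory = {0o200: ["DSK", "FOO", "BAR"]}
    print(execute_uuo(0o040, 0o200, memory, pc=0o1000))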

There were quite a lot of these system calls in total, arranged in a fairly complex, roughly hierarchical structure, such that the system call documentation alone comprised two large volumes. Compared to today’s operating systems, the PDP-10 monitor in many ways had a similar level of complexity, but because of the cost of memory and processor cycles it was orders of magnitude more compact and efficient. Microsoft should be embarrassed that I can run a PDP-10 emulator under Windows that can seamlessly timeshare dozens or even hundreds of jobs, yet Windows itself can’t even properly timeslice between two applications.