Can innovation become so rapid that it destroys capital incentives?

When I got my first MP3 player at Christmas 2003, it cost $119 (marked down to $99), held 128MB (roughly 32 songs), and did nothing beyond playing MP3s and transmitting over FM.

In the summer of 2008 I bought a 4GB model with a 2GB expansion card for $50. It had a video function, so I could watch movies on a tiny screen.

So over the course of 4.5 years, both the capacity and the price changed dramatically.
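Back-of-the-envelope, using just those two data points (my own purchases, so take them for what they’re worth):

```python
# Rough cost-per-megabyte comparison of the two players described above.
# Prices and capacities are the ones from my own purchases; treating the
# 4GB player + 2GB card as 6GB total.

price_2003, mb_2003 = 119.0, 128          # Christmas 2003: $119, 128MB
price_2008, mb_2008 = 50.0, 6 * 1024      # Summer 2008: $50, 4GB + 2GB card

per_mb_2003 = price_2003 / mb_2003        # ~$0.93 per MB
per_mb_2008 = price_2008 / mb_2008        # ~$0.008 per MB

improvement = per_mb_2003 / per_mb_2008   # ~114x cheaper per MB
annual = improvement ** (1 / 4.5)         # ~2.9x improvement per year

print(f"2003: ${per_mb_2003:.3f}/MB, 2008: ${per_mb_2008:.4f}/MB")
print(f"{improvement:.0f}x cheaper per MB, about {annual:.1f}x per year")
```

That works out to storage getting roughly 114x cheaper per megabyte, or about 2.9x per year, compounded.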

Once wind power became cost-competitive with coal at about 4 cents a kWh, wind investments took off. However, other wind technologies like low-velocity turbines, high-altitude turbines, and vertical turbines claim they can produce energy for even less, around 2 cents a kWh. A company called KiteGen claims that once they get off the ground, wind power at a tenth of a cent per kWh is possible.

So why invest in a new technology if you know it is going to take 5 years and hundreds of millions in capital to bring your product to market, and that by then it will be obsolete or too inefficient to make a profit? Even if it takes your competitors another 5 years, that still leaves a very narrow window in which to recoup your investment and make a profit.

If concept A is obsolete within 3 years, there is effectively only a 3-year window in which to recoup the cost of bringing that product to market. After those 3 years, the new product will be on the market and render yours obsolete.
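To put rough numbers on it (everything here is hypothetical: the capital, the discount rate, and the windows are made up for illustration):

```python
# How much annual gross profit a product must generate to recoup its
# development cost before obsolescence. All figures are hypothetical.

def required_annual_profit(capital, years, rate):
    """Annuity payment that repays `capital` over `years` at discount `rate`."""
    return capital * rate / (1 - (1 + rate) ** -years)

capital = 300e6   # $300M to bring the product to market (made up)
rate = 0.10       # 10% cost of capital (made up)

for window in (3, 5, 10):
    p = required_annual_profit(capital, window, rate)
    print(f"{window}-year window: need ${p/1e6:.0f}M in profit per year")
```

With those made-up numbers, shrinking the window from 10 years to 3 roughly two-and-a-half-times the profit you need to clear every single year.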

Blu-rays are starting to replace DVDs, which have only been mainstream for 10 years. That is still a large window, but the windows seem to be shortening: a technology becomes obsolete or too slow within a few years of release. Blu-rays will likely be replaced by holographic DVDs in a few years.

So what happens to the incentives of capital to invest in all these groundbreaking advances to bring products to market? Do they lose interest and see everything as too short lived to make a return and a profit? Has this already started happening? Will it get worse over the next 100 years as advances come quicker and quicker?
Advances come quickly in information technologies, but so do advances in energy and medical technology, so those fields may face the same problem.

The initial groundbreaking product does cost more, but it can be priced much higher for the early adopter market. Once you are in the market and have the experience of creating a product, a lot of the advances are incremental and not very costly. For the MP3 player, the increase in capacity comes from better flash memory technology, which you don’t even have to invest in. If you can keep being first, you can make a lot of money, as Apple has.

“Me too” products don’t command the same prices, but they are much cheaper to design since you already know what works.

The real danger is that you invest a ton of money in something that doesn’t take off, like Betamax. You can also be slow at developing a potential winner, get beaten to market, and thus be unable to command high margins on introduction.

It makes life interesting, but the possibility of high returns will keep the capital flowing. After all, no one knows how long it will take for holographic DVDs to become feasible or to win significant market share. Consumers have an installed base, too.

Betamax was Sony’s own fault. It was a superior format to VHS. The thing is, Sony insisted on keeping the whole thing proprietary and would not license it. So everyone else built VHS, and with competition came lower prices. VHS was not as good as Betamax, but it was good enough, and the lower price meant many more were sold.

With VHS’s much deeper market penetration, after a while no one wanted to make movies on Betamax (they couldn’t sell that many), and so Betamax withered away.

Sony got greedy and it bit them in the ass.

I think this is a really good question. The answer is yes, in some cases. When evaluating a product development plan, you always have to consider the profit that can be made, the time you have to make it, and how the cost curve is likely to bend. This will often affect design choices to some degree.

Where you can see this now is in home video game consoles. They are many generations behind: if someone came up with a brand-new console based on the latest technology, it would be significantly better than what we have now. However, development costs for these products are phenomenal, so the companies cannot afford to release new models every year, or even every two or three years. (Time to develop isn’t the real constraint; you could have parallel teams working on new versions as technology becomes available if that were the only issue.)

So basically, at current development costs for game consoles, the economics dictate that we’re not going to see new ones more than every 5-8 years.
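A crude illustration of the amortization math (the development cost and sales figures below are invented, not real console numbers):

```python
# Amortized development cost per console sold, for different shelf lives.
# Development cost and sales rate are invented for illustration.

dev_cost = 2e9          # say $2B to develop a console (hypothetical)
units_per_year = 10e6   # say 10M consoles sold per year (hypothetical)

for shelf_life in (2, 5, 8):
    per_unit = dev_cost / (units_per_year * shelf_life)
    print(f"{shelf_life}-year life: ${per_unit:.0f} of dev cost per console")
```

With those guesses, a 2-year cycle loads $100 of development cost onto every console sold, while an 8-year cycle loads only $25, which is the whole argument for stretching the generation out.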

Another example is drug manufacture. In this case, it’s not rapid technological change that limits them, but the artificial ‘technological change’ imposed by the patent system. A drug has 20 years to recoup its investment from the time the patent on it is filed. At the end of 20 years, the drug can be manufactured by generic drug manufacturers, and profits plummet to commodity levels. That’s basically the same kind of situation as having a product that will be obsoleted by technology.

The result in the drug industry is that drug makers shy away from drugs that apply only to narrow markets or that risk being tied up for long periods in FDA trials, which eat into the time left to recoup the investment.

In the case of drugs, two other factors come into play: if it’s a life-saving drug, demand is inelastic (you’ll pay whatever you have to), and the patent gives the manufacturer a monopoly. So limited patent lifetimes, combined with the outrageous cost of drug certification (over a decade and upwards of a billion dollars on average), are the main driving factor behind the high cost of pharmaceuticals, and a major factor limiting the kinds of drugs being researched.
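With the round numbers cited above (a 20-year patent clock, roughly a decade of trials, about a billion dollars) plus a guessed-at 10% target return, the arithmetic looks like this:

```python
# Effective market exclusivity for a drug, and the annual revenue needed
# to recoup its development cost, using the round numbers cited above.

patent_life = 20      # years, from patent filing
trial_years = 11      # "over a decade" of FDA trials
dev_cost = 1e9        # ~$1B average development cost
rate = 0.10           # hypothetical target return on capital

exclusive = patent_life - trial_years   # ~9 years of protected sales
annual = dev_cost * rate / (1 - (1 + rate) ** -exclusive)
print(f"{exclusive} years of exclusivity -> need ${annual/1e6:.0f}M/year")
```

Nine years of protected sales against a billion-dollar bill means a drug has to clear on the order of $175M a year just to break even at that return, which is why narrow-market drugs don’t get made.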

I’m sure there are many more examples.

Failures have many fathers. The point was that being first in the market (and making that investment) is no guarantee of success in that market. Being technically superior isn’t either. A new product becoming obsolete early is not the only way you can lose your shirt on it. On the other hand, someone a bit late can hope that the market comes their way, not the way of the company who got there first.

Console makers have the advantage of locking customers into a proprietary technology. Upgrades are backwards compatible, which encourages customers to move up to a technically superior console in the same family; switching families means you might lose access to new versions of games tied to the console you had.
Appealing to a new, more or less untapped customer base, as the Wii did, is one solution. Games that push a console’s capabilities make an old console seem newer: I had an Atari 2600, and early games like Space Invaders look a lot different from a later game like Empire Strikes Back. In any case, if console makers released new revs every year, the software developers would never be able to keep up.

PC makers manage to release new revs constantly. It could be done. The console makers have admitted that it’s a cost matter. If your console costs billions of dollars to develop, you simply can’t afford to obsolete it in a year or two. In fact, the need for a long shelf life pushes console makers like Sony into selling their consoles at a loss for the first couple of years - they have to go right to the bleeding edge on hardware at the beginning so they don’t wind up too obsolete too fast.

Well, companies factor in the capital investment over several generations of products.

Apple releases new iPods every year, making its old models “obsolete” (or at least worth much less). Intel does the same with new CPU models. Intel even goes one step further than Apple by publishing a “roadmap” of its upcoming chips, plainly giving you a peek at future chips before today’s chips are obsolete.

Those large companies don’t even wait around for the competition to make their products obsolete - they deliberately do it to themselves. Sony and Samsung LCD TVs are another example. So one difference is you have to be a large company in the mindset of creating a continuous “pipeline” of products. You can’t be a one-man garage operation that dumps its life savings of $1 million into a single-purpose factory to manufacture one gizmo that only has 3 years of life.

The other factor is that their capital investments also produce internal products and knowledge that’s separate from the external products you see on the retail shelves. With your DVD-to-Bluray example, when Sony/Toshiba created DVD, they worked on various technologies to position lasers, or reduce noise in the circuitry, or shrink processing chips and energy consumption. All this “intellectual capital” gained during DVD research is re-used, sometimes with very little tweaking, for BluRay. Actually, a more complete picture of the lineage would be CD player → DVD → BluRay. If you look at the geartooth assembly & motor of a disc tray on a BluRay player, it doesn’t look that much different from CD mechanisms of 20 years ago. Engineering knowledge is re-used (or “leveraged” as they like to say in business-speak). Maybe a new factory had to be erected to make BluRay players, but much of the capital investment is not factories: it’s the accumulated knowledge from trial & error.

Sometimes the reason is unrelated to technology. The inventors (and their investors) expect to make a profit but misjudge the velocity of competition or fast-shifting consumer trends. They are over-optimistic, or short-sighted, or both. The Pony Express lasted only 18 months as telegraph lines were erected, and it never made a profit.

I suspect console makers do invisible revs also, with manufacturing and parts-cost reductions that are invisible to users. Major PC architecture changes are tied to OS changes, which don’t happen that often either. Dealing with speed bumps and bigger disks is relatively trivial. Remember also that console makers usually have one model, while Dell has millions of possible configurations, tens of thousands at least counting hardware alone. New hardware runs old programs just fine; issues mostly have to do with software and peripherals.

I’m sure that extending the life of a console is part of the reason for pushing performance, but I suspect more of the reason is to have a competitive advantage against the Xbox. A console which renders only slightly better than an existing one is unlikely to get many people to jump ship, given the cost of a new console, the learning curve, and the need for space for two consoles to play both current and new games.

Sure. All that stuff goes into it - as does the fact that the investment and infrastructure required to make a gaming console have pretty much frozen out any vendors who might compete against the big boys.

Yup. Very high barrier to entry. Ditto for the microprocessor market, since the last few attempts got crushed by Intel. The PC market, on the other hand, is much more open. People build PCs from scratch - no one builds gaming consoles from scratch. Someone with a reasonable amount of capital could purchase components and book manufacturing time in China with no problems.

There is a basic contradiction here. The rate of innovation is determined by the decisions of the companies (i.e., to use the jargon, it’s endogenous). If the rate of innovation becomes so fast as to seriously lower the return on capital, then companies will slow down the launch of new products until the return on capital rises again. As mentioned earlier, this may be happening in consoles, where there is talk of extending the current generation beyond the usual five years.

In other words, there is an equilibrium rate of innovation at which firms earn their target rate of return given cost and demand conditions. The rate of innovation depends on the cost of the new innovations and how much consumers are willing to pay for them. In consoles, it appears the average consumer isn’t that interested in better graphics and the processing horsepower it requires; hence the success of the Wii relative to the PS3 and 360. Consumers do appear interested in more intuitive UIs, and that’s where the innovation is happening, with Project Natal and the like.
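You can sketch that equilibrium as a toy break-even calculation (all the parameters below are invented, not anything from the console makers):

```python
# Toy model of the "equilibrium rate of innovation": a firm keeps a product
# on the market just long enough for discounted profits to cover its
# development cost at the target return. All parameters are invented.

def breakeven_life(dev_cost, annual_profit, rate, max_years=100):
    """Smallest product life (years) whose discounted profits cover dev_cost."""
    npv = 0.0
    for year in range(1, max_years + 1):
        npv += annual_profit / (1 + rate) ** year
        if npv >= dev_cost:
            return year
    return None  # profits can never cover the cost at this rate

# Cheaper innovation or consumers paying more for novelty shortens the
# break-even life, so launches speed up; the reverse stretches them out.
print(breakeven_life(dev_cost=2e9, annual_profit=600e6, rate=0.10))  # -> 5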

Betamax was inferior in the only way that mattered: the tapes had a shorter recording time, an hour initially, and when Sony eventually introduced lower-quality recording, recording time was still half of what VHS tapes offered at comparable quality. Sony never refused to license Betamax; getting it adopted as a standard was their entire strategy. But JVC didn’t want Sony to dominate the market, so they went ahead with their VHS format and dominated the market themselves.

One thing that hasn’t been mentioned directly is “network externalities”. Some products increase in value as a greater number of people use them. They become the “standard” for other related products and services to adapt to, and those networks become dependent on each other.

A perfect example is the internal combustion automobile. We’ve had electric car technology for years; what keeps us tied to the gas-powered automobile is the vast network of infrastructure and services designed to support those vehicles. Even if I invent a car that doesn’t need gasoline and has the same performance as a gas-powered car, where do I charge it? Even if I can plug it into any outlet, can the power grid support an additional 100 million vehicles charging off it? Where do I charge it when I’m between towns? Where do I get it serviced? Where do I find third-party equipment manufacturers?

It very much becomes a chicken-and-egg problem. You can’t switch to the next generation of technology until you reach a critical mass of users to support it, and you can’t reach critical mass until you provide enough support for the people who want to use the technology. You never get past the “innovator” stage.
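A toy model shows the tipping point (the value-per-user and switching-cost numbers below are invented):

```python
# Toy critical-mass model: each user's value from the product grows with
# the number of existing users (a network externality). People adopt only
# when that value exceeds the switching cost, so adoption below the
# critical mass fizzles and adoption above it snowballs. Numbers invented.

value_per_user = 0.01   # value each existing user adds for a newcomer
cost = 50.0             # switching cost of adopting the new technology
critical_mass = cost / value_per_user   # 5,000 users

for seed in (4_000, 6_000):
    users = seed
    for _ in range(20):
        surplus = value_per_user * users - cost    # newcomer's net benefit
        users = max(0, users + int(surplus * 10))  # adoption follows surplus
    print(f"start at {seed}: after 20 steps -> {users} users")
```

Start just below the critical mass and adoption collapses to zero; start just above it and adoption snowballs. That knife edge is the chicken-and-egg problem in miniature.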

You are assuming too much rationality here. Companies may innovate faster than is financially prudent if they fear that some third party, with no installed base, will innovate instead and steal the market. Since lead times are typically long, if you wait until this actually happens you will be in trouble. As Sam said, there is a high barrier to entry in video game consoles, so these companies can afford to stretch out product lifetimes.

As an example of how you can get into trouble: Intel decided not to design 64-bit x86 systems because they were making money on 32-bit systems and because they wanted that market to move to the new Itanic (Itanium - I worked on it, I can insult it :slight_smile: ) architecture. AMD had no such constraints, and stole market share until Intel caught up.

And even if every actor is perfectly rational, it may be hard to predict how things fall out. Everyone is acting on imperfect and different information and interpreting it differently, and even then does not know what resources the others have or how they will deploy them.

Not to mention that even perfectly rational decisions based on perfect information put events into motion that often cannot be changed without great difficulty and cost. Companies become locked into their decisions because they enter into legal agreements, invest in equipment, hire employees, and so on. They often can’t just change their minds in a few months.

I wonder how this works for software. Do we really need 3 or 4 spreadsheet programs?
I never hear of anybody using Lotus 1-2-3 or VisiCalc anymore; Excel seems to fill most people’s needs.
As far as killing innovation goes, Microsoft can only keep making money as long as people keep buying new versions of its software.

Remember, those spreadsheets predate Excel. Lotus is still used at IBM, and I suspect it has its fan base. The only real innovation in spreadsheets lately is StarOffice/OpenOffice, and that innovation is not in the spreadsheet itself but in being open source and free.

As for Microsoft, the “innovation” in later versions of Excel for most of us is like the “innovation” of tail fins on late '50s cars.

This is why I’m a big fan of hybrid technology, and especially plug-in hybrid technology, even though its efficiency is over-hyped. The best thing about it is that it allows an orderly transition. By decoupling the drivetrain from the motive source, you can start building automobile architectures that can handle electricity, hydrogen, gasoline, biofuel, or whatever else.

If we can build cars where 80% of the hardware is the same regardless of your power source, and then we plug into that a small IC engine, or a fuel cell, or a larger battery for all-electric operation, then we can allow the market to sort out the best source of energy because we’re no longer tied to one source.

Our electrical grid does that for us. It abstracts away the actual form of the original energy, allowing us to smoothly switch in nuclear, solar, wind, or anything else without having to replace all the devices that consume power. Hybrid vehicles move us in a direction that will eventually allow the same thing for transportation.
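For the software-minded, it’s the same idea as programming to an interface. A loose sketch of the analogy (the classes, names, and numbers below are purely illustrative):

```python
# The hybrid argument above, as a software analogy: the drivetrain depends
# only on an abstract power source, so energy sources can be swapped
# without redesigning the other 80% of the car. Classes are illustrative.

from abc import ABC, abstractmethod

class PowerSource(ABC):
    @abstractmethod
    def draw_kw(self, demand_kw: float) -> float:
        """Deliver up to `demand_kw` of power; return what was supplied."""

class GasolineEngine(PowerSource):
    def draw_kw(self, demand_kw: float) -> float:
        return min(demand_kw, 100.0)   # small IC engine, capped output

class BatteryPack(PowerSource):
    def draw_kw(self, demand_kw: float) -> float:
        return min(demand_kw, 150.0)   # larger battery for all-electric use

class Drivetrain:
    """The shared 80%: knows nothing about where its power comes from."""
    def __init__(self, source: PowerSource):
        self.source = source

    def accelerate(self, demand_kw: float) -> float:
        return self.source.draw_kw(demand_kw)

# The same drivetrain runs on either source; the market picks the winner.
for source in (GasolineEngine(), BatteryPack()):
    print(type(source).__name__, Drivetrain(source).accelerate(120.0))
```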