There is no "Curve" in scientific progress.

Here’s an article in the Washington Post about the so-called “Curve” in technological progress (actually, the standard book review that is so detailed that you never actually have to read the book). Here’s a quote (from the article, not the book):

This idea is really stupid. First, let’s deconstruct Moore’s law. Computing power has doubled as it has because it started from zero just 60 years ago. It is not too hard to double the power of, say, 144 transistors. What tech boosters don’t tell you is that high-end chip plants also double in cost at about the same pace. Last time I checked (I worked in the semiconductor equipment industry for about 2 years), a high-end chip plant cost about $3B, and there are only about 10 companies in the world that can afford to create such facilities. I’m sure the numbers have grown higher in the last year.

Creating new devices has become monstrously expensive. A single mask, used to create just one layer of, for example, an Intel Pentium chip, can cost millions of dollars, and many such masks are required to create a chip. And that is just one tool in one of the hundreds of processes needed to create a working chip.

The process also stretches human organization to its limits. These chips are the most complicated and advanced things humans have ever created, and there are only a handful of people who truly understand how the entire system is put together and works. No single person fully understands every process that goes into their making.

We are about to hit two walls when it comes to chip-making. The first is that, geometrically speaking, the circuits will soon be so small that electrons will be able to tunnel freely across them (i.e., the damn walls of the circuit will be too thin, even if we have the technology to make them, which is far from certain). We are only a few orders of magnitude away from this happening (perhaps a more knowledgeable poster could nail down the details for me). Second, creating those devices will become orders of magnitude more expensive.
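
A rough back-of-envelope on the “few orders of magnitude” claim, using my own illustrative numbers rather than anything from the post (assume a 2005-era leading-edge feature size of roughly 90 nm and an atomic spacing in silicon of roughly half a nanometre):

[code]
# Rough back-of-envelope: how much linear shrink is left before circuit
# features are only a few atoms wide?  The numbers below are my own
# illustrative ballpark figures, not measured values.
import math

feature_size_nm = 90.0   # assumed leading-edge feature size, ca. 2005
atomic_scale_nm = 0.5    # rough spacing between silicon atoms

ratio = feature_size_nm / atomic_scale_nm
print(f"~{ratio:.0f}x of linear shrink left "
      f"(~{math.log10(ratio):.1f} orders of magnitude)")
# -> ~180x of linear shrink left (~2.3 orders of magnitude)
[/code]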

And, when we do finally make chips that are 1,024 times faster than they are now (10 more doublings, and with 18 months per doubling that is 15 years away), we will still only add three digits to our processing power and will presumably be running Microsoft Windows. Or is there any evidence to the contrary? (BTW, Moore’s Law never really stated that computing power itself doubles. It had to do with chip complexity. Cite.)
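
For anyone who wants to check the doubling arithmetic above, here is a quick sketch (the 18-month doubling period is the conventional folk reading of Moore’s Law, not a measured figure):

[code]
# Quick check of the doubling arithmetic: 10 doublings at 18 months each.
doublings = 10
months_per_doubling = 18

speedup = 2 ** doublings                       # 2^10 = 1,024
years = doublings * months_per_doubling / 12   # 180 months = 15 years

print(f"{doublings} doublings -> {speedup:,}x the speed in {years:.0f} years")
# -> 10 doublings -> 1,024x the speed in 15 years
[/code]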

One of the big gurus of this “we’re at the knee of the logarithmic curve” talk was Ray Kurzweil, much quieter of late than when his last big book hit the shelves in 1999: The Age of Spiritual Machines. I read it and thought it was neat-o. In those giddy, gung-ho Internet Bubble days, anything seemed possible. He predicted big changes even by 2009, when all manner of chips would be making our lives more convenient, blah blah. Well, it’s 2005, six years later, and we’ve only seen incremental change.

Implicit in the linked article and explicit in Kurzweil’s theorizing is the notion that, although we humans are too dumb to do it, we’ll invent intelligent machines in the near future that will carry us into the next technological era. But the reality is that strong AI has made essentially zero progress in accomplishing its goals. We’ve created some great chess programs, some pretty powerful expert systems even (note: at great cost in money and manpower), but we have yet to build even the slightest machine that can take over the hard work of thinking things over and creating new machines (which is what the model requires). The problem isn’t processing power. Rather, we have simply no idea how to write the program for such a machine.

The common sense model that reflects actual history.

First, nowhere in history has logarithmic change been observable. To the contrary, new technologies typically require a great deal of support and maintenance. It has been speculated that only recently have computers added to the overall productivity of the economy. The reason is that, while they have added value over the last 50 years, the amount of labor and capital required to create, maintain, program, and use them has put their overall value in the negative.

I have little doubt that the transaction is now of positive value, but the fact remains that computers add incremental, not revolutionary, value to the economy. And so it goes with most technologies. Extraordinary tech requires extraordinary support. We reap some rewards from the trade, but we do not progress logarithmically.

I think there are two main types of inventions: those based on knowledge, and those based on making the hard-to-make. Knowledge-based inventions can spark true revolutions, since their technology is not difficult to create or maintain. The steam engine is a good example. When we eventually figure out how to do fusion, it will be the same thing: free, clean energy for all. But the computer chip is a much different beast: figuring out how to make chips is extremely difficult, actually making them is extremely difficult, and writing the programs that run on them is difficult and time-consuming on top of that.

So here’s my common-sense view: New technology pushes along in fits and starts, and constant new invention is required for progress. Nothing is guaranteed or automatic, and even regression is common. The Space Shuttle is crappy 1960s technology based on a faulty 1960s design. It is far more complicated tech than the Apollo program, but it is not better tech. Far from representing logarithmic change, it’s been a dead end.

We have been stuck in a Windows mess now since the early 1990s. The Internet is a great thing, revolutionary in its own way (more for culture and business than for the tech itself), but where is the computer revolution? Mind-blowing virtual reality, speech recognition, AI that does half the work for you, and flying electric cars? Well, wait a hundred more years and maybe it will happen.

And, just as a general observation, I would say the pace of technological change was much faster in the 19th century than in the 20th. You pretty much started with nothing and ended with the basics of today’s technology and a view of the universe not too far from our current one. The 20th was a century of highly important but incremental change (and, of course, true revolutions in select areas).

So, no logarithmic change is in store for us. Any Dopers disagree?

I don’t really know enough about computers to comment on the specifics of the post, so I’ll take you at your word. It’s certainly an interesting counter-view to the one so often pushed by those trying to sell the ‘information revolution’. However, I would like to clarify, if not actually disagree with, the idea that general technology isn’t progressing logarithmically.
The big confounding factor here is that technology isn’t linear, it’s a network. As an example, the technology of basic gears-and-bolts engineering proceeded at an approximately logarithmic rate following the Reformation, but it ultimately stalled because, well, we knew everything about basic gears ’n’ bolts. But that expansion gave us movable type. Movable type itself allowed rapid expansions in other totally unrelated areas of technology, such as economics and steam power. Steam power and economics progressed logarithmically for a while and then stalled, but not before they had led us to industrial metallurgy, internal combustion power and mass production. Industrial metallurgy, internal combustion power and mass production progressed logarithmically for a while, but before they stalled they led us to flight and industrial chemistry. And so on and so forth.

That’s been the history of the last 400 years or so. Scientific progress has followed a series of logarithmic curves, each one built on a previous apparently unrelated curve. So although you’re probably correct that any narrowly defined technology only follows a logarithmic progression in its infancy, that’s not true of scientific progress as a whole.

So while it’s probably true that microprocessor technology will stall sometime in the next two decades, it has already allowed massive breakthroughs in totally unrelated fields of technology and will doubtless allow many more. For example, we can now run virtual chemical reactions and see how the substances interact, as well as the results of any tinkering, without having to waste gallons of expensive reagents or build hazardous-materials labs. We can do in a month by computer what took years with a team of lab monkeys only a few decades ago. We can plan the manufacture of synthetic DNA for drug production in a week where it would have taken years in the 70s, and before microprocessor technology stalls we’ll be able to get at least a basic understanding of how a drug reacts with a virtual physiology. And that’s just the field of medicine. The microprocessor revolution has led to, or is enabling, equal breakthroughs in almost every field of science, from sifting through reams of telescope data in astronomy to modelling complex interactions in ecology. Even if the microprocessor revolution ended right now, it has already started a new logarithmic growth phase in all those fields, and that will in turn prompt the same growth in yet more fields.

And the other point is that the network itself seems to be logarithmic. If the logarithmic growth in steam technology prompted growth in land transport and manufacturing, then the growth in those two things brought about logarithmic growth in geology, and communications, and agriculture, and medicine, and flight, and chemistry. And those things in turn prompted logarithmic growth in a dozen other fields, including microprocessors.

So while any specific field like microprocessor technology may be destined for a classic sigmoidal curve ending in a plateau, technology as a whole really does seem, to me at least, to be growing exponentially as people graft the fruits of the growth phase of one technology onto another apparently unrelated field. Electronics in its infancy may have seemed fairly unrelated to agriculture and medicine, but electronics is precisely what has allowed genetic engineering to become a reality, and that is likely to prompt an exponential growth in medicine and agriculture.

There is a curve in scientific progress as a whole, although tightly defined fields like “microprocessor technology” or “stationary engine technology” may stall from time to time… until someone eventually adapts some other seemingly unrelated technology like nuclear physics to them and they leap forward once again. And I have no doubt that 50 or 100 years after microprocessor technology stalls, someone will adapt a seemingly unrelated technology like economics or geology to them and we’ll see yet another exponential leap. Just as we have for all other technologies.

Certainly the curve exists, and you have no basis to call the idea “stupid”. Your argument consists of using examples which seem to have topped out (though I disagree that computer tech has run into an insurmountable barrier) while conveniently ignoring genetic engineering, which promises huge and rapid advances.

Your derision has no basis and your post shows no insight.

I don’t think it ignores it.

GE is simply a different example of where “power has doubled as it has because it started from zero just 60 years ago.” In the case of GE we started from zero a mere forty years ago, and today we have technology that uses slightly fewer agricultural pesticides and produces a few hundred drugs, most of them for fairly uncommon diseases or of moderate impact, being life-prolonging or treating chronic illness.

GE may promise a lot, but it’s only able to be described as exponential growth because it started from a recent zero point. In computer terms we’ve gone from zero to a pocket calculator. We don’t even have anything that is having any personal impact on most people yet. IOW, we aren’t even at the stage of the first personal computers. In that respect the point stands, or at least hasn’t been successfully challenged: the exponential growth is only evident in the very early years.

There’s no reason to believe GE won’t rapidly stall, or even that it hasn’t already stalled. After all, it’s been 35 years since insulin, 25 years since Bt crops, and over 10 years since Dolly, and we’ve had no revolutionary advances or practical applications since, just tinkering.

“…the untamable force of exponential growth that propels technological progress.”

Growth propels progress? Growth of what, technology? Technology propels itself? I guess some people still believe in perpetual motion.

Sure, today’s advances can multiply the effectiveness of future technological investments. But fundamentally, what propels technological progress is a massive, intergenerational investment in research, coupled with a society that both demands and rewards technological advances. You can’t get something for nothing. Yet there are people out there who assume that Americans will be at the crest of whatever technological wave comes along, just naturally, or as a matter of birthright, or by magic, regardless of how crappy our schools become or how much of our capital we ship overseas.

When was it dictated that “Science” (or, more euphemistically, “high tech”) was equivalent to “computers”?

If you don’t like Kurzweil, you’re gonna hate Vernor Vinge.

My feeling is that Vinge and Kurzweil are probably “ahead of the curve” as it were, but that the rate of techno progress has increased steadily in my lifetime, and will increase at a faster rate after I die, simply because I won’t be around to keep things stupid any more. But also because computers are pushing the rate of invention and innovation and will continue to do so. They are the best tool a scientist and/or engineer ever had for gathering information, sorting data, etc. The human genome project wouldn’t have been possible without computers, for example. But I think over time, they will excel at providing the innovators with the information and computing power they need to come up with new things under the sun.

Much of the reason we’ve only seemed to make “incremental progress” is that not all scientific progress is quantifiable. The research into flatworm neurology may be a breakthrough for Alzheimer’s three years from now, but for now it’s just a file somewhere on the Internet, waiting to be read by the right person.

I think advances in the speed of computers are all well and good, but we have far from optimized our usage of them. A fully optimized human/computer interface could have implications for technology as radical as the AI assumption.

Just to condense what everyone else is pretty much saying, computers are a ‘force multiplier’, i.e. they enhance OTHER fields and make them perform better than they would have without them. I don’t think you can simply look at computers and say that because they will (at some unspecified future date) level off in their capabilities and advancement, this means everything will. You have to look at what impact computers (and other advances) have on other fields… it’s a rippling effect. From my perspective the speed and level of advancement of engineering and science has been increasing rapidly for the last few hundred years… and I’ve seen no evidence that this is not continuing today and won’t continue into the future.

-XT

I’m confused. People seem to be using the terms “exponential” and “logarithmic” interchangeably. The functions exp and log are inverses of each other. Exponential growth is fast; logarithmic growth is slow.
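
A tiny numeric illustration of the difference, in case it helps (nothing here is tied to any particular technology):

[code]
# Exponential vs. logarithmic growth: exp() and log() are inverses,
# and they behave very differently as the input grows.
import math

for t in (1, 10, 100, 1000):
    print(f"t = {t:5d}   2**t = {2.0 ** t:10.3g}   log2(t) = {math.log2(t):6.2f}")
# Exponential growth (2**t) explodes almost immediately;
# logarithmic growth (log2 t) barely moves over the same range.
[/code]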

I tend to agree that the rate of increase in knowledge cannot continue to increase at an exponential rate. We’re on a steep slope now because we are the recipients of a technological boom, and science hasn’t caught up to it. When the gas engine was developed, it made possible the airplane, and within a few years there were dozens of different kinds flying. Automobiles gave the country mobility and efficiency. That in turn led to an explosion in wealth that allowed society to build all kinds of new scientific instruments. Knowledge expanded quickly because our ability to detect the universe expanded quickly. Particle accelerators, large telescopes then space-based telescopes, lasers, computers, you name it. Every year we’ve had the ability to do new tests and look at things in ways we hadn’t before.

But there’s nothing to say this will continue. I remember reading breathless articles in the 1970s that had lovely exponential curves showing mankind’s maximum speed over its history - graphs that were used to ‘prove’ that we would have faster-than-light travel before the year 2000. Didn’t happen, because we ran into new limits.

We’ve still got a few generations to go in computers before Moore’s law starts to collapse. We’ve still got a whole list of new instruments coming into play in the next few decades. So I expect we’ll see wonders. But not indefinitely. At least not at the same rate.

In EC’s linked article, we find the following:

That is the essence of the argument, and that’s why computers are so important in it.

Basically I’m saying this isn’t going to happen anytime soon. Strong AI hasn’t even taken a baby step yet. That is specifically what the OP is about, and more generally that technology doesn’t just come with one big explosion, inevitably approaching the maximum possible in short order.

Yes, I think you’re right. Thanks for pointing that out!

Indeed. Some pictures for clarification.
logarithmic
exponential

As a direct rebuttal to the statement “This idea is really stupid”, the OP is misguided. Here are some thoughts on why:
[ul]
[li]Moore’s law is an empirical observation, not really a prediction (at least, not beyond 10 or 20 years, which it has obviously exceeded). But that’s beside the point. While there are indeed physical limitations that are now coming into play, thus far scientists (or engineers) have always found a way to skirt them. I grant you that there is an upper bound; claiming that we’ve reached it is almost as ignorant as claiming there is no limit.[/li]
[li]Yes, the expense of manufacturing facilities for producing new computer chips has increased. I believe the cost of computer chips has decreased on the whole. How much did the computer you’re currently sitting behind cost? How much did the first computer you bought cost (assuming, of course, you had previously purchased a computer)? I remember buying a 2GB hard disk in 1995 for $300; now I can get one 100x larger for half the price (the cost-per-gigabyte arithmetic is sketched just below this list). But why limit this to computer equipment? The expense of producing a host of other items has decreased just as much. Name a consumer product (clothes? pens? radios? furniture?), and I’d think it is now cheaper to buy/produce than it was at some prior time not so long ago, and cheaper then than at a time before that.[/li]
[li]Your claims about AI are faulty. I’m not saying that Kurzweil is correct in claiming intelligent machines by 2009 (or whatever year he puts on it). Minsky, Turing, and others made similarly bold predictions and have subsequently been proven wrong. Predictions of this sort are foolhardy; akin to my first point, this type of prediction is as faulty as your claim that “strong AI has made essentially zero progress in accomplishing its goals.” Great strides have been and are being made in various areas of AI (and related fields); unless you are intimately involved in one, I feel safe in calling bullshit.[/li]
[li]When speaking about “scientific progress”, it is wrong to consider any single field. Rather than the design of integrated circuits, one could just as easily consider typesetting technology. Has the progress of printing tech increased or decreased since Gutenberg? How about in the past 25 years?[/li]
[/ul]
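
The cost-per-gigabyte arithmetic mentioned in the hard-disk item above, spelled out (using only the two data points given there):

[code]
# Cost per gigabyte from the two data points above:
# 1995: 2 GB for $300; "now": 100x the capacity for half the price.
old_gb, old_price = 2, 300.0
new_gb, new_price = old_gb * 100, old_price / 2

print(f"1995: ${old_price / old_gb:.2f}/GB   "
      f"now: ${new_price / new_gb:.2f}/GB   "
      f"({(old_price / old_gb) / (new_price / new_gb):.0f}x cheaper per GB)")
# -> 1995: $150.00/GB   now: $0.75/GB   (200x cheaper per GB)
[/code]
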
I was mentioning to an office mate o’ mine how remarkable Ritchie and Thompson’s The UNIX Time-Sharing System was; clear, concise, elegant to the point of beauty. His remark was one of agreement, while saying that it also seemed to be “low-hanging fruit”. It seems to me that much of your criticism can be characterized as such; of course development in a previously non-existent field will not only be rapid, but more rapid than in an established field. The same can be said for the time just prior to a breakthrough in a field. One might criticize mathematicians on similar grounds for not tweaking Euclid’s fifth postulate sooner. I fail to see how that characterizes a general slowing or lack of progress in science or technology.

One might argue along the lines of Huebner that the rate is slowing (why on earth does he factor population into it?). Perhaps, perhaps not. However, it is important to be clear about what exactly is on the exponential curve here: constant acceleration (greater than zero, of course) implies a linear increase in velocity and a quadratic increase in total distance covered; truly exponential growth requires that the rate of growth itself scale with the amount already accumulated. Personally, I am of the opinion (and it is admittedly an opinion) that technological “acceleration” may well be near zero, but I don’t think it’s declining. Correspondingly, I think the sheer amount of new technology is indeed on an exponential curve.
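
A minimal numeric sketch of that distinction, with made-up constants: under constant acceleration the total grows quadratically, while genuinely exponential growth (rate proportional to the amount already accumulated) pulls away from any polynomial.

[code]
# Constant acceleration vs. true exponential growth (illustrative constants).
# Constant acceleration a: velocity = a*t, distance = 0.5*a*t**2 (quadratic).
# Exponential growth: dx/dt = k*x, so x = exp(k*t).
import math

a, k = 1.0, 1.0
for t in (1, 5, 10, 20):
    quadratic = 0.5 * a * t ** 2
    exponential = math.exp(k * t)
    print(f"t = {t:2d}   quadratic: {quadratic:8.1f}   exponential: {exponential:14.1f}")
# Both keep growing, but the exponential curve leaves the quadratic one behind.
[/code]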

Well yes, but that’s making the really rather dubious presumption that progress in one narrow area equates to human progress as a whole. Just because some people made up a stupid metric, came to a stupid conclusion and were proven unsurprisingly wrong (according to limits known nearly a century before their “prediction” failed) doesn’t prove anything about the rapidity or otherwise of broader human progress.

If the question is: “can we expect to see infinite and exponential progress in any given field?”, then the answer is assuredly no. If it’s the much harder to nail down “is human scientific progress exponential?”, I think it’s much less clear. There undoubtedly has been an explosion of knowledge generation over the past 500 years, and while I’d be hesitant to describe it with anything so precise as the term “exponential” (given that it’s pretty hard to quantify “knowledge”), I certainly don’t accept that we’re reaching any sort of limitation on the things that we can know.

Just to clarify, Vinge didn’t claim that an “ultraintelligent” machine would be built by the end of the twentieth century. The quotation you gave was actually of him quoting I. J. Good, who was, in fairness, writing in 1965.

Seems to me if we’re going to quote people who grossly overestimated technological progress, we should also quote people who grossly underestimated it, as well, including people who thought a dozen or so UNIVACs would be enough for the entire U.S., that 640 kilobytes of memory would always be sufficient, and that man would never need or want a locomotive that could travel faster than the dizzying rate of 15 miles per hour.

Just to be fair, of course.

Right, it’s not a law, either, but people treat it as such.

It would be ignorant, but that’s not what I claimed. I said we’d hit it soon. And we will. Soon being, who knows, 50 years or so? But maybe we’ll have quantum computing or something new by then. In any case, we’re reaching the limit with standard digital tech.

A hard disc isn’t a computer chip, is it? In any case, once you develop the chip and fire up the production line, the incremental cost of making a chip is rather low. Not virtually zero, as with drugs, but still low. We could keep on building Pentiums until Kingdom Come with the same tooling. But to design something just twice as good and fast as a Pentium requires an enormous investment.
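
To make the fixed-cost-versus-marginal-cost point concrete, here is a sketch with entirely made-up numbers (they are not real Intel figures, just there to show the shape of the curve):

[code]
# Per-chip cost = (fixed design/fab cost amortized over volume) + marginal cost.
# All figures below are hypothetical, chosen only to illustrate the shape.
fixed_cost = 3e9        # assumed up-front cost of a new design + plant
marginal_cost = 40.0    # assumed incremental cost per chip off the line

for volume in (1_000_000, 10_000_000, 100_000_000):
    per_chip = fixed_cost / volume + marginal_cost
    print(f"{volume:>12,} chips -> ${per_chip:,.0f} each")
# At high volume the per-chip cost approaches the small marginal cost,
# but every new generation resets the enormous fixed cost.
[/code]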

Also, I have heard that typical MS applications like Word and Excel contain thousands of man-years’ worth of programming (can anyone confirm that?). That is, if you wanted to write the code by yourself, it would take you thousands of years! That is really pretty scary, when you think about it: all that labor and investment just to create MS Turd. (What if this program were the Singularity mentioned by Vinge? Wouldn’t that be a disappointment?)

That’s a fine point to make but is not a rebuttal of what I wrote. I agree that technology is moving forward, sometimes making big jumps. I deny, however, that scientific progress is exponential.

You just said “various areas of AI,” but I specifically said, “strong AI.” We’ve done lots of incredible things in AI, but we haven’t made any progress in strong AI.

You’re not getting my argument. I’m saying there is no exponential curve, especially not one based on strong AI (i.e., machines that “wake up” and start doing our work for us). The heart of my argument is that extraordinary tech requires extraordinary support–capital and labor, which is a natural stop on exponential progress.

Your friend is right, however, about low-hanging fruit. The Romans could have produced printed documents had just one person come up with the idea of moveable type. The supporting technology was already there (e.g., they could have easily cast the type, etc.). But they didn’t do it. Once someone came up with the idea, it was simple. The same thing with the paper clip. That could easily have been invented in 1650, but no one thought of it until the late 19th century.

Such inventions tend to deliver most of their benefits all at once. You point out that printing technology has improved in the last 25 years. Sure, it’s great having a bubble jet on your desk; very convenient. But that is just a small increment in progress compared to the invention of the printing press in the 1400s (or in China in 1040 or so).

I don’t think your analysis here makes sense, since the exponential curve (according to the Kurzweils, etc.) refers concretely to computing power and metaphorically to overall tech progress. Except in a sort of intuitive, fuzzy-logic type way, I don’t see how one can measure the exact rate of tech progress.

Another point is that sometimes a lot of new technology is developed that has only a small effect on society. It may indeed pay off to do it for the company involved but the result is a whoop-dee-doo. For example, apparently there is a heck of a lot of money and research poured into razor blades: coatings, ways of treating and hardening the metal, etc. So you may have a ton of tech in there but still a product that is only a moderate improvement over a 1950s safety razor.

BTW, I am not a Luddite or someone who doesn’t like technology (although the idea of robots replacing humans does not seem like a good one to me). I think our next big tech revolution will come from the invention of cheap energy from something like fusion, allowing us to solve a lot of problems in a brute force manner (e.g., if global warming is a problem, then invent a machine to suck carbon out of the air, energy costs be damned).

Two asides:
I suggest eliminating what I perceive as a potential for “moving the goalposts” by speaking only of technology, for today’s “extraordinary tech” is passé tomorrow. Addressing the above quote, could you answer the question: Why are capital and labor natural stops (I’ll take that to mean “necessarily place limits”) on exponential progress if two aspects of said progress are making things cheaper (i.e., requiring less capital) with reduced effort (i.e., requiring less labor)?

Now, I believe I am getting your argument; my points were meant simply to show that your criteria are bad. (In other words, I’m counter-arguing, not advancing my own beliefs as argumentative fodder, for they are nothing beyond unsupported assertions.) As far as I can tell, your argument rests on the following points:
[ol]
[li]an unsupported claim regarding the timeframe of a limit on increasing computing power[/li]
[li]the cost of production[/li]
[li]an artificial declaration about what qualifies as progress in AI[/li]
[li]the selection of particular examples as indicators of overall progress[/li]
[/ol]

At any rate, I think what bothers me most about your argument is the cherry-picking and tailoring of supporting evidence. (Quotations selected for each of the above items.) To wit:

First, the specified time frame. If you’re going to use the predictive power of Moore’s “law” as evidence, accept that it was only meant to hold true for 10 years and use that span. You seem to conveniently choose your time frame according to your purpose (the entire span of technological development, 19th vs. 20th century, 50 years for computing, 20 years for computer applications). Second, why would you make a general statement about computing power and then limit it to “standard digital tech”?

No, it isn’t. But why limit computer tech specifically to chips? The point is that costs have decreased.

And Linus Torvalds wrote much of the core of Linux in a semester (as known to those of you following the SCO debacle; granted, tremendous amounts of man-hours have subsequently been put into its development). Plenty of people have written their own text editors. The fact that MS Office is a bloated piece of crap doesn’t make your case. And again, why reference the production of a single person when speaking about the whole of scientific/technological progress? Y’know, science “stands on the shoulders of giants” and all that.

Ah yes…until we have a fully functional, sentient machine, we’ve made no progress. Is there anything, in your opinion, that might qualify as actual progress in strong AI? Or is it (potentially) another example of a field in which some defining event will (may) take place, heralding a blizzard of progress, which will (may) then taper off and be another example to support your argument?

But it’s the last that I think is the crux of my disagreement (abbreviated mercilessly):

Your claim was that “scientific progress” is not exponential. And yet, your evidence is always limited to a particular example (be it computer chips, razor blades, or strong AI). First, you make the mistake of declaring that one invention/innovation (the one that sparks a technological “jump”) somehow overshadows any and all resultant discoveries. Second, how does one separate a flurry of invention/innovation in one field from its effects in others? And, unrelatedly, what bearing does an “effect on society” have on technological progress?

Essentially, it seems to me, you deconstruct your own argument:

Perhaps you’d like to supply a metric (any metric, not necessarily exact!) for technological progress. I think you cannot (not that I can, I should point out). And if that’s the case, then we don’t really have any grounds on which to continue.

I think the OP has a point that merely to maintain our current levels of science and technology requires an enormous investment of resources. Before a scientist or engineer can make a new discovery they have to spend long years learning already existing science. And we can therefore imagine that at some point scientists and engineers won’t have any time to come up with new discoveries since they’ll be dead of old age before they’ve learned enough to find something new. It will therefore always be easier to look up the results of research done in the past rather than conduct new research or invent new technology.

However, scientific advances don’t just add more “stuff” to know; they also simplify knowledge, replacing vast catalogues of anecdotes with simple laws and principles. To invent an example, think of the amount of knowledge a doctor today needs to treat diabetes… symptoms, medication, monitoring, all over the lifetime of the patient. Curing diabetes might be more complex in some senses, but it drastically reduces the complexity of the problem… doctors no longer have to learn to manage a complex disease state but rather turn their patient over to a specialist who cures them with a magic pill. Another example is the substitution of computers for other library tools… card catalogs and scientific abstract reprints and copies of journals. Sure, a computer is much more complex than a card catalog, but it makes running a library much simpler and easier.

In other words, a complex machine or process can make a complex problem simpler. Yes, you must add in the maintenance costs for the process. But the more complex the problem is, the greater the savings… throw out your tables of logarithms and trigonometric functions and replace them with an electronic calculator cheap enough to give away in a cereal box. This happens all the time.