Ray Kurzweil's 'Law of Accelerating Returns'

The Law of Accelerating Returns

Yesterday I had a meeting with Michael Vassar, the president of the Singularity Institute, where Ray Kurzweil is a director. It was a real meeting of the minds and got my intellectual juices flowing in a way that is not a regular occurrence for me.

This article is very long and is basically like the Cliff’s Notes for ‘The Singularity is Near’.

I have to say, I am starting to become a true believer. I think the obsolescence of homo sapiens sapiens is going to come within my natural lifespan.

This stuff is all at the bleeding edge of my education and ability to understand what I am reading, so I am not going to claim my opinion of it should have any particular merit.

The reason I am starting this thread is that I want to hear the naysayers. I have been seduced by stories of crazy mad science that seem like game changers to me, and obviously the guy who runs the Singularity Institute is going to put some serious stock in this stuff. As I see it, however, the advances in biotech, nanotech, and AI are such that, barring some sort of catastrophic event or artificial bottleneck, they will be a game changer, a paradigmatic shift, and our ability to predict future advancement will indeed become more and more problematic.

So possible delays:

  1. A meteorite hits Earth and throws civilization into chaos, so that Intel and other such companies can no longer operate. (Insert any sort of disaster you wish.)
  2. China puts a moratorium on the export of rare earth elements, the vast majority of which it controls, though I am unsure what reserves exist outside of China and whether a majority share realistically lets it dominate the market.

So, what are the possible limiting factors that can throw a monkey-wrench into Kurzweil’s inspiring/terrifying prediction?

Here are his core assumptions in a convenient bullet point list for maximum ease of pedantry. :wink:

The potential wrinkle is that extrapolating a current trend into the future is always risky - especially extending exponential growth when it leads to a pretty wild result.

There were articles like this written in the '50s about transportation. People had gone from horses to rockets in 50 years, and if you plotted the advancement of speed in the 20th century out to 2000, it looked like man would be going faster than light. And people seriously used that extrapolation to suggest that this would really happen. And the original fears of a 'population bomb' were based on the same kind of thinking, and turned out to be completely wrong.
But in fact, when a new technology is developed there’s often an exponential curve of advancement at the very beginning, but it flattens out as the low-hanging fruit is used up and further improvements become increasingly difficult. Moore’s law is already starting to look a little shaky when it comes to processor speed - the last doubling of speed took about five years instead of two, and now we’re running into fundamental limits and having to increase speed by increasing the number of processor cores because it’s getting harder to make the cores themselves faster.
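
To make the extrapolation risk concrete, here's a toy sketch (my own illustration, not anyone's real data): fit an exponential to the early portion of a logistic S-curve and the forecast blows up, even though the true curve is about to flatten out.

```python
import numpy as np

# Toy illustration: a logistic (S-curve) looks exponential early on,
# so an exponential fit to the early data wildly over-predicts.
def logistic(t, ceiling=100.0, rate=0.5, midpoint=20.0):
    """True progress curve: exponential-looking at first, flat later."""
    return ceiling / (1.0 + np.exp(-rate * (t - midpoint)))

t_early = np.arange(0, 10)                  # "the data we have so far"
y_early = logistic(t_early)

# Fit y = a * exp(b*t) via linear regression on log(y).
b, log_a = np.polyfit(t_early, np.log(y_early), 1)

t_future = 40.0
predicted = np.exp(log_a + b * t_future)    # ~2 million
actual = logistic(t_future)                 # ~100 (the ceiling)
print(f"exponential fit predicts {predicted:,.0f}; reality delivers {actual:.0f}")
```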

And of course, another huge issue is software, and that is not improving at an exponential rate. Artificial intelligence in particular advances very slowly, and will probably get harder.

Is a singularity possible? Sure. Is it likely in our lifetimes? We really don’t know. Perhaps new breakthroughs will keep things moving at a roughly exponential pace for long enough to get us there. But I do know that just extrapolating current trends into the future is dangerous.

Sure, extrapolating future trends is always problematic, but Ray Kurzweil is also a dude who made mad money doing it.

While I am certain some of his predictions are optimistic, I don't think it's unfair to say that paradigmatic, game-changing inventions are occurring all around us. For instance, he talks about the growth in the number of nodes on the internet, and how in the '80s going from 20k to 80k didn't seem significant, but going from 20m to 80m did. Are we in the 1980s genomically, where genetic advancements are happening and we are aware of them, but the true explosion of advancement hasn't occurred yet?

The merger of biological and non-biological intelligence could prove to be a sticking point. The Law of Accelerating Returns is all well and good when it’s applied to computational power, but biology is flesh and bone. Working up some kind of interface could prove problematic. We’re only beginning to understand how the brain works. Until we have a full understanding of the biochemical balances, as well as the way that neurons support things like memory and emotion, I don’t think that we’re going to be able to merge those intelligences.

That doesn't mean I think artificial intelligence itself won't happen; it very well might. But I don't think we'll be able to merge ours with it for a very, very long time. I also think that, as a result, the artificial intelligence will be more foreign than we might expect. There would still be similarities–after all, the apple doesn't fall far from the tree–but I think the culture shock would be intense, to put it mildly.

Also, ETA that Sam Stone has it exactly right–past performance doesn’t guarantee future results.

Using numbers to compare computer chips from a year ago and a decade ago looks like it “proves” huge leaps in progress.

But people forget that human beings are not made of silicon, and you can’t define life by mathematics.

There's an old saying that compares computers and people: both a computer and a human can play a great game of chess…but the human is the only one who can enjoy winning.

As the previous post said…some things stop improving. Today’s aircraft and cars are not much better than 50 years ago.
And in some fields, no matter how fast a computer we invent, it won't help, because we don't even know where to start asking questions. For example, we still have essentially zero understanding of how our brains work.

And artificial intelligence is still pretty useless at translating human language. The classic example is the biblical phrase "the spirit is willing, but the flesh is weak", which a computer translated as "the wine is good but the meat is spoiled".*
Getting all excited about a singularity seems totally unrealistic to me. As great as the information age is, computers haven't changed the human condition. Electricity, refrigeration, and flush toilets have had a bigger impact.
*(yeah, I know it may be an urban legend, but it’s funny, and a reasonably accurate story to demonstrate that translation software just doesn’t work well)

Don’t get me wrong–I’m talking about transhumanism more than I am about artificial intelligence. The latter seems completely plausible to me, but the former feels more rooted in science fiction ATM.

Computer processing power is not the same as artificial intelligence. A fast computer is no closer to intelligence than a slow computer. So, you can’t use computer processing power estimates to determine the rate of progress in the field of artificial intelligence. Although it is a limiting factor, in that you probably do need that computing power to get the artificial intelligence to function fast enough.

Kurzweil seems to gloss over the fact that the hard work (not to trivialize the hardware side of things, just basing this on AI past history) is trying to figure out how to create the artificial intelligence, not how to build the computer that it will run on.

That's a front on which huge leaps of progress have been made recently; not a year ago, I was immensely wowed by a computer being able to tell which 10x10 pixel character a person is looking at via fMRI; now they're up to recreating arbitrary moving pictures from brain scans (not to mention fitting monkeys with robot arms).

The idea of the Singularity is a very appealing one to me, which means that scepticism is all the more called for. And there’s enough grounds for that – as others have mentioned, the assumption that new paradigms always get introduced as the old ones hit their ceiling is probably the most problematic one, and we know this can’t continue forever, as there are strict upper limits to computation (and likely practical ones orders of magnitude below those).

Another concern that I haven’t seen raised so far is whether it’s actually possible for any intelligence to construct an intelligence ‘greater’ than itself:

First, it’s not really clear (to me, at least) what ‘greater’ means in this context. Any computer can compute anything that is computable at all, as long as you’ve got enough money to keep buying new memory; you can’t construct a computer that can compute somehow ‘more’ than its precursors, it could at best do so faster. That would imply that an artificial intelligence embedded on such an architecture can at best be ‘more intelligent’ in the sense of being able to come to new ideas faster, not such that it would be able to come to ideas fundamentally inaccessible to ‘lower’ intelligences.

Second, if we get to developing an intelligence more advanced than ours, are we even going to be able to recognise it as such? How do we tell a profound statement nobody on Earth is smart enough to understand from gibberish? We’ve got a hard enough time distinguishing one from the other with our fellow humans (cases in point), how’s that gonna work with putatively much more advanced machine intelligences? It could be that the ultimate intelligence resides in the computer that just answers ‘mu’ to all questions, because it has realized some deep and fundamental truth hidden from us – or it could be that the program’s just buggy as hell.

Third, even if we succeed in creating an intelligence greater than ourselves, and even succeed in recognising it as such, who’s to say that intelligence would have any interest in creating an intelligence greater than itself, or to help humanity out in any way? Perhaps it would just figure everything out in the first second after having been switched on, print out ‘you guys are really fucked’, and then switch itself off again.

And fourth, who’s to say that intelligence would have the capability to construct an intelligence greater than itself? Perhaps the problem is simply not algorithmically solvable, or to create an intelligence even greater requires an intelligence we are not intelligent enough to create.
Additionally, I’m not actually sure I think Kurzweil has sufficiently demonstrated his hypothesized exponential development throughout all of history. A lot of things have not grown exponentially – from wiki, for instance, human creativity as measured through patents per person has declined. I think there might be a bit of selection bias at work here – sure, there are exponential trends present, but there are exponential trends damn near everywhere if you’re willing to give your fit a little leeway.

So, in summary, I’m interested, but as of yet unconvinced.

I think people are focusing on individual aspects rather than focusing on the whole.

Kurzweil is clearly not focusing merely on the hardware. This is a guy who has developed things in so many different fields that it's ridiculous, from analog synthesizers to, I believe, biotech now. I would find it hard to believe that he isn't perfectly aware of where we stand on the software side of things.

His extrapolations are a bit different from what people are addressing here. It seems like people focus on his references to Moore’s Law because he puts that out front so that it can be grasped by the lay populace, but it’s hardly central or integral to his thesis.

His view of evolution is one where technology is an outgrowth of biological evolution. His tables start Pre-Cambrian and lead to now.

The basic theory is more specifically about advances facilitating an increase in their own rate of advancement. That there are plateaus is certainly a valid argument. He addresses the bottleneck in Moore's Law in the book by talking about a shift in the materials used. The Moore's Law bottleneck is largely one of materials, and advances in nanotech could reasonably surmount a lot of those problems, just as implementing multiple cores has done. One of his more compelling arguments was about how in 1990 people were predicting it would take about 100 years to sequence the whole genome; fifteen years later, when he wrote the book, it was done.
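
The genome point is really about doubling times. A back-of-the-envelope version (illustrative numbers of mine, not his exact figures): being only 1% done partway through the project looks hopeless to a linear thinker, but at one doubling per year you're only about seven doublings from finished.

```python
import math

# Doubling arithmetic (illustrative numbers, not Kurzweil's exact figures):
# if the fraction of the genome sequenced doubles every year, then
# 1% done is only ~log2(100) years from 100% done.
fraction_done = 0.01
doublings_left = math.log2(1.0 / fraction_done)
print(f"~{doublings_left:.1f} doublings (years) from 1% to finished")

# A linear extrapolation from the same data point predicts 99 more
# units of time for the remaining 99% -- hence the '100 years' estimates.
```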

There are a lot of things going on: cybernetic interfaces already exist, and people are controlling robotic arms by routing nerve impulses into a machine. Intel is working on an interface to allow a person to change the channel on the television via thought. There are people working on mapping specific thoughts in the mind via neural imaging.

Given all this, I think his point about exponential growth is a strong one. Sure, if we look at AI advancement linearly, the problems are a long way out, but a lot of that advancement is going to come from neural imaging being ported to AI: the simulation of a cat's brain in a computer, for instance.

A friend of mine used to use an EEG helmet as a MIDI controller. So really, mind-machine interfaces already exist. Hell, we are communicating via a mind-machine interface: your monitor and your keyboard and your mouse are mind-machine interfaces. So the extrapolation that mind-machine interfaces are a long way off is incorrect; they are existing technology. Since it is already possible to wire the PNS to trigger external functions, the basic capability is here. If they can map specific thoughts via neural imaging, then you can start transmitting more complex commands.

What’s interesting about all of this is the way all of these technologies are improving each other.

Here are the benchmarks for singularity as I see it.

  1. Hardware: processing power exceeding the computational capacity of the brain (a rough capacity estimate follows this list).
  2. Software: AI of sufficient complexity to solve problems autonomously.
  3. Mind-machine interfaces: more advanced input devices.
  4. Neural mapping: understanding precisely which regions of the brain do what, so that they can be mimicked or replaced.
  5. True molecular machines, i.e., proteins that act as assemblers and compilers to build structures.
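
For benchmark 1, the arithmetic behind a brain-capacity figure of the sort Kurzweil uses looks roughly like this (his ballpark numbers, not settled neuroscience):

```python
# Rough brain-capacity estimate in the style of Kurzweil's ballpark
# figures (not settled neuroscience): neurons x connections x firing rate.
neurons = 1e11                 # ~100 billion neurons
connections_per_neuron = 1e3   # ~1,000 connections each
calcs_per_connection = 200     # ~200 calculations per second

brain_ops_per_second = neurons * connections_per_neuron * calcs_per_connection
print(f"~{brain_ops_per_second:.0e} calculations per second")  # ~2e+16
```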

In his book, The Singularity Is Near, he addresses all the criticisms and gives his rebuttals to them in the last hundred or so pages.

He also talks about how man-made disasters (the Cold War, WW2, the Great Depression) didn't seem to slow or stop the progression of the exponential trends in science that he has been tracking for the last 100+ years.

A meteorite could put the brakes on it (for a while), but one isn't expected to hit anytime soon. We are supposed to get a close call in the next 20-60 years, but the odds of that one hitting are extremely low, a fraction of 1%. And within the next few decades we may have working tools that can push asteroids away from Earth. There are already several designs for asteroid defense systems, but they are not being built or funded yet.

In my view, once human intelligence and the g factor (creativity, pattern recognition, innovation, comprehension, working memory, long-term memory) are no longer limited by biology, then everything will take off. That is more or less the singularity point, IMO. I have no idea when that'll happen, but I assume sometime in my lifetime, since I will be in my 80s in the 2060s.

Sam Stone's criticism about low-hanging fruit is good, but Kurzweil addressed it with respect to computing power. He claims exponential curves are really a succession of S-curves: flat growth, then rapid growth, then flat growth again. The exponential growth in transistors is the fifth such curve for computing power, according to Kurzweil, with older technologies (vacuum tubes, relays) making up the earlier curves. He predicts a sixth technology soon, as transistors reach their physical limits.
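
A quick sketch of what "a succession of S-curves" means (toy numbers of my own): each paradigm is a logistic that saturates, but if each new paradigm's ceiling is ten times the last, the envelope across paradigms still grows exponentially.

```python
import numpy as np

# Toy model of a succession of S-curves: each paradigm saturates, but
# successive ceilings grow 10x, so total capability still climbs about
# one order of magnitude per paradigm -- an exponential envelope.
def paradigm(t, start, ceiling, rate=1.0):
    return ceiling / (1.0 + np.exp(-rate * (t - start)))

t = np.linspace(0, 50, 6)  # sample every 10 "years"
total = sum(paradigm(t, start=10 * k + 5, ceiling=10.0 ** k) for k in range(5))

for ti, yi in zip(t, total):
    print(f"t={ti:4.0f}  capability={yi:10.1f}")
```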

A problem I have with his predictions on AI and transhumanism is that I do not know if we even know enough about the brain to determine whether we can model it with a computer of X capacity. I do know there have been computer models of various parts of human and animal brains, however.

Also, a criticism of Kurzweil is that he only predicts where our capacities will be in the future. He predicts how much processing power, RAM, drive capacity, base-pair reading, etc., we will have. But extrapolating from that to determine what we will do with it is hard. The PlayStation 3 has the processing power the most powerful supercomputer on Earth had back in roughly 1993. Even if you could have predicted that a gaming system would have the processing power of the world's fastest supercomputer in 15 years, that doesn't mean you'd know what it would be used for. We probably use more computing power for gaming graphics in 2009 than was used in all medical and scientific research combined in 1998.
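
Rough numbers behind the PS3 comparison (ballpark figures I'm supplying for illustration, not the poster's sources): the #1 machine on the first TOP500 list in June 1993 benchmarked around 60 GFLOPS, while the PS3's Cell processor is commonly quoted in the 150-200 GFLOPS range for single precision.

```python
# Ballpark comparison (my rough figures, for illustration only):
top500_number1_1993_gflops = 60    # CM-5 on the June 1993 list, a multi-million-dollar machine
ps3_cell_2006_gflops = 180         # commonly quoted single-precision figure

ratio = ps3_cell_2006_gflops / top500_number1_1993_gflops
print(f"a 2006 game console: ~{ratio:.0f}x the 1993 #1 supercomputer")
```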

So my point is I agree with Kurzweil’s trend lines. However I do not know if you can extrapolate that and say ‘we will use that scientific capacity to achieve X’.

He has written older books; check out The Age of Spiritual Machines, which made predictions for 1999, 2009, 2019, and 2029.

Some of his predictions for 2009 were personal computers embedded in rings, translation software being everywhere, and people using speech recognition rather than typing. For some of these we have the technology in 2009, but nobody really wants it. We have speech recognition software, and a person probably could get a computer in a ring, but most people aren't interested.

Other predictions that happened or are happening include using solid-state memory more and more, wireless communication, and most phones being wireless.

So, looking at his predictions for 2009, many of them are happening, but they are in the early stages (telemedicine, translation software, solid-state data storage, self-driving cars).

So Kurzweil's predictions don't always happen, either because there is no incentive to make them happen (who needs a computer in a ring?) or because they are in the early stages when he feels they should be mature.

Half Man Half Wit: the number of patents is a completely useless and absurd measure of human creativity. Also, you have to understand what Kurzweil means by greater intelligences. Basically, if we can create an intelligence that is comparable to a human mind but operates orders of magnitude more quickly, then that's an intelligence greater than a human's.

Wesley Clark: he actually addresses your criticism at the very beginning of the book. He says that one of the hardest parts of being an inventor is predicting the timeliness of an invention. Your argument is about social adoption of those advances, which is different from the point he is making about the advances themselves. He kept that particular criticism in mind and made a point of making it explicit early on.

Said in dramatic Patrick Stewart voice:

I am Lotus 123…bow before me!

IIRC, about half of all patents applied for are granted…which tells me most patents are probably BS, and that the number of patents means little as a measure of anything other than a bunch of people with plenty of time and a little bit of money.

It seems to me that predictions like this fail to take into account the possibility that there are real-world speed limits. We might, for example, point out that for thousands of years, we could propel our vehicles as fast as a horse could run. Then came engines, then flight, and then spaceflight. So if you were to graph “How fast can we go” it might well seem that the curve is trending ever-upward.

However, it may well be that the universe imposes a limit of 186,000 miles per second or so. Reaching that point will be extraordinary, but it’s a hard stop after that.

Another example: he uses the story of the grains of rice on a chessboard to illustrate how dramatically exponential growth works. I’d say it also illustrates something else: if we’re talking about real-world grains of rice, it’s impossible for the planet to produce enough rice paddies to achieve 2[sup]64[/sup] grains in one harvest. That would be an example of a hard-line limit imposed by the laws of physics.
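
Putting numbers on the chessboard story (grain weight and harvest figures are my rough estimates): the full board comes to roughly nine hundred years of today's global rice harvests.

```python
# Worked numbers for the rice-on-a-chessboard story (grain weight and
# harvest figures are rough estimates, for illustration only).
grains = 2**64 - 1                 # 1 grain doubled across 64 squares
grams_per_grain = 0.025            # ~25 mg per grain
total_tonnes = grains * grams_per_grain / 1e6   # 1 tonne = 1e6 g

world_harvest_tonnes_per_year = 5e8  # ~500 million tonnes per year
years = total_tonnes / world_harvest_tonnes_per_year
print(f"total: ~{total_tonnes:.1e} tonnes = ~{years:.0f} years of world harvests")
```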

I think it sounds like the level of analysis you’d get from a bunch of people smoking a bong: “cool, it’s like we could be an electron circling a nucleus in another Universe”.

First, I don't think it's possible to quantify progress, so that makes any statement about progress becoming incrementally faster impossible to justify. Second, at some point there is always a diminishing return. Raw computing speed is 10 times faster than it was a few years ago, but that doesn't mean we can do things on the computer 10x faster; the apparent performance of Word or web browsing is just a little better. The speed of light will be the limiting factor for how quickly we can get back answers from Google, no matter how fast the servers become. Soon virtually everyone in the world will be within easy reach of the web, so there will be no more exponential increases in connectivity.
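
The speed-of-light point is easy to put numbers on (round figures of mine): light in optical fiber travels at roughly two-thirds of c, so a query that has to cross an ocean has a latency floor no server upgrade can remove.

```python
# Physics-imposed latency floor (round figures, for illustration):
c_fiber_m_per_s = 2e8        # light in fiber: ~2/3 the vacuum speed of light
one_way_m = 10_000_000       # ~10,000 km, a rough cross-ocean query path

round_trip_s = 2 * one_way_m / c_fiber_m_per_s
print(f"best-case round trip: {round_trip_s * 1000:.0f} ms")  # ~100 ms
```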

My grandfather was born before the automobile, the airplane, or the computer and lived to see man land on the moon. I think that was just as big a change as what I have seen in my lifetime.

Because we are so technologically advanced, a new discovery needs to be even greater to make a difference. It may have taken thousands of years for written language to develop, but the changes that resulted from it were enormous. It only took a hundred years to get from Babbage to modern computing, but that was not as big a change in the world as written language.

I'm not a Luddite; I'm just wary of people predicting the future. It's easy to do, because no one can know whether you're correct until the books are sold and the lectures performed.

Here's some ice water for you. I went to a conference a few years back where the keynote speaker was Alan Kay. His speech was basically the opposite of Kurzweil's view. If you ask Kurzweil, science drags humanity along with it as it rockets humankind forward. If you ask Kay, humanity drags science down into mediocrity and half-assedness.

I spent an hour or so chatting with Kay after his speech, and we talked about a bunch of stuff. He said that if he ever wants to write a book, he'll call it "My Time at Xerox: The Trillions They Didn't Want." They had graphical user interfaces, real-time videoconferencing, object-oriented programming, all kinds of stuff that didn't go mainstream for another thirty years or more, and some we still don't have. His main thesis was that you can envision a future which is great, but the devil is in the details, because vision is singular but humanity is plural. When you translate vision to humanity, you lose the vision and get some sort of halfway implementation which doesn't really live up to the potential.

Will SOME people get machine/brain interfaces? Probably, to a limited extent, but will they be ubiquitous? I don't expect it in my lifetime. There was a time, following the eradication of smallpox, when we felt we could dominate the natural world. Now we're seeing that the flu may be able to wipe us out even today. The rate of progress of human knowledge far outpaces the rate of progress of humanity. We need those visionaries on the bleeding edge; they do great things. But don't mistake their vision for reality.

Enjoy,
Steven

I've read some of Kurzweil's material, and my main issue is that he uses repetition in what seems to me a misguided attempt to sound novel and/or intelligent. He is particularly fond of the word "exponential." He has very few data points to back up his assertion; and even if he had more, his conclusion seems eminently obvious to me. Yes, I realize the dramatic strides computing power has made in the last three decades. Why does he parrot this so enthusiastically, as if he's the only one noticing the phenomenon?

Another gripe of mine is his claim that the exponential growth is itself growing exponentially. By definition, an exponential function grows exponentially; that is to say, d/dx(e^x) = e^x. So why is this the extraordinary claim he makes it out to be?

Right, and there was also a trend in the past ten years or so where actuaries told their companies that they needed to 'have more patents'. As a result, all kinds of things were patented, which led to absurdly broad patents like 'wireless networking' and to lawsuits where someone claimed to own any and all wireless networking platforms.

I’d be willing to bet that the vast majority of patents these days are from corporations looking to beef up their portfolio to make the stocks more attractive. “Look we have 4000 Patents! More than any company in our industry!”

This is a pedestrian mistake, and Ray Kurzweil is not making it.

Sure, and that’s why it’s likely that the inventor of chess was beheaded.

But look at Wesley Clark's example above; Kurzweil addresses the plateau issue regarding hard physical limits.