Ray Kurzweil's 'Law of Accelerating Returns'

Kurzweil’s argument is that computing power and biological fields like brain scanning both grow at exponential rates.

So according to Kurzweil we should have a detailed understanding of the brain (or at the very least a detailed understanding of higher cognition) by the 2020s. He also feels it takes 2*10^16 calculations per second to mimic the brain. I can’t remember exactly how he got that calculation (I think it was 100 billion neurons x 1000 connections each x 200 firings per second). Right now a PS3 can supposedly do 10^11 cps.
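For what it's worth, that arithmetic checks out. A quick sanity check (the three inputs are my reconstruction of his ballpark figures, not measured values):

```python
# Rough reconstruction of Kurzweil's brain-capacity estimate.
# All three inputs are ballpark figures, not measured values.
neurons = 100e9          # ~10^11 neurons
connections = 1000       # ~10^3 connections per neuron
firings_per_sec = 200    # ~200 firings per second

brain_cps = neurons * connections * firings_per_sec
print(f"{brain_cps:.0e}")  # 2e+16 calculations per second

ps3_cps = 1e11           # the PS3 figure quoted above
print(f"shortfall: {brain_cps / ps3_cps:.0f}x")  # 200000x
```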

So Kurzweil’s argument is that the neuroscience to comprehend intelligence and the computing power necessary to simulate the brain should both appear in the late 2020s, because both follow exponential trends. Once they appear and we have software that can provide the tools of cognition, every further increase in computing speed translates into better and faster artificial cognition.

Genetic Algorithms also have the ability to solve problems, but they use the tools of natural selection instead of cognition. And we already have the tools of GA. If strong AI can mimic human cognition and GA can mimic evolution, then both should be able to provide us with solutions to problems. However my understanding of GA is that even though it works, it really isn’t groundbreaking.
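For anyone who hasn't seen one, a toy GA is just a mutate-and-select loop. This sketch evolves a string toward a target; the target and parameters are arbitrary illustrations, not any particular library or benchmark:

```python
import random

# Toy genetic algorithm: evolve a random string toward a target
# by mutation plus selection, the same loop real GAs elaborate on.
TARGET = "problem solved"
CHARS = "abcdefghijklmnopqrstuvwxyz "

def fitness(s):
    # Number of characters matching the target.
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s, rate=0.05):
    # Randomly replace each character with a small probability.
    return "".join(random.choice(CHARS) if random.random() < rate else c
                   for c in s)

random.seed(0)
population = ["".join(random.choice(CHARS) for _ in TARGET)
              for _ in range(100)]

for generation in range(1000):
    best = max(population, key=fitness)
    if best == TARGET:
        break
    # Keep the best candidate (elitism) plus 99 mutated copies of it.
    population = [best] + [mutate(best) for _ in range(99)]

print(generation, best)
```

It works, which is the point, but nothing about it looks like cognition: it is blind variation plus selection, nothing more.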

So my point is that there are at least 2 channels of creative problem solving (evolution via natural selection vs cognition based on pattern recognition, working memory, crystallized intelligence, etc). And many people (myself included) feel that after cognition becomes possible via computers it will change everything. But we already have problem solving via Genetic Algorithms and it hasn’t changed much.

People often underestimate how fast exponentials grow, but you also have to be careful not to overestimate them. I would agree that technological progress is, in general, exponential, but one essential feature of exponential growth is that it never has singularities. Yes, we’ve made advances in technology in the past century that would be completely incomprehensible to anyone from a previous century, and so one might be forgiven for naively thinking that we’re nearing the end of all possible advancement. But that statement has always been true, for any century. Every generation in history has always thought that their time was special, and it’s never been true. I see no reason to expect that it’ll be true this time, either.
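To make the math point concrete: an exponential is finite at every finite time, so a literal "singularity" requires faster-than-exponential (e.g. hyperbolic) growth, which diverges at a finite critical time. A quick numerical illustration (growth rates here are arbitrary):

```python
import math

# Exponential growth: finite at every finite time t.
def exponential(t, rate=1.0):
    return math.exp(rate * t)

# Hyperbolic growth: blows up at a finite critical time t_c.
def hyperbolic(t, t_c=10.0):
    return 1.0 / (t_c - t)

print(exponential(9.99))   # large, but perfectly finite
print(hyperbolic(9.99))    # ~100, and diverging as t -> 10
# exponential(t) is defined for every t; hyperbolic(10.0) raises
# ZeroDivisionError -- that's what a genuine singularity looks like.
```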

My point is that neither of those tell us anything about why the particular structure of our brain results in the higher functions that we have.

We could have a perfect schematic of a bee brain today and we would not really be closer to understanding it. We are making progress, but it seems that Kurzweil is just taking a stab at the AI thing without anything to back it up.

Yes, but does Kurzweil have any data to support this?

Computing power does not equal understanding our brain functions.

Whether his calculation is correct, or if he’s off by an order of magnitude or 2, I would not disagree that we will get there regarding computing power.

But having the computer doesn’t mean you have the software.

That analogy doesn’t work at all. For one thing, humans, unlike evolution, are actually trying. Evolution isn’t going to “try” to maximize computational ability or anything else. If there’s no immediate Darwinian advantage for a species in evolving bigger brains or walking upright, it won’t, regardless of whether the laws of physics would allow it to do so.

The analogy also fails because you are portraying as failures things that were not. It’s not a “failure” that, say, spider monkeys or chimps never developed human intelligence; evolutionary success is not a synonym for “similar to humans”. We aren’t any “more evolved” than they are; we just evolved in a different direction. I’d make a poor spider monkey.

Except the human body doesn’t even come close to approaching any of those limits. All it has to do is outperform the human mind. So you are setting arbitrary limits that are way beyond the threshold necessary to create an AI.

I don’t think that’s exactly how evolution works. We are an individual species in an ecosystem. The ecosystem evolved with us, we are not separate and distinct from it. If all the other species evolved at the level of the human being then we’d not be able to survive unless we ate other sentient life-forms.

Well his book is a lot more robust than just that article. I’d recommend it if you would like to see the argument taken further. Wesley Clark seems to have a better grasp of the math behind it than I do. Though I don’t know what he meant by Genetic Algorithms not having changed anything. I don’t ever understand what people’s standards for change are. Change is a constant, and yet we seem to believe implicitly ‘nothing ever changes’, when the reality is ‘everything always changes’. So I am not exactly sure where he’s going with that. Kurzweil makes a point in chapter two about the nature of order, and says that something cannot be considered ‘ordered’ unless there is some kind of context, specifically, that it leads toward a goal. Order is only relevant in terms of the goal set before it. So tools are what they are, and they can do what they can do.

Still, a true human brain simulation would be faster, whether it ran with faster rule-sets in a virtual environment or just used electrical instead of electro-chemical signaling in the white matter, at least in the PNS. I’m not sure you could do electrical in the brain itself; the conversion from electro-chemical to electrical might take away any benefit you receive. But if you do the PNS electrically, then you shave off certain sorts of reaction times. It depends on your goal ultimately. If you remove certain unnecessary functions from an AI, you can improve its efficiency; if you have the full-on brain simulation and merely accelerate its thought processes because you have the computing power to do it, then it’s smarter than a human being. Same basic structure, total recall, and faster computation = smarter brain.

Here’s the problem: how do you get that human brain simulation?

Approach #1 - Exact Copy of Brain
Somehow scan a brain and determine all of the details.
Location and type of every neuron.
Every connection between neurons.
Location and type of glial cells (10x as many as neurons).
Quantity and location of all of the different neurotransmitters (60+).
Quantity and location of all of the other chemicals.
This is a ridiculously difficult task.

Approach #2 - Copy of neurons and connections only
Probably not even worth discussing; the probability that the other cells and chemicals don’t contribute to function is around zero.

Approach #3 - Duplicate brain function without duplicating brain
This we’ve been trying to do for at least 50 years. It won’t magically happen just because our computers get more powerful, it will only happen after lots and lots and lots of effort by many people.

This is not actually true. Ray Kurzweil has gotten rich because he sold a few good ideas. He has made countless wrong and often ridiculous predictions about the future. These predictions very well might have cost him, but certainly less than he has made in his other somewhat interesting but hardly earth-shattering ventures.

The guy is a cryonics and anti-aging crank, for starters. His considerable skills in a few areas appear to be quite compartmentalized.

I hate to be a downer because I’d really like Kurzweil to be right, but I don’t think he is.

In a number of areas, progress has not only slowed dramatically, but stopped. And in some cases, even reversed.

Take human spaceflight. NASA is about to lose the ability for manned spaceflight for the first time since the 1960s. Its new rocket may have serious problems. Moreover, NASA’s human capital is aging rapidly, and its funding is at serious risk. The U.S. may find itself with a hollow shell of a space program very soon.

In theoretical physics, there has been very little fundamental progress in the past 20-30 years. Hopefully the Large Hadron Collider will help us past the bottleneck, but there’s no guarantee of that.

The fastest airplanes ever built are now about 50 years old. We’ve lost our ability to transport large quantities of people supersonically.

The rise in entitlement spending and the aging of the population threatens to turn our gaze inward and focus on comfort and health care rather than exploration and innovation. I’m afraid that the pioneering spirit and optimism of the past is turning into the risk avoidance and demands for comfort and protection of the future.

Average computing speeds, as measured by what people actually buy, are not going up. The biggest growth today is in netbooks and smart phones - which some people are substituting for a home computer. More and more people are giving up power for convenience and comfort. For example, the move to cloud computing and browser-based applications is pushing application performance back by a decade or more. We’ve got incredibly fast hard drives and SSDs, but people are choosing to move their data around on slow internet pipes. It’s more convenient.

Audio quality is regressing. After huge improvements from 78 rpm records to 33 1/3 rpm LPs, from 8-track to cassette and then finally to CD, audio improved constantly when I was younger. But the new hi-res audio formats, SACD and DVD-A, both flopped in the marketplace. And people are very rapidly moving away from even CD quality and back to low-quality MP3s or internet radio, because it’s convenient.

And while TV quality is still improving, more and more people are bailing on TV in favor of low-quality online video. After an initial rush to improve digital cameras with better lenses and more megapixels, the big action now is in YouTube-quality pocket cameras and the lousy cameras built into phones. I’ll bet the average resolution of digital pictures has actually declined over the past couple of years.

Finally, the rise in technological power may be self-limiting, as it may increase our ability to destroy each other faster than it creates an ability to improve our lives. If that becomes a widespread perception, we may enter a new era of Ludditism.

Maybe it’s the long day I had today, but I’m not feeling particularly optimistic about the future at the moment.

It’s Kurzweil’s own argument when he includes the evolutionary path we’ve taken up to this point into his ‘waiting for the singularity’ graph. Point being, even though other lineages followed the graph just the same way ours did, they did not continue to do so, meaning that following the graph does not imply continuing to follow it, which is pretty much the lynchpin of Kurzweil’s argumentation.

The word ‘fail’ was exclusively directed at Kurzweil’s argumentation. I said nothing about a species being ‘more’ or ‘less’ evolved; in fact, my argument was essentially that other species, having followed much the same evolutionary trajectory as ours, nevertheless didn’t manifest a propensity for cultural/technological evolution, as it ought to have been the case if Kurzweil’s argumentation were sound.

Not the point. My argument is that since there exists a limit beyond which it’s impossible to follow the curve, it’s obviously not the case that having followed the curve to a certain point implies continuing to follow it.

True, but that runs counter to Kurzweil’s argumentation (unless you wish to assert that evolution wisely provided us with a food source, as we’re the chosen species to carry on along the curve into the future) – the graph I’ve linked to above could be identically drawn up for, to pick an example, chimps, for the first two billion years or so. Thus, if Kurzweil’s argumentation holds some generality, one should expect to be able to apply it to chimps (or rather, their ancestors) some five million years ago, and predict that they should continue down the same road humans have. Only, of course, they didn’t, and neither did any other species besides ourselves; so his argumentation fails in the overwhelming majority of cases. Hell, even among our own species, there are some that got ‘left behind the curve’ at a cultural stone-age level, that never made it past rudimentary agriculture, and are still waiting for their ‘next event’.

So, that progress in the past has followed an exponential growth does not imply that it will do so in the future, as is shown by countless examples where that prediction already failed, and by it necessarily failing when certain hard limits are hit. There’s thus no way to reliably infer our future development from observation of the past in this manner.
Another thing I missed earlier is his Aubrey de Grey-esque assertion that ‘we will be adding more than a year [to our natural lifespan] every year within ten years’. This is, again, based on extrapolating the past exponential growth of the average lifespan into the future (from a grand total of five data points, but nevermind that). However, he completely misses the fact that our average lifespan has grown simply by virtue of more people reaching old age now than did in the past – in other words, by cutting down on premature mortality. As a whole, we do not live longer, it’s (to a large degree) just that fewer people die early; maximum life span has remained nearly the same for most of recorded history, though I understand there has been some degree of mortality reduction at high ages, too.

Bolding mine.

Raft, your whole post on AI was really good. But it’s not even WILL in my opinion, it’s MIGHT.

There is no guarantee we will ever figure it out. Certainly not one that it will happen “soon”. That’s right up there with predicting when something not thought of before will be invented…
Another point. Let’s say we have a simulation of a brain (or something we think is capable of AI), either in software or some sorta fancy hardware/wetware or whatever. We pretty much have it down but we have to tweak a setting or two. With some fiddling you might get it to work.

But what if there are a bunch of settings to tweak? If there are enough of them and enough uncertainty about what each one should be, there could EASILY be enough combinations out there that turning the whole mass of the Earth into a gazillion supercomputers and having a billion years to fiddle with the settings wouldn’t be enough to figure out the right combination.
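To put rough numbers on that worry (every figure below is invented purely for illustration): even a modest number of independent settings swamps any conceivable brute-force search.

```python
# Hypothetical brute-force search over brain-simulation "settings".
# Every figure here is an illustrative assumption, not a real estimate.
settings = 100            # independent tunable parameters
values_each = 10          # plausible values per parameter
combinations = values_each ** settings   # 10^100 possible configurations

ops_per_sec = 1e50        # absurdly generous planet-sized computer
seconds = 1e9 * 3.15e7    # a billion years, in seconds
trials = ops_per_sec * seconds           # ~3e66 configurations tried

print(combinations > trials)  # True -- and it's not even close
```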

Also, what if you have your human-level computer brain, and it’s workable? It’s not like you can turn it on and immediately tell it works. What if it still behaves like a human brain and requires years of input, stimulation from the outside world, other brains to play with, the ability to manipulate its environment, and the need to be taught by real humans? That would make any progress in that field extremely slow.

I think the guy has it about a third right. It appears from back-of-the-envelope calculations that reasonable progress in computing abilities will just maybe get us the **bare minimum** required to possibly create some sorta humanish AI in the not too distant future.

How much MORE than that we might actually need we don’t know. If it’s a lot more, it might require computer technology breakthroughs that might not come for a long time, if ever for that matter. Or at the very least kill the “soon” part.

And again, the final third, we might not ever figure out exactly how to do it. And even if we did, it might happen much further down the road than “soon”.

Sam Stone But the space race isn’t relevant to the argument. How consumers use netbooks isn’t relevant to the argument, at least not outside how budgets affect R&D. Technological progress hasn’t slowed. You’re not rebutting the argument, you’re simply dismissing it, even though your problem has been addressed by the S-curves. We are on a plateau with space tech. We aren’t losing the ability to send a man into space, we’re just deciding not to, and NASA isn’t the only game in town. Based on his tables, we’re at a point where corporate space tech is proceeding and being built; it just hasn’t passed the elbow of the curve and achieved explosive growth.

His theory is a theory of the advancement of technology as a whole; looking at one individual part in order to find some example that doesn’t fit misses the point entirely. He’s talking about a standard rate of progress that occurs, and he’s also talking about the pinnacle of that progress, not the rate of adoption by the masses. He recognizes the limits of Moore’s Law and talks about extending chips in three dimensions instead of merely two, which is essentially what multiple cores do. So the hard limits presented by Moore’s Law are not essential to the advancement of computing power, and netbooks still represent an advancement of the computer as people work to build a better netbook.

Sometime in the next thirty years we might have China send someone to space, or Virgin Galactic will build a hotel, or whatever. Maybe it will take 50 years, but that’s all beside the point. You’re looking at the particulars at the expense of the whole, and missing the point entirely. In the book it’s actually well demonstrated with graphs tracking the progress of dozens of technologies, from the number of transistors on a chip, to the exponential growth in MIPS, to nodes on the internet, to the ability to sequence more of the genome, to the saturation of the telephone since the 19th century, on to the adoption of cell phones.
You’re simply missing the big picture with the tired ‘but the space race’ or ‘but the flying car’ straw men.

Even if Kurzweil is wrong on the particulars, his theory overall bears looking into.

Except that the growth follows the whole of life’s evolution on Earth, and you’re judging it by individual aspects. So you’re judging the theory by criteria that the theory doesn’t propose. It doesn’t propose that any particular organism or technology will evolve indefinitely, in fact he says the opposite. This simply isn’t even a counter-argument to his thesis.

But that’s irrelevant. It’s simply not important, and it’s not applicable to this discussion.

But his argument is specifically about paradigmatic changing technologies, such as the PC, the Cell Phone and the Internet. It’s not about individual techs, so you are presenting us with a non-sequitur.

It doesn’t run counter to Kurzweil’s argumentation at all. It runs counter to the mistake you are making about his argument. His argument isn’t about individual species or individual technologies. This is simply a red herring that is not even a rebuttal. You’re arguing past Kurzweil, not against him as his argument is not about individual species.

Well except that certain advancements CAN be seen. We aren’t talking about a blank slate here, we are talking about advances in technology where we can see the path we can follow to improve on technologies, and have a reasonable expectation, based on the theories that underpin those technologies, that they will work. Everyone keeps talking about the limits of Moore’s Law, in and of itself a prediction of the future. Why are they comfortable making this prediction? Well, because they understand the limitations of basic physics. The same is true for predicting advances to technology. There is no reasonable basis to expect that after decoding the human genome, the HIV genome or the SARS genome, we won’t be able to decode the genomes of every other creature on Earth. So we can make a reasonable prediction that this vast body of knowledge, which exists and has yet to be adequately tapped, is available for tapping.

Well I am not going to speculate on Aubrey de Grey’s work. That’s kind of fruitless. It’s not a valuable example as we don’t know whether or not he’ll be successful and as such cannot use him as a counter-example to Kurzweil’s argument.

billfish678 and RaftPeople The problem with your view on this is that computer processors are ALREADY faster than the human mind. So at this point it’s a matter of architecture, and the mapping of the human mind is a field that is moving at a breakneck pace IN THE PRESENT. So no, we aren’t really talking about predicting a technology that hasn’t been invented yet, we are talking about the convergence of technologies that already do exist.

So are cars and rockets. And cranes and boats are bigger and stronger. And airplanes can fly, which we can’t do. And those submarines? Let me tell ya!

But I don’t think they are getting sentient anytime soon either.

IMO, until we have a firm understanding of how the human brain works, it’s just a bunch of hand waving and wishful thinking that is only moderately more reasonable than ultra-cheap vacations to Mars or warp drives.

I think you’re misunderstanding my argument. Just think back to when chimp and human lineages diverged, however you may want to define that point. Back then, a proto-chimp Kurzweil could have made essentially the same argument human Kurzweil is making right now, and be completely wrong. This much is simple. So, now you’re asserting that the overall trend holds as long as somebody pulls on through. However, here the existence of hard limits comes into play: it’s not the case that somebody always pulls through, as there are points beyond which progress is impossible. These two points combined mean that since it’s not the case that the exponential growth holds at all possible points, it’s fallacious to expect it to hold at any specific point, and impossible to predict whether or not it will hold in the future.

And besides, how would nature/evolution/whatever you want to call the driving process ensure that there always is somebody to continue the exponential growth? If it petered out for the chimps, it could have just as easily petered out for us as well, and it still can, unless you want to argue that the absence of a suitable population that follows the exponential creates a sort of ‘vacuum’ which somewhat teleologically entices some population to fill in the void, which seems highly suspect to me. Besides, in remote enough areas, such voids ought to have existed effectively over the course of history (think remote islands, continents before man’s arrival, or heck, the bottom of the ocean), without any local species apparently feeling compelled to take up the vanguard and continue the exponential.

I’m not sure I understand what you’re getting at here. What I’ve presented is a limit to information processing in general, and if Kurzweil is to be believed, at some point in the not too far future, that’s pretty much what the world’s gonna be made of (indeed, one could make an argument that ‘information processing’ is all that’s going on right now).

Well, it is about our species, and formulated from a deeply anthropocentric perspective – from the point of view of a cockroach, none of our ‘revolutions’ matter all that much after all; and indeed, from its viewpoint, we’re an evolutionary newcomer who’s yet to earn his spurs, and if the fate of our siblings in the Homo genus is any indication (they’re all gone), our particular survival strategy may not be all the hotness it seems to us.

I’m not disputing that technology advances, or even that such advancements as will be made during our lifetime probably would seem near miraculous if presented to us now; but that’s a trivial prediction. What I am disputing is the validity of extrapolating from the past to the future and expect to come to reliable, meaningful conclusions, especially as optimistic ones as Kurzweil’s. No exponential lasts forever, there’s sooner or later always a counteraction leading to a dampening.

Now, it might be that he’s right still, and if he is, I’ll be the first to plug myself into the hive-mind mainframe or transform myself into an ever-expanding consciousness composed of nano-scale von Neumann machines, but that doesn’t mean his conclusions aren’t premature.

I didn’t say anything about de Grey’s work other than that it is even exceeded in optimism by Kurzweil’s predictions. I responded to claims made by Kurzweil in the text you linked to in your OP (in the very last paragraph), where he effectively promises us immortality ten years from now. Wait, scratch that – the essay appears to be from 2001, so I guess we should be almost there by now.

No, they aren’t even close.

Who told you that?

Computer architecture and mapping of the human mind are 2 completely different things.

Computer Architecture:
Our current computer architecture is a poor fit for simulating the brain. But just because it’s a poor fit doesn’t mean it won’t work; it just takes more processing power to get around the inefficiencies. Some researchers have used FPGAs to build something that more naturally operates the way our brain does, but that’s been a limited effort so far.
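To make the architecture mismatch concrete: on a conventional machine, even a single drastically simplified neuron gets simulated by stepping an equation in a serial loop, and the brain runs billions of them in parallel. A leaky integrate-and-fire sketch, with textbook-style placeholder parameters:

```python
# Leaky integrate-and-fire neuron, time-stepped serially on a CPU.
# The brain runs billions of these in parallel "for free"; a
# conventional machine has to step each one explicitly.
# Parameter values are illustrative placeholders.
dt = 0.001            # timestep, seconds
tau = 0.02            # membrane time constant, seconds
v_rest, v_thresh, v_reset = -70.0, -55.0, -75.0  # millivolts
input_current = 20.0  # constant drive, arbitrary units

v = v_rest
spikes = 0
for step in range(1000):  # simulate one second of activity
    # Membrane potential decays toward rest while integrating input.
    v += dt / tau * (v_rest - v + input_current)
    if v >= v_thresh:     # threshold crossed: emit a spike, reset
        spikes += 1
        v = v_reset

print(spikes)  # a few dozen spikes over one simulated second
```

Multiply that loop by ~10^11 neurons and their connections, and the overhead of faking parallelism on serial hardware is the whole story.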

Mapping the Human Mind:
Are you referring to mapping the physical location and makeup of the various cells and chemicals?
Or are you referring to mapping the functions of the brain and how it operates on a logical level?
“breakneck pace”:
Researchers are making regular progress regarding the physical activity that happens under certain conditions. But that doesn’t say anything about whether we are .0001% of the way there or 90%. Regular progress doesn’t help you conclude that we are “close”.

But how does this help us build a simulation of a working brain? (see my previous post)

Are you referring to simulation approach #1 from my previous post? If so, what technology exists that will allow us to identify the location of each and every important molecule in the brain?

If you are referring to approach #3 then maybe you can tell us which technology will allow us to duplicate the brain function without duplicating the brain?

My neurology class where I learned the difference between electro-chemical conduction and electrical conduction and learned about the conduction throughput of neurons.

The brain doesn’t actually process quickly, it’s the parallel processing that makes the brain the powerhouse that it is.

I have to leave, I’ll try and get to the rest of your post. I want you to know that you have had some of the most interesting responses, and I’m not blowing you off but I have a rental reservation and need to get to Grandma’s to gorge on Turkey. :wink:

The fields Kurzweil feels will create the singularity are IT, robotics, biotechnology and nanotechnology.

The aging population is actually a massive market for advances in robotics, biotech and nanotech.

There is a huge market for robotics due to higher productivity (a bipedal robot that costs $5000 and can do the work of a human laborer would be a massive productivity boost, and the first company that gets them would have a huge advantage over competitors).

Also as more and more people globally have higher standards of living, advances in biotechnology and nanotechnology to address chronic diseases (cancer, mental illness, CVD, osteoarthritis) will go up.

I don’t know if IT will grow the way Kurzweil predicts, because there doesn’t seem to be a consumer-driven market for exponential growth in IT. I personally have no use for 500GB MP3 players, 8-core processors, 20Mbps broadband or 12MP cameras. So I’m not buying any. And I think most consumers feel the same way. So investment in exponential growth in these areas may hit a wall unless new applications that demand and require that level of power come up. However, right now the only application people can come up with is ‘higher resolution media’, which doesn’t appeal to me. People tried to talk me into Blu-ray because of higher resolution media. A 12MP camera has higher-resolution images (when you blow them up to 20x30). A 20Mbps internet connection can give me HDTV streaming. However, a slightly better picture is not appealing enough to invest the money. Especially when you consider things like the fact that I do not blow up photos to 20x30, or that an upconverted DVD is almost as good as a Blu-ray, or that a 720p image is not much different than a 480p image from what I’ve seen, at least not enough to justify it.

So unless the IT community can come up with applications other than ‘higher resolution media’ as a justification for the exponential growth in processors, memory, resolution, etc I don’t see people buying it. Like I said earlier, the PS3 has far more processing power than the Wii, but the Wii is more fun. So people bought that instead.

However there is a huge market for nanotechnology, robotics and biotechnology advances.

As we enter an age where more and more people are 60+ we are going to see a labor shortage of able bodied workers and a bigger chunk of GDP going to the elderly. So biotech that can cure, prevent and reverse chronic disease will become very important as a way to try to keep health care costs manageable. People will still live longer though, so it might just even out. Robotics to provide elder care will become more important. Nanotech to improve GDP, productivity and health in the elderly will become important.

Right now Japan is investing heavily in robotics because they know in 20 years the elderly population will make up a huge % of the nation (maybe 1/3 or more), and having all the young able bodied people employed in eldercare will destroy the economy. So they want robots to do eldercare so the youth can work in other fields.

So my point is that right now and in the future there is a huge consumer driven market for exponential advances in robotics, nanotechnology and biotechnology. However I don’t know if there is a market for exponential advances in IT, unless new applications other than ‘higher resolution’ come up.

It’s true, individual components of the brain do not process very quickly. But the aggregate computing power of 1 human brain is still faster than the fastest supercomputer.

Even when the supercomputers reach a level similar to our current estimates of brain processing power (which are pretty much just WAGs at this point), you still have to factor in whether the supercomputer can efficiently simulate the human brain. If 90% of that processing power is wasted trying to move data from one CPU to another so that the other can complete its computation (there will definitely be waste in this area), then you need an even bigger supercomputer to make up for those inefficiencies.
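The overhead arithmetic is worth spelling out, using the 2x10^16 cps estimate quoted earlier and the hypothetical 90% waste figure:

```python
# If only 10% of a supercomputer's raw throughput does useful
# simulation work (the rest lost shuttling data between CPUs),
# the raw requirement is 10x the brain-equivalent estimate.
brain_cps = 2e16         # Kurzweil's estimate, quoted earlier
useful_fraction = 0.10   # hypothetical: 90% lost to communication

raw_cps_needed = brain_cps / useful_fraction
print(f"{raw_cps_needed:.0e}")  # 2e+17
```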

In addition, simulating the flow of chemicals and electrical fields in the brain will take a lot of processing power, something the brain just does. These types of things need to be considered when determining whether a computer is capable of simulating a brain as fast or faster than the brain.

Possibly a more efficient method is to not try to simulate the brain but rather create AI using our own methods. Unfortunately this is no simple task either.

Good point.

I’d go so far as to say that if you had to do such things at a fairly high fidelity for your simulation to function properly, you could pretty much kiss away the idea of simulating a human brain for all practical purposes.

I agree. I think a more likely scenario is we slog through the process of developing various functions in whatever manner we come up with and continue building on that, eventually we may (as you say) get there. I like to think we will, but there are certainly a lot of unknowns.