Ray Kurzweil's 'Law of Accelerating Returns'

Most of the brain is devoted to maintaining biological survival rather than cognition, and the neocortex is the region we need to understand to have strong AI (obviously I’m not a neuroscientist, but that is my understanding).

According to Kurzweil we have working computer models of dozens of brain regions. I do not remember them all, but he said we also have a working model of the brain’s auditory processing, which was being used by the military to detect where sniper fire was coming from.

http://www.technologyreview.com/Biotech/19767/

A working model of a rat neocortical column (with 10,000 neurons) was built a few years ago, and experiments on real tissue matched the behavior of the computer model. They said they picked that brain region because it is more complex than most, reasoning that if they could work that one out, the other regions would be tractable too.

That part also bothered me: they were confusing life expectancy with lifespan. I believe the natural human lifespan is about 70 years (i.e., the age you could expect to reach if you did not die from infectious disease, childbirth or trauma). Now, with endless trillions in healthcare, advanced nutrition and six generations of medicine, it has gone up to about 82 years. Our healthspan and life expectancy have increased dramatically, but lifespan has not.

A woman born in 1500 who avoided severe malnutrition, death by trauma and infectious disease would probably live to be 70 or so. The founding fathers mostly seem to have lived to be around 70 years old. Modern American men live to be 80. Not a huge jump for 200 years of progress.

Yes, but if the biological/cognition aspects still influence each other in any sort of feedback loop, you still have to SIMULATE the biological survival processes to some degree, even if they aren’t technically required for the artificial cognition part of the simulation.

If there is one thing we can say about our expanding knowledge of the brain, it’s that we keep finding out that it’s more complex than previously thought. This is that same mistake.

For example: Previously glial cells were not considered to be part of cognition. Recent research has shown that they are. There are 10x more glial cells than neurons.

Maybe. Maybe not. What part of the brain is responsible for our emotions? Our emotions provide the drive to achieve our goals (survival), can you have intelligence without emotions/goals? Not sure, but I wouldn’t discount any portion of the brain too quickly.

What do you think it means to be a “working model”? In this case, I’m guessing you think it means something more than what the research really accomplished.

When they do have a model that works just like the real thing (with neurotransmitters, etc.), then the next questions are:

  1. What have we learned? Do we understand how this thing produces thought?
  2. Can we scale it up to human level?

Don’t get me wrong, I enjoy reading about the research and progress; I just think Kurzweil’s claims that we are “close” are not justified.

People claiming that AI would be achieved “in ten years” have been saying so since the 1950s. They have shown numbers and curves and estimated and predicted and, to date, have been wrong. They have not been stupid people, either, they’ve been very talented. And wrong. I won’t go so far as to say that Kurzweil is wrong, too, but I’ll believe it when I see a box pass the Turing Test.

IMO it’s even worse.

Controlled fusion has been 20 to 50 years away for decades, but it’s mostly just an engineering/scale problem. AI appears to be going backwards in some respects by comparison.

I suspect that if I really understood this whole AI thing in great detail, I would be saying something like “every year we make great strides, but every year we learn how much further we have to go than we originally thought, and how much more we still need to learn that last year we didn’t even know we needed to learn.”

Here’s an interesting article that shows how important the chemicals in the brain are (can’t just look at neurons and synapses).

“A report in the November 25th issue of the journal Cell, a Cell Press publication, shows in studies of mice that the rise in acid levels in the brain upon breathing carbon dioxide triggers acid-sensing channels that evoke fear behavior.”

http://machineslikeus.com/news/brains-fear-center-equipped-built-suffocation-sensor

Note: the website machineslikeus.com has a number of new articles from research every day. It’s a pretty good site for finding this type of stuff.

Working model, to me, means that when you intentionally damage an animal’s cortical column and the computer model in the same fashion, you get the same results and consequences in behavior.

You guys suck. Before this thread I was a diehard singularitarian, fully expecting to be a cyborg with more advanced cognitive abilities and emotional experiences by the 2030s. Now I have my doubts.

Lol Wesley Clark, read the book again. Most of this thread’s critiques are actually made irrelevant by tech advances at the time of publishing. He also talks about how, for him, being egregiously wrong means being wrong by 20 years. His overall theory of the synergistic exponential advancement of technology is still sound even if some particulars of his prognostication are off.

Most of the criticisms here are still linear extrapolations. Read chapter 3 again; it’ll remind you of how we can predict future advancement by looking at proven theories that haven’t been industrialized yet, like spintronics, carbon nanofiber semiconduction, SIMD DNA computing and photonic computing.

RaftPeople, you are mistaking more powerful for faster. A semiconductor performs a single calculation faster than a neuron does, but the brain as a whole structure performs more calculations per second than a supercomputer. Your point about brain acid in respiration inducing a limbic fear response is sort of valid and sort of not. He goes into the emotional aspects of cognition. But a fear response that induces exhalation is not necessary for a machine that does not breathe. So we do not need to precisely model everything in toto; we can tactically eliminate orders of complexity by removing such processes once they are understood. The machine should model emotions, but emotions relevant to its own material condition.

His particular predictions may be wrong, but that doesn’t make the overall theory useless, because at a certain point in technological advancement we do cross a threshold to a higher state. This happened over and over in the twentieth century. People talk about the plateau of the space race or of car advancement, but those things were limited by the inability to perceive interim technologies. Kurzweil is probably subject to the same problem, but I think ultimately he is correct about a fundamental reordering of the human condition, which I’d say has already begun and is playing out.

Can you elaborate?

Nobody denies an increasing rate of advancement. Heck, that’s pretty obvious to most people, no book required.

The specific problem is his statement that the singularity is close. Most people (especially those that are actually in the field) don’t think that is a reasonable statement to make.

I don’t see criticisms based on linear extrapolations. I see criticisms regarding a prediction about AI that at its foundation is based on something unrelated to AI (computing power).

If Kurzweil had some approximation of all of the different things we will need to solve to achieve AI, and how many we have solved so far, then we would have something concrete to work with, but nobody knows what exactly we need to do to achieve AI. It’s a giant unknown.

You are talking to a person that knows full well the speed of both computer processors and the firing rate of neurons in the brain. I’m not making a mistake, you posted something that simply is not correct.

This is what you said:
“The problem with your view on this is that computer processors are ALREADY faster than the human mind.”

This is what I said:
“No, they are not.”

Regardless of any low-level comparisons, the bottom line is that the most powerful supercomputer on the planet is less powerful than 1 human brain.
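To put rough numbers on that, here is a back-of-envelope sketch using Kurzweil’s own published estimates from the book; those figures are themselves contested, so treat every constant below as an assumption rather than a measurement:

```python
# Back-of-envelope comparison using Kurzweil's own (contested) figures.
# Every constant here is an assumption, not a measurement.
neurons = 1e11                 # ~100 billion neurons
connections_per_neuron = 1e3   # ~1,000 synapses each (a low-end figure)
firings_per_second = 200       # max firing rate Kurzweil assumes

brain_cps = neurons * connections_per_neuron * firings_per_second
supercomputer_flops = 1e15     # roughly a circa-2010 petaflop-class machine

print(f"brain: ~{brain_cps:.0e} calc/sec")                      # ~2e16
print(f"ratio: ~{brain_cps / supercomputer_flops:.0f}x the supercomputer")
```

Whether “calculations per second” is even the right unit for what neurons do is exactly the kind of thing this thread is arguing about, but on these numbers the brain still comes out an order of magnitude ahead.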

The point I was making was that, if you are going to try to simulate a brain, you have to account for the chemicals in the brain because they are a key part of the function. We already know this regarding neurotransmitters, but this was an additional interesting fact: breathing CO2 caused neurons with acid-sensing ion channels to fire.

This assumes you are taking approach #1 (full brain copy). If you are taking approach #3 (alternate method of intelligence) then this is a non-problem, of course there are a few other little hurdles with approach #3.

We don’t know what can be removed from the model. We have zero clue. Maybe someday we will know, but it’s pointless to talk about it now when we have no idea.

Again, the point is that he is not justified in saying that the singularity is close; there is no information that can be used to say we are “close” to achieving AI.

Regarding Models of Brain:
Did you know that information is transmitted in our brains through gamma waves? And that different types of information are transmitted at different frequencies? And that neurons synchronize themselves to the different frequencies, switching back and forth multiple times per second?

Does the rat cortical column do anything like that?

Again, the progress they are making is great, but you need to fully understand what they have truly accomplished. When they say they made a model, they don’t mean that it’s a functioning portion of a brain. They mean they are modeling a subset of the things the cortical column does.

Hey, the Scientologists are still taking members… yeah, I know… but hey, it’s still something :slight_smile:

Good. You can respond to my critique of AI, then.
But you are right about Moore’s law extending past the time we run out of ways of shrinking IC geometries. I’ve seen something projecting it into the past, and it seems to hold decades or even centuries before the IC was even invented. No doubt we will jump on a new S-curve and keep going. But that is far from the same thing as predicting a singularity. The point, again, is that computing power does not equal intelligence, and the AI people are nowhere near actual AI.

If the 50% number of granted patents is correct, it must mean that a bunch of people are trying to patent water and fire. I have patents from before and after this policy, and I managed plenty of applications, and can say that the Patent Office isn’t even trying any more.
And in support of the point that a lot of patents aren’t earth-shattering, I have one on what is basically an Exclusive Or gate. :slight_smile:

GAs are one way of doing state space searching, and are similar to techniques like simulated annealing. In my particular area there have been plenty of papers on GAs (I’ve accepted some for journals), but none have panned out to be better than more traditional algorithmic techniques, which also have some heuristics baked in. In all the cases I know of, GAs may be useful when you don’t really understand how to solve the problem, only how to recognize a reasonably good solution; once you do understand it, they lose out to specific solutions.
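For anyone who hasn’t seen one, here is a minimal GA sketch on a toy “one-max” problem (evolve a bitstring of all 1s). It is purely illustrative and not any of the published work mentioned above; every parameter is an arbitrary choice:

```python
import random

# Toy genetic algorithm: we can only "recognize a good solution" (count 1 bits),
# and let selection, crossover and mutation do the searching.
POP_SIZE, GENOME_LEN, GENERATIONS = 50, 32, 100
MUTATION_RATE = 1.0 / GENOME_LEN

def fitness(genome):
    return sum(genome)  # number of 1 bits

def select(population):
    # Tournament selection: keep the better of two random individuals.
    a, b = random.sample(population, 2)
    return a if fitness(a) >= fitness(b) else b

def crossover(a, b):
    cut = random.randrange(1, GENOME_LEN)  # one-point crossover
    return a[:cut] + b[cut:]

def mutate(genome):
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    population = [mutate(crossover(select(population), select(population)))
                  for _ in range(POP_SIZE)]

best = max(population, key=fitness)
print(f"Best fitness: {fitness(best)}/{GENOME_LEN}")
```

Of course, for one-max a specific solution (“write down all 1s”) beats the GA instantly, which is basically the point being made above.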

Kurzweil’s answer to that criticism is that these things follow exponential trends.

http://blogs.current.com/maxandjason/2009/05/29/ray-kurzweil-the-future-of-clean-energy/

He cites as an example the work of the Human Genome Project. In 1990 scientists had managed to transcribe only one ten-thousandth of the genome over an entire year. Yet their goal was to sequence the entire genome in 15 years. After seven years, only 1 percent had been sequenced. But, in fact, the project was on track. The rate of progress was doubling every year, which meant that when researchers finished 1 percent they were only seven steps away from reaching 100 percent. Indeed, the project was completed in 2003. “People thought it would take centuries,” Kurzweil says, because they foolishly believed that technology could advance only in a linear fashion.
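The arithmetic behind “only seven steps away” is easy to check, taking the doubling-every-year rate as given:

```python
# If coverage doubles every year, starting from the 1% reached after
# year seven of the project, how many doublings until it passes 100%?
coverage = 0.01
doublings = 0
while coverage < 1.0:
    coverage *= 2
    doublings += 1
print(doublings)  # 7 -- i.e. 1% -> 2 -> 4 -> 8 -> 16 -> 32 -> 64 -> 128%
```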

We are now at the point where a genome can be sequenced in a day for about $5,000–$20,000, down from the point where it took billions of dollars and seven years to do 1 percent of a genome.

http://www.technologyreview.com/biomedicine/23891/
So the argument of people like Kurzweil is that brain scanning to comprehend neurology will follow the same trends.

I agree with your skepticism about Kurzweil in general, but have to nitpick this. Computing performance is still going up. Clock speed is stagnant, as the result of heat and power issues coming from leakage at nanometer process nodes. But bigger caches and multiple cores deliver more total computing power when applied to the right applications.
What people buy is dependent on what they need. Very few consumers require that their applications run faster, just that more of the hackwork is taken care of for them, and their pictures are prettier. At work, I use a thin client with significantly less computing power than my laptop. But if I want to do anything interesting, I have a 10,000-processor compute ranch at my disposal, and can easily rlogin to much faster machines for applications where I can’t parallelize things easily. How much compute power do I have?
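That “where I can’t parallelize things easily” caveat is the whole game: Amdahl’s law says the serial fraction of a job caps the speedup no matter how big the compute ranch is. A quick illustration (the 10,000-processor figure is just the one mentioned above):

```python
# Amdahl's law: speedup = 1 / (s + (1 - s) / n), where s is the fraction of
# the work that cannot be parallelized and n is the number of processors.
def speedup(serial_fraction, n_processors):
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_processors)

for s in (0.01, 0.05, 0.25):
    print(f"{s:.0%} serial -> {speedup(s, 10_000):.0f}x on 10,000 processors")
# Even 5% unavoidably serial work caps you at about 20x, not 10,000x.
```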

And the slow pipes won’t matter all that much when the data and computing power are all in the cloud, and the pipe is used only for commands and viewing results.

And, more on topic, our 10,000 fast cpus haven’t managed to become aware yet, despite representing a lot more computing power than existed on the planet when I took AI the first time.

I’ll just point out that decoding the genome and understanding the genome are two very different things. Hell, I can get a number of different representations of the design of one of our processors in minutes, but that is a far cry from understanding it working up from even the gate level.

I know all that. I know that faster computers are available. My point is that consumers are trending towards slower but more convenient machines. If that keeps up, there is going to be less financial pressure to continue developing faster computers (at least at the consumer end), and R&D will be hit and progress will slow somewhat.

This is the point I’m making. Not long ago, processor speed and graphics power were everything, because they were still bottlenecks in the way of consumers getting what they want. They now appear to be ‘good enough’, and people aren’t spending money on faster machines. That’s going to impact the pattern of development in the computer industry. Investment will start to flow away from research into more speed, and into other things, such as smartphones. In fact, this has already happened.

You have a lot. But the market for people with needs like yours is a lot smaller.

Then that will impact development of faster pipes, too. Instead, development money is moving more towards companies like Facebook and Google, who are not exactly advancing Moore’s law.

I agree. The biggest, fastest computers we have really only do the same things the smaller ones do, only bigger and faster. We don’t know how to make them ‘wake up’, or how to produce software that will scale in complexity with size and yet still be functional.

I’m reminded of the automaton crazes of the 18th and 19th centuries. People were building machines that were amazingly intricate: animatronic birds and other animals that could be remarkably lifelike. Some people thought that they were very close to creating life then. But the machines were nothing more than spring movements and intricate systems of gears and cams.

I suspect you know this, but for everyone else: raw speed ceased to be the issue a long time ago. Depending on the market, throughput and results on various transaction benchmarks count for a lot more. And CPUs are not the only process drivers - DSPs and GPUs count also. Memories are too low-margin to really drive anything any more, though they often are the first things through a new process. What is driving things more is that there is still a bit of competition, though mostly it is between Intel, IBM and TSMC, and that new process nodes can improve yields per wafer even if the speed is more or less constant.

Nothing new here. MIPS has stood for Meaningless Indication of Processor Speed for 20 years at least. When I was hanging around on comp.arch over ten years ago, a common thread was about the crisis in CPU design once processors were fast enough for anyone’s needs. Besides gaming, movies are about the only thing needing lots of processor power - besides Microsoft bloat, that is.
In any case, new fab technologies and things like smartphones aren’t in direct competition for money. Smartphones use embedded processors and lots of analog and mixed-signal stuff, which don’t benefit all that much from new process technology. The real problem is that new fabs are more and more expensive, and the number of companies able to afford them has shrunk, especially with TSMC being an easy place to migrate to. More transistors per die means fewer ASIC codes and higher mask costs per ASIC, which means fewer wafer starts, which makes it harder to pump through enough leading edge wafers in a new fab to pay for it.

Right, which is where the cloud comes in. Many companies will need that kind of power one month out of the year - renting it is much cheaper.

Google is an excellent example. It needs lots and lots of relatively low-intensity compute threads, and the new trend in processor design is exactly directed to this kind of market. Plus, it does lots of computation using big databases in a tightly coupled network, and we see only the results, which take up a tiny bit of bandwidth. They are riding Moore’s Law, not driving it, which is a result of their choice to build their compute ranch with lots of cheap processors. They don’t need CPUs with super single-thread performance and a hot FPU.

This is one of my big beefs.

There is (or at least I highly suspect there is) a real physical limit to how fine a level of detail you can scan the human brain at.

If you told me that you could do these things :

Scan the brain at the neuron or sub neuron scale.

Scan the whole thing in 3 dimensions (not just a 2D slice here or there).

Scan the whole thing in 3D continuously for long periods of time.

Also measure all sorts of parameters for EACH neuron, ganglion, connection, whatever: voltage, current, resistance, bias, hysteresis, presence/concentration of all sorts of compounds/substances, and probably many other parameters to boot. And do ALL THOSE measurements in 3D, continuously, for long periods of time.

Not only that, you gotta DO that for all your inputs as well.

If you could do all those things (and there is probably more need to know stuff that I’ve missed) then yeah you can probably eventually figure out how the human mind works.
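Just to put a number on how bad the data problem alone gets, here is a back-of-envelope sketch; every constant in it is a guess on my part, and a conservative one:

```python
# Rough estimate of the raw data rate for a whole-brain, per-synapse,
# continuous scan. Every constant below is an assumption for illustration.
neurons = 8.6e10             # ~86 billion neurons
synapses_per_neuron = 1e3    # often quoted closer to 1e4
samples_per_second = 1e3     # 1 kHz sampling per synapse
bytes_per_sample = 8         # several measured parameters packed per sample

rate = neurons * synapses_per_neuron * samples_per_second * bytes_per_sample
print(f"~{rate / 1e18:.1f} exabytes per second of raw data")  # ~0.7 EB/s
```

And that is before you store any of it, or measure any of the chemistry, or capture the inputs.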

You may NOT need that level to figure it out. But then again you MIGHT.

But honestly, I’m not sure that some of those things are even physically doable. My WAG is they are doable in the sense that transporters and warp drives are doable: not pure magic, but requiring so many advances in theory, ability, and engineering prowess that to talk about them in anything other than a “cool, wouldn’t that be neat” sort of way is just goofy.

And to predict that all this is both inevitable AND just about to happen is beyond beyond.