AI intelligence: rate of growth within a given generation of AI

Many of the articles I’ve read postulating the different potential outcomes of current AI research are quite insistently modest when it comes to summarizing the current level of AI capabilities. It seems agreed amongst most that nothing like the actual, ideal concept of Artificial Intelligence yet exists. It seems further agreed that the existing precursors of AI tech (fucking Siri, for example, or this kind of shit: http://www.manifestation.com/neurotoys/eliza.php3) fall far beneath our idealized conception of sentient Futurebots[tm].
But every optimistic piece I have read about the future of AI seems centered on one particular conceit: the idea of sudden, exponential, and quickly catastrophic increases in IQ from one generation to the next. I guess when I say ‘optimistic’, I mean from the point of view of the sentient machines.
Anyway, I know very little about the current trajectory of computer science, other than that it seems to have advanced at an alarming rate in my lifetime alone. Are these cautionary tales of suddenly skyrocketing AI intelligence (and therefore power) an at all probable future scenario? If so, how probable? Like, gimme an exact number. What, why not?

We don’t know yet whether truly sentient AI is possible. So how could we predict anything, let alone exact numbers?
We don’t even know whether, if it is possible, such AIs would be profoundly more intelligent than us in abstract thought, let alone lead to a technological singularity making humankind their bitches.

This is my humble opinion:
Our brains solve tons of difficult problems and then bring them all together to solve an even tougher one (consciousness). There is a lot of functionality in our brains, and it would be difficult to suddenly duplicate all of it.

I think AI will progress steadily by solving the underlying problems, just like our evolution did, and continue to build on that for a long time. Eventually they will have the raw materials to start working on consciousness.

I was kind of kidding about the exact math. But many experts in the field seem to be of the firm belief that this kind of AI is not only possible but inevitable. I was hoping to find some folks on here who know more about this than I do. But thanks for your input. The notion of AI exceeding us in its capabilities is usually part of the narrative in the articles I’m referring to. Focusing on the possibility of them surpassing us in abstract thought is particularly tantalizing.
Mind you, I am not asking for the odds on a doomsday scenario involving AI. I believe all doomsday scenarios are on the table and it’s just a crapshoot as to which one we get; and I am not interested in debating the likelihood of one versus another. I just want opinions/research/conjecture on how likely it is that computer intelligence will one day start exploding at what is generally agreed to be a dramatic rate.

According to the believers it has been about fifty years away for, oohh, about sixty years now.

Raftpeople: Usually in the exploding-intelligence-gains AI narrative, it is implied that the AI begins to perfect its own awareness early on, facilitating the ‘explosive’ rate of advancement. You seem (maybe) to be of the opinion that the process will not proceed in such an escalating fashion?

I think it’s possible we expect too much out of AI. Even among researchers, we sort of expect a genius machine that can talk to us out of the box. Sure, maybe we’ll give it a few weeks to train, but is anybody really going to give an AI the training you give a baby?

I mean, let’s take Siri. Siri is probably the best candidate for “with you about as much as a baby would be”. But if it was exactly as smart as a person, you’d have to wait a few years after buying your phone before it could say its first coherent sentence. And if you need help with your homework, well, I hope you enrolled your cell phone in high school.

Or take machines that you want to learn to play, say, a video game. While it may seem like a self-contained problem, playing StarCraft or chess relies on all sorts of subtle skills you’ve been learning since birth, especially the training itself, which involves interpersonal communication!

My point is, there’s a bit of unattainability in “true AI” because our interests don’t really align with that. Nobody wants to play a video game where the enemies have to spend 20 hours learning just to be competent. Why manufacture smart cars if you have to spend as much time training them as you do a hired limo driver? Why not just cut out the middle man at that point?

I don’t think sudden, rapid growth is unattainable. In fact, it very closely mirrors the way values converge in programs once they near an optimal state. If you’ve ever worked with reinforcement learning, you’ve seen computers do extremely baffling things over and over until suddenly they’re better than a person. But AI is composed of so many problems that for that to happen, I think it would take a lot more pure time and energy investment than anyone is really going to commit. Certainly it would be hard to get a grant for any sort of research that goes “I’m going to raise this robot like a child for the next 18 years and get back to you…”
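To make that concrete, here’s a toy sketch of the kind of learning curve I mean: tabular Q-learning on a tiny chain world I made up just for illustration (the environment, parameters, everything here is invented, not taken from any real project). The agent mostly blunders around at first, then the value estimates propagate back from the goal and it becomes close to perfect:

```python
# Toy illustration of the "baffling for a while, then suddenly competent" curve.
# Everything here (environment, parameters) is made up for the sketch.
import random

N_STATES   = 8        # agent starts at state 0; reward only at the last state
STEP_LIMIT = 20       # episode ends after this many moves
EPISODES   = 2000
ALPHA, GAMMA, EPSILON = 0.2, 0.95, 0.1

# Q[state][action]: estimated value of moving left (action 0) or right (action 1)
Q = [[0.0, 0.0] for _ in range(N_STATES)]

def choose_action(state):
    # Explore occasionally; otherwise act greedily (random tie-break)
    if random.random() < EPSILON or Q[state][0] == Q[state][1]:
        return random.randrange(2)
    return 0 if Q[state][0] > Q[state][1] else 1

def run_episode():
    state = 0
    for _ in range(STEP_LIMIT):
        action = choose_action(state)
        next_state = max(0, min(N_STATES - 1, state + (1 if action else -1)))
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # Standard Q-learning update, bootstrapping off the next state's best value
        Q[state][action] += ALPHA * (reward + GAMMA * max(Q[next_state]) - Q[state][action])
        state = next_state
        if reward:
            return 1        # reached the goal
    return 0                # ran out of moves

for start in range(0, EPISODES, 200):
    wins = sum(run_episode() for _ in range(200))
    print(f"episodes {start:4d}-{start + 199}: reached the goal {wins}/200 times")
```

The exact numbers vary from run to run, but the shape is the point: a stretch of mostly aimless wandering, then a quick jump to near-perfect play once the reward signal works its way back through the value table.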

Well, that’s sort of the problem. It’s (usually) a cautionary narrative and can’t be based on facts since the technology is only hypothetical.

It’s like trying to predict when mankind will create self-replicating nanobots, and when those will then turn everything into some “gray goo” epidemic.

And does anyone believe that Extended Mind Theory (your mind extends outside of you; your laptop or smartphone becomes part of it at certain times; the analogy of a man with Alzheimer’s and his notebook; all of this is inelegantly expressed, so read about it for yourself: The Extended Mind) has some bearing on the issue? I mean, if we redefine the boundaries of the brain to include its periodic union with various electronic devices, how different is that from AI?

I want to be clear that researchers don’t expect things to “just happen”. They put tons of work and effort into the creation of these things. But there’s certainly a bit of a conceit that after 6 years of research your bot shouldn’t still have 5 years of bootstrap time after it’s turned on before it’s officially not a moron.

Thank you for your comment, a lot of fascinating points. Since I am not a practicing scientist (read: I don’t get to beg for grants), I did not think of the funding problem immediately. Of COURSE no one wants to fund 18 years of raising Noam ComputerChipowski, despite how amazingly instructive that would be. Jeezus, we are so short-sighted. I am very curious: what kinds of experiments ARE getting the most funding in AI research right now?

True, but those beliefs tend not to be based on anything concrete but more on faith. This post by a Popular Science writer summarizes that notion pretty well.

Also (and maybe I should start a different thread for this), this article presumes that we would all agree that the single most important difference among the three hypothetical cognition scenarios it describes is the one between the second and third. Based on this straw man, it goes on to make the case that those two are in fact essentially the same, “proving” that all three are as well. But in my reading, it is example number one that is fundamentally different, because in two and three the cognitive demands are being shared across a technological system in a way they are not in the first.

Of course AI development is characterized by exponential growth. All technology is. We even have a pretty good idea of the growth rate. The difficulty is that we don’t know where the finish line is. We can do a pretty good job predicting how much processing power, memory, etc. computers will have at any point in the reasonable future… but we don’t know how much of that is necessary for “true AI”.
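Just to show how lopsided that is, here’s a back-of-the-envelope sketch of the projection side. The doubling period, the starting figure, and the three “how much is enough” guesses are all assumptions I’ve pulled out of the air for illustration:

```python
# Back-of-the-envelope projection: raw compute is easy to extrapolate if you
# assume a steady doubling period; the unknown is how much is "enough".
# All of these numbers are assumptions for illustration only.
DOUBLING_YEARS = 2.0      # assumed Moore's-law-style doubling period
CURRENT_FLOPS  = 1e15     # assumed present-day starting point (order of magnitude)

def projected_flops(years_from_now):
    return CURRENT_FLOPS * 2 ** (years_from_now / DOUBLING_YEARS)

# Three made-up guesses at the "finish line" for true AI
for label, threshold in [("low guess", 1e18), ("mid guess", 1e21), ("high guess", 1e25)]:
    years = 0
    while projected_flops(years) < threshold:
        years += 1
    print(f"{label}: {threshold:.0e} FLOPS reached in roughly {years} years")
```

Depending on which invented threshold you pick, “enough compute” lands anywhere from a couple of decades to most of a century out, which is exactly the problem: the growth curve is the easy part, the finish line is the guess.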

Thank you! This is the sort of input I’m looking for. *continues reading*

Ray Kurzweil is the one promoting the idea of an AI “singularity” in about 15 years, when artificial intelligence trumps human intelligence; it’s easy to find lots of material online about him and the idea of a technological singularity.

I would add that when people use the term “artificial intelligence” what they generally mean is “artificial human-like intelligence” and it’s actually more likely that computer based intelligence will achieve similar results in a different way.

For example, none of Google’s algorithms “understand” language in the way humans do, but when I use the speech input to Google Now on my phone, it’s still able to understand what I’m saying and deliver relevant results.

It’s most likely that “human-like” artificial intelligence will only be an interface to something that functions very differently from human brains.

I definitely do not think AI will begin to perfect its own awareness early on. Eventually, maybe, but not early on.

It seems to me that unless you are a mind-body dualist, you must concede that a general AI is at least possible in theory, since we are general intelligences and we exist. (If you believe in a soul, then you can believe that humans can’t build a machine that has one. I don’t believe this, but I throw it out as a possibility.)

That said, an AI would be a software implementation. It would probably need to have experiences the way a person does growing up, and might take as long to mature. I also see no reason to believe that the intelligence would be able to modify itself, since it probably wouldn’t have access to that level of its own functioning. After all, you can’t will your neurons to fire, or your endocrine system to produce certain hormones; what we perceive as consciousness runs at a totally different level from all that stuff. It is possible that an AI might be designed that was a better AI researcher than any human, and would make progressively better AIs, but I also think it’s possible that they wouldn’t be any better than people at that, or at anything else for that matter.

I can imagine some sort of strong AI that can tweak a copy of its code, and crunch iterations and forge its own evolution so much faster than humankind that it can continually bootstrap itself up at an exponential rate.
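As a pure cartoon of that compounding argument (invented numbers, not a model of anything real), compare a fixed amount of outside research effort per cycle with gains that are proportional to the system’s current capability:

```python
# Cartoon comparison: constant outside effort vs. self-improvement that
# compounds on current capability. The numbers are invented; only the
# qualitative shape matters.
human_effort = 0.5        # assumed fixed research output added per cycle
fixed = 1.0               # capability when humans do all the improving
self_improving = 1.0      # capability when each generation improves the next

for gen in range(1, 31):
    fixed += human_effort                      # linear: same gain every cycle
    self_improving += 0.25 * self_improving    # gain proportional to current capability
    if gen % 5 == 0:
        print(f"gen {gen:2d}: human-driven {fixed:6.1f}   self-improving {self_improving:10.1f}")
```

After thirty cycles the fixed-effort line has crept up by a factor of about sixteen, while the compounding one has blown up by a factor of several hundred, which is the whole intuition behind the “bootstrap” scenario.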

We know consciousness can run on meat that just sort of built itself (over billions of years of painfully slow mutations and natural selection) from the primordial soup—so why not a form of consciousness that runs on digital circuits, emerging from some primordial code in a very similar way?

The thing about AI, is that while the ghost might be in the software, it would be able to tweak both its genotype (software) and phenotype (hardware) at speeds wholly alien and “supernatural” compared to biology.

That said, I personally believe mankind will probably achieve strong AI within the next 100 to 200 years. Then the AI itself (themselves?) will take it from there.

I think that Google buying a bunch of robotics companies is the game changer. What will take time is getting all those divisions working together.