The problem I have with the singularity

As I understand it, the basic idea of the singularity is that once a greater-than-human-intelligence AI is created, it will inevitably improve its own capabilities exponentially, and before very long, superintelligent AIs will be in charge of the planet, with humans relegated to pampered pets at best or exterminated at worst.

Now, maybe I’m missing something here, but it’s the “inevitable exponential self-improvement” bit that I have a problem with. Just why is it assumed that this AI will automatically have the knowledge and capabilities to reprogram and improve itself? The smartest humans who have ever existed have not had the understanding and knowledge needed to improve their own intelligence. Why is it assumed an AI will?

An AI with capabilities that we would call “sentient” is likely to be some kind of evolved system using neural networks or genetic algorithms (or both). It grows organically, and even the programmers don’t understand how it works; it just works, after many generations of iteration and selection.
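To make “iteration and selection” concrete, here’s a minimal sketch of the kind of loop I mean (the genome, the toy fitness function and all the parameters are invented purely for illustration). The point is that the programmer only writes the loop; whatever behaviour falls out of it is never spelled out anywhere in the code.

```python
import random

# Toy iteration-and-selection loop (illustrative only). The "genome" is just a
# list of numbers and the fitness function is a stand-in; an evolved AI would
# involve something vastly richer, but the shape of the process is the same.

GENOME_LEN = 16
POP_SIZE = 50
GENERATIONS = 200

def fitness(genome):
    # Hypothetical score of how well this candidate "behaves".
    return -sum((g - 0.5) ** 2 for g in genome)

def mutate(genome, rate=0.1):
    # Randomly nudge a few genes.
    return [g + random.gauss(0, 0.05) if random.random() < rate else g
            for g in genome]

population = [[random.random() for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for generation in range(GENERATIONS):
    # Selection: keep the better-scoring half of the population...
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP_SIZE // 2]
    # ...then refill it with mutated copies of the survivors.
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(POP_SIZE - len(survivors))]

print("best fitness after evolution:", fitness(population[0]))
```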

So let’s say that this AI spends its life studying itself and trying to improve its own performance. Even if it can, it’s going to run into physical constraints: it might be able to double its own processing speed, speeding up its “thinking” ability. Great, but then it hits a wall such as the speed of light, or the smallest a circuit can be before quantum tunnelling effects take over.

Lastly, to hugely improve itself, it needs the cooperation of an entire multi-billion-dollar industry to create the new circuitry or optical computer or whatever else it runs on. Unless a huge number of people and a great deal of money are devoted to helping it improve itself, it’s not going to happen.

In short, I just don’t see the singularity as realistic, a big deal, or a threat to humanity.

Have I missed something obvious?

Well, we can’t actually redesign our own brains/minds; an AI presumably could. It’s artificial and redesignable, and it knows how it was made and, for the most part, how it works. And it could experiment with a lot of prototypical redesigns to see what works, probably without working any irreversible change in itself while doing so.

There are a lot of assumptions there. Let’s say it wants to clone itself to experiment on a copy of itself. Well, that might involve tens of millions of dollars of hardware to create an exact copy of itself, so it has to obtain that. Then the cloned self might not WANT to be experimented on and might resist.

Singularity proponents seem to assume that the AI has unlimited physical resources to exponentially improve itself, that humans will actively help it do that, and that physical constraints on hardware and software don’t exist.

You’re right that there are assumptions involved, and it is easy to imagine scenarios that don’t work out that way at all.

(The easiest is the collapse of civilization, such that AI is never discovered at all!)

If one presumes that human technological progress is unbounded, and that we will eventually have fusion power, quantum computing, nanotech industries, superconducting power transfer, etc., then it is not unreasonable to presume that an AI could have all of those plus the ability to modify itself to accelerate the process.

As an interim stage, humans will probably engage in self-modification also, enhancing our own intelligence, and thus accelerating our own progress. But then, that, too, leads to a “singularity.” The concept does not absolutely depend on AI.

I’m skeptical we’ll crack this nut anytime soon, but…

Human intelligence evolved over a span of millions of years, or even tens of millions if we count the chain of primate brain development. We can go back hundreds of millions if we count mammalian brain development.

If we have an evolutionary process that can do the same thing from scratch, and if it can be developed within this millennium, it will have succeeded orders of magnitude more quickly. This is true even if we don’t understand how it worked. If we can develop a human-level intelligence on human time frames, rather than the geological time frames it took to develop us, then the same evolutionary development process will very quickly be able to spit out intelligences that are much smarter than the smartest human. The human brain is not an arbitrary upper limit on how far we can go. We can keep pushing the limit. And we absolutely would. Again. And again. And again. Assuming the evolutionary process works (and that’s a big if), it will continue to work well past human intelligence. Very quickly, we will have a version that is much smarter than any human who ever existed. Maybe even a hundred times smarter. Or a thousand. We might not understand how it works, and it might not understand how it works either. But it would be better equipped to understand how it worked than we are.

We don’t know the physical constraints. We don’t know how far we can take this.

We know that a human-sized brain can do what human-sized brains do.

Assuming it works, then we know it works. It will be able to do better. It will be smarter than us. If it reaches any limits of its original architecture which we created, it will be the being best positioned to develop a new architecture that can push to even greater limits.

Any mechanical intelligence that is smarter than the smartest humans on the planet will be the single most valuable thing on the planet. Even if it doesn’t understand itself, it will understand any human-level problem much better than we do. It will never forget information. Its skills will never atrophy. Its inferences will be faster, and go deeper. Renting a few hours of its time will be hideously expensive, and there will be every incentive in the world to make copies of the best version currently available. This would be stupidly easy to do. Creating a new human intelligence takes 20 years, and the actual level of intelligence of a child is a genetic crapshoot. Copying the best version of the smartest mechanical intelligence on the planet into a new platform would take six months at the very outside, not 20 years.

And even as we copy the best versions available, we would still be pushing the limits of evolutionary adaptation.

Complexity arises from simplicity.

Human beings tend to look at a complex process and assume that the complexity is foundational. Fundamental. But that’s not how complexity works. Everything complicated in the world is built up from simpler principles. Imagine showing a visualization of the Mandelbrot set to someone from two centuries ago and asking them to imagine how it was created. If you showed them the actual mathematical process, they would be astounded by its simplicity. Hell, people are still astounded today. Complexity arises from simplicity, but no matter how many times we see that demonstrated, we still don’t quite believe it.
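For anyone who hasn’t seen just how little machinery it takes, here’s a rough sketch (the iteration cap, the plot window and the ASCII rendering are arbitrary choices, just enough to show the rule at work):

```python
# The entire "recipe" for the Mandelbrot set: repeat z = z*z + c and check
# whether z escapes to infinity. That's it.

def in_mandelbrot(c, max_iter=100):
    z = 0
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:      # escaped: c is outside the set
            return False
    return True             # still bounded after max_iter steps

# Crude ASCII rendering over an arbitrary window of the complex plane.
for row in range(24):
    line = ""
    for col in range(78):
        c = complex(-2.2 + 3.0 * col / 78, -1.2 + 2.4 * row / 24)
        line += "*" if in_mandelbrot(c) else " "
    print(line)
```

All of the famous infinite filigree comes out of those few lines; none of it is written down anywhere in the code.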

We look at human-level intelligence, and we’re perplexed in the same Mandelbrot sort of way. But there is some mathematical pattern to what we do. Of course, we don’t know if we have the tools to start the evolutionary process you talk about. Our current mechanical hardware might not be the right choice to figure out this pattern. All of this is still built on a very big if. But supposing we can create a mechanical intelligence that can be improved evolutionarily… eventually we would get a version that would figure itself out.

It would not figure itself out by studying itself directly. Too complex. Too much to interpret all at once. It would figure itself out by studying the evolutionary adaptations of its ancestors. Piece by piece. It would learn the patterns of the previous versions. It would never forget. It would never lose its skills. It would build a theory of general intelligence slowly and methodically by examining how the very first general intelligences were created.

Once it had figured itself out, then shit would really get crazy.

This is full of the usual caveats I’ve sprinkled throughout this post. Every discussion about AI is built on the first big if. This isn’t an exception. We know intelligence is possible because there are seven billion examples walking around. But we don’t know the engineering hurdles necessary to duplicate the biological feat. We might not get there soon, if ever. But if we can create one, then we can create more. And even if it doesn’t understand itself at first, it won’t take long to create one that will.

I think your summary just contains way too many unspoken ifs. “If” we can create intelligence at all, “if” it’s evolutionary, “if” it can learn to understand itself.

I think there’s a huge unspoken assumption missing: first, “we” (that is, ordinary humans of human intelligence) have to create an AI that is not only more intelligent than us, but intelligent enough to understand itself and know how to improve itself. That’s the crux of the bootstrapping; without it, the whole thing never reaches the exponential self-improvement stage.

I do think we will eventually create AIs of human or slightly-above-human intelligence. I just don’t think that will create a runaway singularity.

I dislike the word “singularity” as a description of this phenomenon, as it seems to borrow from physics to imply a sudden transition beyond a specific threshold. The transition is neither sudden nor is there a specific threshold of intelligence or functionality at which we lose control. Indeed, I would argue in many ways it’s already happening.

We haven’t? We’re the ones who created AI – which already has capabilities far superior to our own, albeit in very specific domains.

That’s the fallacy of “anything truly intelligent would have to look and function exactly like us”. It’s just not true. The insidious thing is that if advanced AI looks nothing at all like what we expect, we may not even recognize it for what it is.

That’s just nonsense. It assumes that technology evolves linearly without fundamental breakthroughs. It’s like predicting in the 1940s that computers will never be really powerful because such a computer would require so many vacuum tubes that they would start burning out as soon as you turned it on, and the heat would be unmanageable.

This is fundamentally misunderstanding the problem. We’re already doing it. It used to be said that we need not fear computers because, worst case, you can just pull the plug on the stupid thing. Can you? What would happen if your bank’s central computers went down and stayed down, and lost all their records? Your wealth, the wealth of corporations and the wealth of nations is now just bits in computer memories and storage devices. Our present civilization cannot function without computers and the networks that connect them – rather than reassuring ourselves that we can “pull the plug”, we build complex backup power systems and redundant data centers because it’s a matter of survival. Think about that the next time some customer service rep tells you “I’d like to do that for you, sir, but the computer won’t let me.” Try telling them to unplug it. The Internet, massive shared databases and data mining are already creating unforeseen privacy concerns and there’s not a lot that anyone can do about it. If you need a job, a loan, a personal bond or a security clearance, computers will not only be involved in the decision but may be the only decision makers. The emergence of stronger AI is not some mysterious “singularity”, it’s just a logical extension of what is already happening, though the rate at which it proceeds may become exponential.

I don’t accept this. The old quote “If the human mind were simple enough for us to understand, we would be so simple we couldn’t” seems apt.

The thing is that you are trying to build an intelligence that is intelligent enough to understand itself. That’s like trying to build a model of the universe that includes the model. Or more basically, it’s like trying to open a box with the key that is inside. It’s not immediately obvious that this is even possible.

Your idea that the machine could understand itself by looking at the evolution of its ancestors seems similarly dubious.

Humans are a lot more advanced than our ancestors. We have a very good handle on how intelligence increases from bacteria to jellyfish to worms to fish. But that knowledge has added surprisingly little to our understanding of our own intelligence. That is largely because, once we progress past fish, the intelligence becomes so similar to our own that we need to simplify it in order to process it. The human mind simply cannot hold a working model of the mind of a cat.

This is like trying to run a virtual machine. I can happily run a virtual Windows 95 machine on my Windows 7 machine. My machine struggles to run a virtual Windows XP machine, but it can do it. But ask it to run a virtual Windows Vista machine and it simply cannot do it. The closer the studied intelligence gets to the capacity of the studier, the less likely it is that the studier will be able to model it. And if it can’t model it, it can’t understand it.

Whether this intelligence ever forgets is largely irrelevant to this degree of self-awareness, because it’s not about simply storing data. It’s about being able to build a working model of the interaction of that data. At some point the ability to model the interactions of a neural net or similar will take more processing power than the modeller is capable of. And that point is a long, long way below the functioning level of the modeller. So while a single human can understand the neural workings of a flatworm, we have no hope of ever being able to understand the workings of a cat. There is just too much information flowing around for our brain to be able to process it. Having a machine that never forgets won’t overcome that problem because, as you note, a lot of this intelligence is emergent, and no amount of brute-force replication will lead to understanding in the sense of allowing it to be mimicked…

And that emergence is all tied up in the problem. If intelligence is emergent, then self-awareness is almost certainly a function of normal awareness: empathy, perception and so forth. An entity at level N of general awareness can only accurately model awareness up to some level N-x below its own. An entity cannot develop an accurate mental model of itself, or of another entity that approaches itself in complexity. As its level of awareness increases, it can assess more complex intelligences, but its own intelligence also increases, so it still cannot assess itself.

It is far from certain that any intelligence will ever be able to have more than a rudimentary understanding of itself.

Thank you, Blake. I think you expressed some of the things I was trying to say better than I could have.

Jack Good stated one of the earliest versions of the Singularity; he called it the Intelligence Explosion. His idea was that an AI that was equivalent in ability to a human would nevertheless be much faster, because of the electronic nature of its mind. If humans are at all capable of creating an AI that is more capable than a human, then (according to Good) the human-equivalent AI would be able to do this too, but do it faster. Result- intelligence explosion.
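Good’s argument is really a piece of arithmetic. If each generation of machine is some fixed factor faster than the last at doing the design work, the redesign cycles shrink geometrically, and the whole cascade happens in a bounded stretch of time. The numbers below are invented purely to show the shape of the argument, not a prediction:

```python
# Toy model of Good's intelligence explosion (made-up numbers, illustration only).
# Assume the first human-equivalent AI needs 10 years to design its successor,
# and each successor does design work twice as fast as its predecessor.

first_cycle_years = 10.0
speedup_per_generation = 2.0

elapsed = 0.0
cycle = first_cycle_years
for generation in range(1, 11):
    elapsed += cycle
    print(f"generation {generation:2d} arrives after {elapsed:6.2f} years")
    cycle /= speedup_per_generation   # the next redesign takes half as long

# The cycle times form a geometric series, so the total time converges
# (here toward 20 years) no matter how many generations you run.
print("limiting total time:", first_cycle_years / (1 - 1 / speedup_per_generation))
```

Of course, the whole thing turns on that assumed speedup factor actually being achievable, which is exactly what the skeptics in this thread are questioning.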

One slight problem is that the creation of a human-equivalent AI which is faster than a human is already a step towards an intelligence explosion; the first human equivalent AIs will probably be much slower than humans, as every processing step will need to be painstakingly modelled on a non-biological device. To get to the stage of AIs that are faster than humans will take a long time, and we can’t rely on the machine’s help to get to that stage.

Perhaps the most important advance that an ultraintelligent machine can make is in the region of self-understanding. Humans may not be able to model their own internal mental state, but they can do it much better than a cat, and we might expect an ultraintelligent machine to be even better at it. Where humans are only vaguely self-aware, an ultraintelligent machine could improve its self-awareness to an arbitrary degree, assuming that such a trait were desirable.

Perhaps the result would be a species of self-obsessed navel-gazing machines, which have little value in the real world. But I doubt that such navel-gazers will be the only result of Good’s intelligence explosion.

That seems like a total non-sequitur. Why not assume that an AI that was equivalent in ability to a human would nevertheless be much slower because of the electronic nature of its mind? Or that an AI that was equivalent in ability to a human would be precisely as fast as a human, because of the electronic nature of its mind?

Electrons in electronics move fast; the action potential in neurons moves fast. Trying to extend that to ascertain the speed of a processor built on neurons or electronics seems like a massive Fallacy of Composition. Just because one component is fast doesn’t mean that the system must be fast. The system will be precisely as fast as the slowest component, whether that be the synapse in a neuronal system or the flux capacitor in an electronic system.
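A crude way to see the composition point, with stage timings I’ve simply made up: speed up the already-fast component by a factor of a million and the end-to-end figure barely moves, because the system runs at the pace of its slowest serial step, not its fastest part.

```python
# Made-up latencies (seconds) for one "thought" passing through a serial
# chain of processing steps. The numbers are purely illustrative.
stages = {
    "signal propagation": 1e-9,   # blindingly fast component
    "local computation":  1e-6,
    "integration step":   5e-3,   # the slow, bottleneck step
}

print(f"end-to-end time: {sum(stages.values()):.6f} s")

# Make the already-fast component a million times faster...
stages["signal propagation"] /= 1e6
print(f"after the million-fold speedup: {sum(stages.values()):.6f} s")
# ...and the total is essentially unchanged.
```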

No, you are not. That’s precisely the misunderstanding. The systems we are talking about here have two key properties. One is that they are emergent. A system like the Internet is not something that, in the aggregate, was explicitly “designed” in the sense that everyone knew exactly what it would look like in today’s form and everything that it would do, for better or worse, or that it was or is in any sense “understood”. It just is – and its effects on commerce, social networking, learning and information sharing, and its impacts on privacy, and our intrinsic dependence on it – those things were never planned, modeled, or predicted. They just came about as emergent properties of what we had built and then evolved in response to how it behaved and what it did for us. As has our entire computerized society, which now runs our lives whether we like it or not.

The second key property is that the AI systems we’re talking about are systems that will be able to incrementally improve themselves, through various processes like accumulating more information, adjusting their behaviors, and other classical aspects of what is generically called “learning”. This is completely different from the paradox of self-understanding, and in that sense I don’t really agree with Hellestal’s statement either, but it’s irrelevant to the potential risks in question here. You seem to be arguing that true AI somehow requires us to “understand” and fully model the functional mechanisms of the human brain. As I pointed out before, this is nonsense. We may or may not still fully understand how birds fly, but I assure you that the engineers at Boeing don’t care. We are, I believe, on the cusp of discovering that meat-based intelligence is spectacularly inefficient compared to silicon.

That is first of all a terrible analogy. If you can’t run a newer OS under a VM on an older one, it’s only because of trivial limitations like memory. There is absolutely no intrinsic reason that you can’t run Win 7 in a VM under XP. But the more important point is that, if I’m understanding your argument correctly, you are once again invoking the argument that it’s impossible for us to build a machine equal to our own intelligence, let alone greater. The emergent nature of intelligence makes this absolutely false. The fact that we’ve already done it in very specific domains (which many claimed was impossible; witness Hubert Dreyfus’s arguments against the possibility of really good chess-playing computers back in the 60s) is a good indication that it ought to be possible across a broad spectrum of domains.

That “mimicking” thing again. Again, ask the aeronautical engineers at Boeing whether they care about the air-speed velocity of an unladen swallow! :smiley:

Exactly right. The engineers who design computer disk drives don’t know much about the details of the processors. The engineers who design the processors don’t know much about the power supplies or monitors. The ones who design memory chips have their own specialized expertise. And my kid who puts it all together in the basement gets a working computer without understanding the details of any of it. A computer that, indeed, no single person in the world fully understands all the details of. And then, that computer connects to the Internet …

The differences between a chip in a calculator that can add 2+2 and the kind of computer that looks at your net worth, pulls your credit record, computes your credit score, assesses your risk, turns down your bank loan, and then maybe later beats you at a game of chess are only differences of degree and complexity.

Uh huh. So when you are trying to build a version of a machine that would figure itself out, you are not trying to build a version of a machine intelligent enough to understand itself.

Please explain to us how this is possible, because until you can do that the rest of your post is gibberish.

You should actually read the post before commenting on it. I never said it’s possible, I said I disagree with that part of Hellestal’s statement – the whole idea that AI is somehow premised on a machine being “able to understand itself” (whatever that even means!) is fundamentally wrong. Yet this is the entire crux of your argument.

Phrased like that it’s something of a non-sequitur, but I think I see what eburacum45 was getting at.
For example, we’d presumably be able to link artificial minds in some way. Not necessarily a collective mind, just any kind of link that would be more efficient than speaking or writing. While it doesn’t necessarily follow that such a link must be possible, it seems a very safe bet. And right away the artificial minds are doing something human minds cannot do.


A general note for this thread: human progress had already accelerated a great deal before we invented computers. How? Through, for example, the invention of the printing press and the scientific method.
This shows it’s not just a matter of raw smarts; anything that can improve the generation, accumulation, storage, and distribution of knowledge can boost progress.
So my personal bet is that artificial intelligence will be a game-changer very quickly. Pretty soon we’d be able to achieve more in years, or even months, than in all of human history up to that point, just because of all the efficiencies in processing knowledge and ideas.

As for what happens after that, whether progress gets still faster, remains the same speed, or even stops dead (because it turns out that there’s not that much to know, and not much that’s physically possible), I can’t say. And nor can anyone else.

Predicting exponential anything is a fool’s game. Certainly jazzes it up though.

The process of iterative self-improvement already started when people discovered simple machines to improve their own strength (and then built friggin’ pyramids and skyscrapers), or discovered that you can free your mind up with this fancy invention called writing. Computers blow humans away in many tasks and are getting better all the time. We use them to help us figure out things we never could on our own, in engineering and design, drug research, modeling, you name it. Like the man said, computers are a bicycle for the mind.

Improving on our mental performance shouldn’t be too hard. The average person can only keep about seven numbers in working memory. A pocket calculator is better than that, and it’s as dumb as a brick. I wonder which will happen first: genetic engineering, plugging in the cybernetic equivalent of a RAM upgrade, or a vaguely human-ish AI that jumps ahead due to sheer computing speed.

Help me out here - if we build a machine that is only just as intelligent and knowledgeable as its builders, does it not follow, from the fact that we know how to build said machine, that the machine also knows how it is built? Does that not mean it does understand itself?

This is putting aside any notions of “emergence” which I believe to be the wrong way to build an AI.

As the rest of my post indicates, I agree with this sentiment. The first human-level AIs we make will be very slow, and we might want to slow them down even further in order to analyse their functions at leisure.

As MrDibble posits, if we can make an AI as intelligent as ourselves, then that AI can understand itself well enough to build a copy of itself. It’s not a given that this is possible, and the idea that you can kind of challenges Gödel’s incompleteness theorems.

It’s possible that the workings of intelligent creatures can’t be described by a self-consistent mathematical system, so it wouldn’t fall under the incompleteness theorems. I can see that being possible. But an AI that can do that would probably be based on something very different from our current computing at its core. Like I’ve said before, we might have to send our strong AI to school for 18 years, or it might take over the world before dinner on the day it’s invented. We don’t really know which is more likely at this point.