Is the concept of a technological singularity valid?

Wikipedia article: http://en.wikipedia.org/wiki/Technological_singularity

I find the methods used to prove the trend towards such a singularity flimsy at best; the “list of innovations” is heavily skewed towards the modern era, simply because more recent minor inventions stick in the public mind (there’s no universal standard for the importance of inventions). I also have a feeling that many of the converging factors, such as processing speed, will plateau right around the point at which they are supposed to show a singularity.

Thoughts?

Faith in the inevitability of a Technological Singularity strikes me as a religious belief. It’s an inverted creation myth: instead of postulating a supernatural being who created us, it predicts we will create a unique godlike Artificial Intelligence which will take over the world…

So far, AI research has produced some isolated achievements but little in the way of technological breakthroughs. The faulty assumption made by Singularity enthusiasts is that acceleration of hardware capabilities will precipitate similarly rapid developments in software. To the contrary, it has made it practical to produce ever more bloated and buggy systems that have become increasingly unwieldy and mutually incompatible.

The last major commercial innovations in software technology happened 25 years ago (e.g. C++ compilers, graphical windowing interfaces, relational databases). Since then, the software industry has grown much larger, but once a critical mass of installed software is reached, it becomes much more difficult to introduce new innovations in software design. In stark contrast to hardware development, software seems to follow a law of Technological Saturation.

I do not think conceding the possibility of a technological singularity is a religious belief – asserting its certainty would be, and denying the possibility could be.

I don’t think a singularity is possible without the development of some sort of strong AI. I am not sure whether or not it is possible, or when it will occur if it is possible. It does not seem to me to be outside the bounds of reason that it could happen in the next 30 years. Or it could take 300 years. I don’t know.

However, all Vinge did was look at the numbers which indicated a vast increase in technological innovation around 2030, and try to figure out how it might happen. Interesting that a tech which could conceivably lead to a singularity (AI) just happens to have been invented…

Well, let me quote this article by Vinge:

I’ve always been more enamoured with Vinge’s later articles that propose a second method–rather than AI (Artificial Intelligence), it’s possible we could instead perfect a man-machine interface of some kind (and IIRC, we’re making some progress in converting nervous system signals back and forth from digital in real time) and thus develop IA (Intelligence Augmentation), combining human intelligence with computer memory and processing speed.

Seems altogether more likely–to a certain extent, hardware is easy and algorithms are hard, in that you can often improve hardware with sheer brute force, while software requires breakthroughs.

What the singularitists fail to take into account, IMHO, is that technical innovation becomes increasingly difficult the more advanced you go, and that acts as a natural brake on progress. Early in the development process it’s easy to make advances, since you’re essentially picking the low-hanging fruit. Much later on, however, things become difficult: you need to think of things which 100 years of clever people have missed, and you need to either be especially brilliant or work in relatively unimportant fields which have not received much attention previously.

I’m starting to think that even the prophets of the singularity don’t really believe in it themselves. Kurzweil has been using that deceptive chart of average human lifespans since the 1800s for well over 10 years now, conveniently ending it in 1990, even though I can point out at least a dozen times where opponents have quite rightly claimed that a) the maximum human lifespan has stayed remarkably fixed at around 120 and b) average lifespan plateaued around 1990 and may actually have dropped very slightly in the last few years due to obesity. The fact that he hasn’t responded to these claims speaks to me more of wilful deception than of genuine misguidedness.

As far as he’s concerned, he’s got it pretty sweet so far, cushy lecture circuits, the occasional book, practically raw adulation from tech-fetish magazines like Wired. Why would he want to give that up and admit that his ideas don’t hold up to the flimsiest of scrutiny?

I agree. I’ve never believed that the singularity would replace humans, only be an evolutionary step.

There are already mind-machine interfaces. My friend plays MIDI on his EEG helmet, which was supposed to be for treating ADD.

Shalmanese: I think you’re flat out wrong. Advancement is moving so quickly because of computers and the sheer number of people alive today. As I said there are already mind-machine interfaces. Nanotech already exists. What’s really holding us back is the culture not being ready for the advancements, not the technical aspects. Our advancement is limited more by our imaginations than by our intellect.

Erek

Well, we know strong AI is possible. Human brains are an existence proof: they are constructed of ordinary physical matter arranged in particular ways, so there is absolutely no theoretical reason we couldn’t build an artificial intelligence. It’s just that, as of today, we have no idea how to actually do it.

If we can build an AI as smart as a human, it would be pretty trivial to build one twice as smart as a human. And so on. Once you’ve got AIs smarter than humans designing the next generation of AIs then all previous rules of technological innovation go out the window.
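A toy way to see why that argument points at a literal “singularity” (the numbers below are purely illustrative assumptions, not anything from Vinge or anyone in this thread): if each AI generation is some factor more capable and, being smarter, designs its successor in a correspondingly shorter time, capability grows without bound while the elapsed time converges to a finite sum.

```python
# Purely illustrative numbers: suppose each AI generation is twice as capable
# as the last and therefore designs its successor in half the time.
years_elapsed = 0.0
time_for_next_generation = 10.0   # assumed time for the first self-improvement step
capability = 1.0                  # in units of "human-equivalent intelligence"

for generation in range(30):
    years_elapsed += time_for_next_generation
    capability *= 2
    time_for_next_generation /= 2

print(f"capability after 30 generations: {capability:.0f}x human")
print(f"total time elapsed: {years_elapsed:.4f} years")   # converges toward 20 years
```

The design times form a geometric series that sums to a finite limit, which is the sense in which “all previous rules go out the window” – though the whole sketch hinges on the assumption that a smarter designer really does finish the next generation proportionally faster.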

However, I’m certainly not able to predict when we’ll get strong AI, if ever. What if civilization collapses? What if our brains aren’t smart enough to figure out how our brains work? But it seems likely to me that with enough scientific research we’ll get somewhere eventually, whether that’s 10 years or 100 years or 1000 years or 1,000,000 years from now.

Are there any arguments for a technological singularity that do NOT require technologically improved intelligence? All our advancements so far have relied mainly on what some call “extelligence”: ever more powerful and sophisticated ways of networking the brains we have. The main advances in the last 100,000 years have been speech, writing, mass printing, and computerized information processing. Could some extension of this process bring about a singularity without the need for superhuman intelligence?

I’d say there’s a lot more compatibility now than there has been in the past. You can write a newsletter in Microsoft Word and print it in OpenOffice. You can run the same program on a cell phone, a PDA, a Mac, and a PC.

Also, increased hardware capabilities make it feasible to run optimizing compilers, garbage collection systems, dynamic recompilation, and other tools to make software more reliable, better performing, and easier to write correctly. Gone are the days when you had to carry your source code around in a box and wait for off-peak hours to compile and test it; now your text editor can point out syntax errors before you’re even finished typing the line.
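As a minimal sketch of the “syntax errors before you’re even finished typing” point, this is the kind of check an editor can run on your buffer without executing anything; the snippet just uses Python’s built-in compile(), and the exact error message will vary with interpreter version.

```python
# A buffer with a deliberate mistake (missing colon after the def line).
source = "def greet(name)\n    print('hello', name)\n"

try:
    # Compile without executing: essentially what an editor's
    # on-the-fly syntax check does behind the scenes.
    compile(source, "<buffer>", "exec")
except SyntaxError as err:
    print(f"line {err.lineno}: {err.msg}")   # e.g. "line 1: expected ':'"
```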

Ten years from now, I think you’re going to wonder how you ever left JIT compilation and runtime code verification off the list.

Lumpy, I fail to see the difference. Parallel processing is just as valid for making supercomputers as simply increasing clock speeds… Beowulf clusters of quite average computers are the fastest computers on earth.

What exactly does it mean to have an AI “smarter” than humans? It can process faster? Store more data? Computers already do that. Are we going to create some AI that decides it’s smarter than humans so it must destroy us? Are we going to have depressed AI robots sulking about because they have a brain the size of a planet and we make them open doors and take out garbage? Am I going to have to tolerate my AI dishwasher calling me “meatbag” every morning?

Probably not.

I would think that if we could create AI at least as smart as a human, it’s only a few steps away to “download” our own brain into the same kind of system. Kind of like taking your hard drive from your old PC, connecting it to your new Pentium D and running it under a better (or at least more feature packed) operating system.

IMHO, a “singularity” would be the point where there exists the potential for seamless integration of human brain and computer.

I think it would also imply that since the AI is a conscious entity, it can learn things, integrate them with what it has already learned, and develop new ideas and understanding about other things. Coupled with much faster processing speeds and much vaster knowledge than we already have, it seems likely that such an intelligence would outthink any genius in recorded history. Remember da Vinci with his drawings of helicopters and tanks back in the Renaissance? Now think of something a LOT smarter than da Vinci. Think of it turning its intelligence to the issue of making itself smarter. It’s not far down that path before we have intelligences that outstrip ours the way we outstrip insects.

Have you never watched The Jetsons?

Can I see a cite that shows that we are more intelligent than insects?

Human intelligence is hardly tapped: we have a lot of cultural mores that hold us back, and our system of education could be overhauled tremendously, and we all know it. We have barely begun to unlock human potential. We won’t find an AI that is smarter than humans, because any singularity is going to include the interface between human intelligence and the Artificial Intelligence. Human intelligence is far faster than that of any computer; it’s the training that we are held back by. We have computers to take a load off ourselves and to make it easier to verify results. There are human beings who can do massive calculations in their heads instantaneously; it’s showing you the steps they took to get there that is toilsome, unlike a computer that can just print them out.

The singularity will be an agglomeration of human intelligences augmented by machines, it won’t be a machine replacing human beings.

Erek

A “Technology Singularity” is more of a theoretical construct. IMHO, it would actually go beyond machines integrating with or replacing humans. It would be a point where human and machine are so integrated and interchangeable that it would be impossible to tell the difference. It would be a “singularity” because all the normal rules we associate with being human go out the window. Maybe I want to upload my brain into an AI dog’s body and be a Labrador for a while…or a toaster oven.

Whether we ever reach such a state is another question. It may be that we can no more approach such a singularity than we could approach the collapsed star variety.

If you wish to maintain that insects are as smart as or smarter than us, you are the one who is making an extraordinary claim and you are the one who must provide a cite. My cite is that we can play the banjo. Ever seen a BUG that can play the banjo? I thought not.

I will grant you that. “Getting and spending, we lay waste to our powers” and all that.

Not necessarily. If an AI becomes self-aware and then intelligent on a scale we cannot measure or comprehend, it probably won’t want to talk to us, particularly, any more than we really want to talk to insects. We wouldn’t understand anything it had to say anyway.

Wrong. Human neurons interact biochemically … we actually send chemical messengers from neuron to neuron to tell the next one what signal to generate. Computers operate at something far closer to the speed of light than we do. The speed of human perception and thought will be easily mimicked and outpaced once we understand how it happens.

I need not claim anything, I am merely disputing your induction.

If Google becomes self-aware and then intelligent, it will do so in a way where interacting with humans is a part of its life cycle, and the idea that it would act in some fashion other than to interact with its creators is an induction that I feel will probably turn out to be false.

So you’re saying that electricity doesn’t flow through the nervous system?

I agree and disagree.

Advances probably are harder, requiring more educational background, more money, more cross-discipline information, etc., and humans are certainly a limiting factor.

But that is offset through the use of modern tools, methods and collaboration that were not possible or as efficient in the past.

The end result is that the rate of change appears to be increasing, and outstripping humans ability to incorporate into daily lives and cultures. That’s probably one of the biggest limiting factors, in my opinion.

No, it would not be trivial, because “smart” is not a concrete term that we can easily formalize or measure at that high a level.

Intelligence is merely an optimization problem.
For a given set of goals (I’ll come back to the goal thing in a minute), it allows you to achieve those goals while expending less energy than if you did not have intelligence. For example, if the goal is to find food, then the inefficient method is to run around and check everywhere. The efficient method is to keep track of clues in the environment through experience, and remember where food is most likely going to be found.

If you had a computer intelligence that was able to take in more inputs and process those inputs quicker than a human, and work at a higher level of abstraction so higher level patterns can be included in the process, then you probably would have a system that could find food better than the human.
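Here is a toy sketch of that food-finding example (the world size, food locations, and function names are all made up for illustration): an agent that remembers where food has turned up before spends far fewer probes, i.e. far less energy, than one that searches blindly.

```python
import random

WORLD_SIZE = 100
FOOD_SPOTS = {12, 37, 88}     # food keeps reappearing at the same few places

def blind_search():
    """Check random locations until food is found; return the number of probes."""
    probes = 0
    while True:
        probes += 1
        if random.randrange(WORLD_SIZE) in FOOD_SPOTS:
            return probes

def informed_search(memory):
    """Check remembered locations first; fall back to blind search if they fail."""
    for probes, spot in enumerate(memory, start=1):
        if spot in FOOD_SPOTS:
            return probes
    return len(memory) + blind_search()

memory = [12, 37]             # locations learned from past experience

trials = 1000
print("average probes, blind:   ", sum(blind_search() for _ in range(trials)) / trials)
print("average probes, informed:", sum(informed_search(memory) for _ in range(trials)) / trials)
```

The blind searcher needs dozens of probes on average; the one with a memory of past clues finds food almost immediately, which is the sense in which intelligence is an optimization over energy spent.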

But programming in the goal is non-trivial.

If you gave the AI the same goals as humans, would you have something smarter than humans? Maybe, but only in the domain bounded by human goals, which is in reality a pretty small domain.

So what goals do you give this AI to make it truly smarter, in the sense that it applies its intelligence to areas beyond human procreation and a handful of other basic human needs?

Also, is it possible to have an intelligence that is smarter with respect to solving every type of problem there is? That would require some contradictory goals (e.g. competition vs cooperation).

I think we will make significant progress in these areas, but IMHO, it is far less straightforward and trivial than most people realize, even if we do begin to match human capabilities in many areas.

That’s true for certain classes of problems.

But there are classes of problems that gain little from clusters.

For some problems humans are faster than computers.
For some problems computers are faster than humans.

They are 2 different tools optimized for 2 different sets of problems.
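One standard way to make “some problems gain little from clusters” concrete is Amdahl’s law (not cited by anyone above, but it captures the same point): any serial portion of a problem puts a hard ceiling on how much adding nodes can help.

```python
def amdahl_speedup(parallel_fraction: float, processors: int) -> float:
    """Maximum speedup when only `parallel_fraction` of the work parallelizes."""
    serial_fraction = 1.0 - parallel_fraction
    return 1.0 / (serial_fraction + parallel_fraction / processors)

# A problem that is 95% parallelizable tops out around 20x no matter how many
# cluster nodes are thrown at it; a mostly serial problem barely moves at all.
for nodes in (10, 100, 1000):
    print(f"{nodes:>4} nodes: {amdahl_speedup(0.95, nodes):5.1f}x (95% parallel), "
          f"{amdahl_speedup(0.20, nodes):4.2f}x (20% parallel)")
```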

mswas and Evil Captor on insects
There are probably numerous areas where humans are smarter than insects, but I would guess there are specific brain functions in insects that are optimized beyond what a human has. Smell analysis? Modeling the environment through smell/chemicals?

Here’s two questions I would ask an AI to determine if it was smarter than me:

  1. Go ahead- impress me. Make my jaw drop with astonishment at how smart you are.

  2. Are you smart enough to get a really stupid person to understand something?

(you don’t suppose those two questions are related somehow, do you? :smiley: )