The Technological Singularity

Is this real?

The point of calling it a “singularity” is that while you might see it coming, you can’t see what lies on the other side. In that sense, it has happened before. Some in the 19th century predicted the automobile, but nobody predicted – it’s hard to imagine how anyone could have predicted – parking problems, gridlock, the Interstate Highway System, strip malls, suburban sprawl, Mothers Against Drunk Driving, or petroleum wars. When Thomas Newcomen invented a steam engine to pump out flooded coal mines in 1712, he never envisioned using the coal for anything but home heating; nobody thought it would lead to an Industrial Revolution, with all it has brought us, for good and for ill.

My thoughts on the subject:

  1. Ray Kurzweil’s projection of 2045 for human-like intelligence seems waaaaaay too early. Based on current rates of technological innovation (hardware, not software), and based on previous understandings of our brain, it won’t be until 2020 that computer hardware can simulate a brain in real time (a rough illustrative estimate is sketched just after this list). However, that estimate only includes neurons and connections; it doesn’t include glial cells, which are 10 times as numerous in our brain and have recently been shown to moderate neural activity. In addition, the numerous chemical processes (creation of 200 different proteins just to fire one neuron) may have significant effects on the computational outcome. At the same time, our understanding of neural networks, learning, memory, etc. is certainly increasing, but we have a loooong way to go. I just don’t see how 2045 could be realistic.

  2. Seems like we could start to see a slowed rate of progress unless we build good tools to help manage the knowledge, especially between disciplines. At some point, the amount of time to acquire the background information required for a person to be up to speed in his or her field may exceed the person’s productive lifespan.
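To make point 1 concrete, here is a rough back-of-the-envelope sketch of that kind of hardware estimate. Every figure in it is an assumed round number chosen for illustration, not a measured value.

```python
# Rough, illustrative estimate of the compute needed to simulate a brain in
# real time. Every figure below is an assumed round number for the sake of
# the arithmetic, not a measured value.

neurons = 1e11               # ~100 billion neurons (assumption)
synapses_per_neuron = 1e4    # ~10,000 connections per neuron (assumption)
events_per_second = 1e3      # effective update rate per synapse (assumption)
flops_per_event = 10         # arithmetic per synaptic event (assumption)

neural_flops = neurons * synapses_per_neuron * events_per_second * flops_per_event
print(f"Neurons and connections alone: ~{neural_flops:.0e} FLOPS")

# The post notes glial cells are roughly 10x as numerous, and that chemical
# processes may matter too; if modeling them multiplies the cost by, say,
# another 10x and 100x (pure guesses), the requirement balloons accordingly.
with_glia_and_chemistry = neural_flops * 10 * 100
print(f"With glia and chemistry (very crude): ~{with_glia_and_chemistry:.0e} FLOPS")
```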

No. The fundamental assumption that progress can increase exponentially without limit is completely wrong. What it overlooks is that further progress becomes increasingly difficult the more advanced you become. Almost every technological progress curve we’ve ever seen is an S curve: a long period of almost no progress, then what looks like a modest amount of progress, then a stretch that rises so rapidly it looks exponential, and finally a slowdown that flattens out again at a higher level.
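A minimal sketch of the distinction being drawn here, with arbitrary parameters chosen only to make the shapes visible: an exponential keeps climbing forever, while a logistic (S-shaped) curve looks exponential early on and then flattens against a ceiling.

```python
import math

def exponential(t, rate=0.5):
    # Unbounded growth: keeps compounding on a fixed schedule.
    return math.exp(rate * t)

def logistic(t, rate=0.5, ceiling=100.0):
    # S curve: nearly indistinguishable from the exponential at first,
    # then saturates as it approaches the ceiling.
    return ceiling / (1.0 + (ceiling - 1.0) * math.exp(-rate * t))

for t in range(0, 31, 5):
    print(f"t={t:2d}  exponential={exponential(t):12.1f}  s-curve={logistic(t):6.1f}")
```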

That’s not a fundamental assumption of Kurzweil’s Technological Singularity concept. Progress doesn’t need to increase exponentially without limit, just far enough that it’ll leave humanity and our slow meat-based cognition in the dust.

There’s also some philosophical doubt about a central premise of Singularity theories: that a human-created AI or an enhanced human intelligence could not only be smarter than its creators but also in turn create a successor smarter than itself.

For example, suppose you create an AI that virtually everyone agrees is astonishingly intelligent: the computer equivalent of a Sherlock Holmes. This AI is then asked to direct its efforts toward creating a still smarter AI. It does so, only the resulting second-generation AI does nothing (to human comprehension) but babble. When asked about this, the first AI responds that its creation is so intelligent that humans cannot comprehend what it’s talking about. Now either this is the case OR the first AI was flawed in some subtle way and created an even more grievously flawed successor. How could we ever know? You might adopt a standard of “the proof of the pudding is in the eating”, i.e., accept it if it’s demonstrably functional on a superhuman level. But the weak link there is having to impress us; suppose it’s a “pearls before swine” situation? It might be like any number of Honeymooners skits in which Ed Norton is too moronic to realize just how stupid he is, and thinks Ralph is the one at fault.

Hasn’t this already happened though? How would this be different in principle from the manner in which modern humans evolved from Homo erectus, which in turn evolved from (from our perspective) an even dimmer primate?

Of course they’re both extinct now, so maybe it’s not the most comforting analogy.

We are smarter than our predecessors by natural selection, an unplanned process that our predecessors had no hand in. The exponentially increased intelligence version of the Singularity is based on the idea that increased intelligence can be achieved by design. Even things like genetic algorithms fall under this, since the intelligence setting up the trials has to determine what’s selected for.
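A minimal sketch of that point about genetic algorithms: even though the search itself is “evolutionary,” the fitness function that decides what gets selected is still written by the human designer. The toy target and mutation settings here are invented purely for illustration.

```python
import random

TARGET = 42  # the designer decides what "fitness" means -- here, closeness to 42

def fitness(x):
    # Human-specified selection criterion: the search can only optimize
    # whatever its designer chose to reward.
    return -abs(x - TARGET)

def evolve(generations=50, pop_size=20, mutation=5):
    population = [random.randint(0, 100) for _ in range(pop_size)]
    for _ in range(generations):
        # Keep the fitter half, then refill the population with mutated copies.
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        population = survivors + [
            s + random.randint(-mutation, mutation) for s in survivors
        ]
    return max(population, key=fitness)

print(evolve())  # converges toward the designer's target, not toward anything "smarter"
```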

But how would that affect the philosophical argument? If one assumes that one can design an intelligence greater than oneself, then what would prevent that designed intelligence from performing the same feat? I don’t see the philosophical problem.

Your average lemur would surely regard *Homo erectus’* penchant for playing with fire as completely incomprehensible behavior: don’t they know fire is dangerous? Erectus would also be at a loss when confronted with the works of modern human intelligence, and probably wouldn’t even interpret them as rational behavior on a continuum with skills such as flint knapping.

For that matter, if they had the capacity, lemurs might consider *Homo erectus* to have a “flawed intelligence” based on the whole fire-flailing-around, pointed-stick-wielding behavior, instead of sensibly keeping to the tree canopy and eating fruit like any smart lemur would do.

I agree totally, but with the proviso that building a simulation of the brain does nothing to advance true AI. If we load in everything, all connections etc., we’d get a copy of the person we got the connections from. That’s human intelligence, and we can create that cheaply already.

A non-human intelligence is all about the software. I took AI in 1971, and one of the books we used was from 1959. We really haven’t come a long way since then. Yes, we developed good heuristics and algorithms, and with those we can now achieve some of the AI goals from 35 years ago – which included equation solving and automated directions. But we’ve gotten nowhere near understanding intelligence.

If we can figure out how to scan our brains and produce a good simulator, why not load ourselves into a computer or robot? With backups, we could live forever. Given that, who cares about developing AIs?

One theme in sf I hate is the story of a big enough computer becoming intelligent just from having enough hardware. Stupid, stupid, stupid.

I believe the theory is that once you have a perfect simulation of a human brain, it’s easy to up the clock speed – not to mention easy to experiment with upgrading the brain.

The field of AI is the victim of constantly moving goalposts. To say that we’ve accomplished many of the goals of AI study (which we have: voice recognition, image recognition, multi-agent systems, self-navigating robots) and then turn around and say that we haven’t really understood intelligence is disingenuous. AI research is directed at specific problems, and it’s done awfully well with those specifics. It’s not directed at the general intelligence problem because it’s so hard to define what success really is.

Whenever people start talking about exponential improvements in AI and Moore’s Law and how great and wonderful everything will be in just a few more years… I always ask them one simple question: how many spam emails did you delete today?

We’ve had the internet for a decade now, and nobody is even close to figuring out how to stop the spammers. But the believers in futurology are convinced that only a decade or two from now, Artificial Intelligence will make everything perfect.

I predict that in a few decades we may have AI that can pilot a passenger aircraft – but we’ll still be getting spammed by computers too stupid to figure out that I don’t want to buy prescription meds at half price.

That’s kind of scary, actually. If spam is being generated by AI, then it’s smarter than we are: it constantly adapts, and despite the best efforts of some very brilliant people to stop it, it keeps coming.

Yikes.

No, he means that technology is more likely to lead to banal dead-ends than incredible advances.

There is also one other problem with mapping and digitally remaking a human brain: you cannot make a perfect simulation of an analogue system with a digital system. It always fails eventually, because there are always inherent inaccuracies in the program, and they always increase as the simulation runs. Given how many “events” occur within the human brain, it remains to be seen whether computer technology can sustain such a simulation even for short periods.
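A minimal sketch of the kind of drift being described, using a deliberately simple chaotic toy system (the choice of map and the size of the perturbation are illustrative assumptions, not a brain model): two runs that start a rounding error apart end up nowhere near each other after enough iterations.

```python
# The logistic map at r=4 is chaotic: tiny numerical inaccuracies are amplified
# on every step, so any fixed-precision simulation of a sensitive analogue
# system eventually diverges from the "true" trajectory.

def logistic_map(x0, iterations, r=4.0):
    x = x0
    for _ in range(iterations):
        x = r * x * (1.0 - x)
    return x

for n in (10, 25, 50, 100):
    a = logistic_map(0.3, n)
    b = logistic_map(0.3 + 1e-12, n)   # perturbation on the order of rounding error
    print(f"after {n:3d} steps: divergence = {abs(a - b):.3f}")
```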

This is the second time I’ve seen this assertion on the boards, and the ignorance of it grates on me. Firstly, we could solve the spam problem tomorrow if we wanted to. The only difficulty is convincing everybody en masse to move to a more secure protocol than SMTP, with some sort of regulatory body in control of it.

Second of all, we’ve made huge strides in spam filtering, which has led to ever-increasing sophistication on the other side. Spam detectors are never going to be perfect, because spam filtering is an ill-posed problem, but I’ve read research saying modern spam filters are better than humans at filtering out spam.
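For what it’s worth, here is a minimal sketch of the classic statistical approach those filters grew out of – a toy word-count model in the naive Bayes style. The tiny training set and the smoothing constant are made up purely for illustration.

```python
import math
from collections import Counter

# Toy training data -- invented examples, just to show the mechanics.
spam = ["cheap meds online now", "win cash now", "cheap meds cheap"]
ham = ["meeting moved to noon", "lunch now or later", "project notes attached"]

def train(messages):
    counts = Counter(word for m in messages for word in m.split())
    return counts, sum(counts.values())

spam_counts, spam_total = train(spam)
ham_counts, ham_total = train(ham)

def spam_score(message, alpha=1.0):
    # Sum of per-word log-likelihood ratios, with a small smoothing constant
    # so words unseen in training don't zero out the ratio.
    score = 0.0
    for word in message.split():
        p_spam = (spam_counts[word] + alpha) / (spam_total + alpha)
        p_ham = (ham_counts[word] + alpha) / (ham_total + alpha)
        score += math.log(p_spam / p_ham)
    return score  # positive means "looks more like spam"

print(spam_score("cheap meds now"))       # positive
print(spam_score("meeting notes later"))  # negative
```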

Spam is a social problem, not a technological one, and the solution will be social. Accusing AI researchers of not being able to solve spam is like blaming doctors for not decreasing the rate of car accidents.

Faster clock speed is faster, not different. Trust me, we do that in the processor design world all the time. Fiddling with the hardware will teach us a lot about how the hardware of the brain works. But I don’t think it will help much in creating a non-human intelligence.

Don’t get me wrong - I believe we will build simulators, and they will create as much of a revolution as the decoding of DNA. But it won’t be AI.

When I took AI, Pat Winston was crowing about how a chess-playing program had beaten Dreyfus. Of course, chess-playing programs are orders of magnitude better now, and are as dominant as checkers-playing programs were back then.

Image recognition is basic hardware - but plenty of non-intelligent animals do it really well. AI research back then was directed, in large part, at the general problem, and research on the specific problems was supposed to be useful since it would model how we think. Those solutions turned out not to do so, so I can’t blame AI researchers for focussing on stuff that they could do.

As an example of the more general research, Minsky had just published his paper on frames. (Which is where I learned why I always got lost around the Boston Common.) But AI seems to have wound up modeling separate pieces of a very large elephant, some accurately and some by functional matches. They don’t seem to have a clue as to how they fit together yet.

Actually, part of the issue is that we conflate two distinct studies. Artificial Intelligence may well be useful. We also use it all the time; it’s present in a vast variety of games, toys, appliances, etc.

Artificial Consciousness is a very different bag.

Since I’m not anywhere near an expert, I’m staying away from the mainstream debate. However, I just wonder how well this assertion was thought out, given that just above it you talk about replication simply creating a “clone” of the human it was modeled on. If we scan a brain into a computer that is then implanted in our head, the old us is dead or floating in a jar being supported by Super Sophisticated Life Support. What’s moving our body around? A facsimile. We will still die, while something else takes our place in society. If that turns out not to be the case, which would be extremely hard to test, then we get into metaphysical issues like evidence of a soul.