With only a passing knowledge of what the computer Singularity is, and having heard Ray Kurzweil’s prediction that it will come in 2045, I must ask: why will it take so long?
Stuff like this is why I started my other thread about how tied up and deliberately slowed down science is.
The technological singularity actually isn’t a very well-defined term (though it’s well enough defined to be at least moderately useful). Kurzweil’s definition, I believe, is the one where human intelligence is augmented to such a high degree that the intellectual (and, by extension, political and social) landscape is radically and almost unpredictably altered.
The reason is that while we’re gaining an understanding of AI and neuroscience, there’s a hell of a lot that can go wrong. Maybe once we demystify the human brain, it turns out to be even more of a morass of craziness underneath? Maybe for AI to work we really DO need some sort of “soul?” Maybe AI is just so complex that creating it will be more difficult than projections suggest? Maybe if we go the neural implant route, the body will just reject the implants and attack the brain around them, leaving us back where we started?
Basically we’re juuuuust starting to break into the field, neural implants are just getting off the ground and come with their own giant host of problems. I do AI and we’ve developed a lot of very nice techniques for bringing stuff together, or summarizing databases of information. And while we have some impressive learning techniques, right now a computer that can learn and adapt to new situations is only possible in the most limited circumstances.
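As a concrete illustration of just how limited "a computer that can learn" was in practice, here is a sketch of one of the oldest learning techniques, the perceptron (purely illustrative, not something from the post): it can learn a linearly separable function like AND from examples, but famously cannot learn XOR at all without extra machinery.

```python
# A minimal perceptron: learns AND from examples, illustrating the
# "most limited circumstances" in which early machine learning worked.
# All values here (learning rate, epoch count) are illustrative choices.

def train_perceptron(data, epochs=20, lr=0.1):
    """data: list of ((x1, x2), target) pairs, targets 0 or 1."""
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in data:
            out = 1 if w1 * x1 + w2 * x2 + b > 0 else 0
            err = target - out          # 0 when correct; +-1 when wrong
            w1 += lr * err * x1         # classic perceptron update rule
            w2 += lr * err * x2
            b += lr * err
    return w1, w2, b

and_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w1, w2, b = train_perceptron(and_data)
print([1 if w1 * x1 + w2 * x2 + b > 0 else 0 for (x1, x2), _ in and_data])
# → [0, 0, 0, 1]
```

Swap `and_data` for XOR data and the same loop never converges, which is exactly the kind of hard wall the poster is gesturing at.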
Basically, there are a lot of hurdles to overcome to get just ONE avenue to work, and at this stage of technological development we don’t even have a clear understanding of what all the hurdles are.
I think it is because it needs commercial fusion power, which has been thirty years away for the past fifty years.
And if you only have a “passing knowledge of what the computer Singularity is,” on what do you base your feeling that it should come faster? It’s 2012, and we still don’t have robot butlers or flying cars – futurists’ predictions have a way of being both overly optimistic and delightfully wrong-headed.
I notice Ray Kurzweil will be 97 in 2045. I think the general rule on making predictions is to make sure they are so far in the future that you will either be dead or no one will remember you ever made a prediction. 2045 sounds like a perfect match for either scenario.
I am fairly sure that, since I first heard about the concept of the singularity, the date at which it is supposed to happen has been receding further into the future. It’s a cool concept, and logically it seems like it could happen, but I think it’s not the certainty that some like to portray it as.
I was trying to stick to an analysis of tech hurdles but… yeah. Alan Turing, smart as he was, predicted we’d have AI as good as a human by now. The chances of a singularity happening by midway through the century are about as good as those of the ten thousand predictions about when we’ll have flying cars and cold fusion.
Actually, many have accused him of quite the opposite. Kurzweil takes dozens of pills every day and has written a couple of books about increasing longevity.
So perhaps the temptation is to set the singularity date for some point he could conceivably live to.
He could well still be alive, but remembered? Thirty years is a long time in popular culture, and I would not be surprised to see him disappear into obscurity in that time. In fact, I would be pretty amazed if he were remembered.
The OP should do some reading on the subject, although just starting with Technological Singularity on Wiki ought to prove that the assumptions made are totally wrong.
To put it into a sentence, forecasters have always been entirely wrong in thinking that creating machine intelligence or consciousness was an easily solvable task. The more we learn about the subject of consciousness the more we realize that we know next to nothing about how our brain works and literally nothing about how to replicate that artificially. Gauzy hand-waving about the machines somehow automatically becoming self-aware when they had enough processing power is now laughable.
What’s also funny is that the definition of a Singularity is a place where no predictions can be made. The original concept was that so many changes would occur so nearly simultaneously that the difference would be qualitative and not merely quantitative. That’s a pretty interesting concept in itself and is worth discussing. But by their own definition anybody who talks about what happens after the Singularity is full of it.
The future is guaranteed to be different, in ways literally no one expects. But the Singularity looks less likely every day, not more so.
Also, Kurzweil, while probably the best known futurist, is quite probably no better than a layperson at actually predicting future technology.
FWIW, I always treat futurists as technological cheerleaders. They try to inspire people and provide awareness of new (current) technologies, but there’s often very little actual substance to what they predict.
Also, it just doesn’t feel like we’re on the home straight.
I appreciate that an exponential increase in technology means that even quite close to the singularity there will still be vast changes ahead.
But our rate of technological progress is painfully slow IMO (though obviously I have huge respect for those actually working to advance our technology and knowledge).
Certainly doesn’t seem like godlike powers are around the corner.
If you teleported me back ten years and I could show off some of the latest tech, what could I take / talk about? Me2002 would find smartphones one of the most boring and obvious developments (I’d be quite impressed with the back in time teleporter though).
This is what Kurzweil did, you tell me how accurate it sounds:
Neuroscientists:
We don’t know how the brain stores memories
We don’t know how many bit states and what computational power each neuron has
We don’t know to what extent other cells affect computation
Even if we did know these things we don’t know what the hell consciousness is and how to achieve it
Furthermore, computer scientists have been working on AI for 50 years and are barely scratching the surface
Kurzweil:
Ok, so that sounds like 35 years until you get it all figured out
Kurzweil’s claim is that none of the above is necessary, because we’ll simply be able to simulate a brain to a sufficient level of complexity.
This isn’t totally unreasonable. You don’t have to understand how something works if you can build a sufficiently complex simulation of it. If we can build a good enough computational model of the physical processes in the brain, that may be good enough. And if you can make a good enough simulation of the brain in year T, then you can make one 4 times faster in year T+3, and 100 times faster in year T+10.
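The 4x and 100x figures follow directly from assuming a Moore’s-law-style doubling of capability roughly every 18 months. A quick sanity check (the 18-month doubling period is an assumption; the historical rate has varied):

```python
# Sanity-checking the "4x in 3 years, 100x in 10 years" arithmetic,
# assuming capability doubles every 18 months (1.5 years).

def speedup(years, doubling_period=1.5):
    """Capability multiplier after `years` of steady exponential growth."""
    return 2 ** (years / doubling_period)

print(f"after 3 years:  ~{speedup(3):.0f}x")   # ~4x
print(f"after 10 years: ~{speedup(10):.0f}x")  # ~100x
```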
Now, maybe Moore’s law doesn’t hold, and we won’t be able to build fast/cheap enough computers to run the simulation. Or maybe there’s something special about the brain that we won’t be able to simulate.
Kurzweil’s suggestion is that instead of trying to figure out how things work through our cleverness and carefully designed experiments/systems (which is essentially what neuroscientists and AI researchers have been doing), we’ll get where we want to go just by throwing increasing amounts of data and processing power together and seeing what comes out. There’s some reason to think this might work. Some of the best progress in voice recognition and natural language translation (both traditional difficult AI problems) in recent years has been accomplished by doing just that.
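A toy sketch of that data-driven approach, in the spirit of (but far simpler than) the statistical methods behind that progress in speech and translation: classify short texts by character-bigram statistics learned from examples, with no hand-written linguistic rules. The training texts and labels below are invented for illustration.

```python
# Data-driven language identification: no rules, just counts.
from collections import Counter

def bigrams(text):
    """Character-pair counts for a lowercased string."""
    text = text.lower()
    return Counter(text[i:i + 2] for i in range(len(text) - 1))

def train(examples):
    """examples: list of (text, label); pool bigram counts per label."""
    models = {}
    for text, label in examples:
        models.setdefault(label, Counter()).update(bigrams(text))
    return models

def classify(text, models):
    """Pick the label whose bigram frequencies best match the text."""
    counts = bigrams(text)
    def score(model):
        total = sum(model.values())
        return sum(n * model[bg] / total for bg, n in counts.items())
    return max(models, key=lambda label: score(models[label]))

examples = [
    ("the quick brown fox jumps over the lazy dog", "english"),
    ("she sells sea shells by the sea shore", "english"),
    ("der schnelle braune fuchs springt ueber den hund", "german"),
    ("sie verkauft muscheln an der kueste", "german"),
]
models = train(examples)
print(classify("the shells on the shore", models))    # → english
print(classify("der fuchs springt schnell", models))  # → german
```

Feed it more data and it gets better, with zero added cleverness about grammar – which is the whole point of the argument above.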
Kurzweil isn’t claiming that we’ll figure out how to solve all those hard problems in a few decades. He’s claiming that we’ll figure out how to build a computer that will figure out how to solve those hard problems (including the problem of building better computers).
Which is exactly what you’d expect to happen, since it’s not actually a singularity: It’s a horizon. If someone in 1980 had predicted that, 30-odd years in the future, the technological landscape would so change how we thought that a man in 1980 wouldn’t recognize it, he’d have been right. Meanwhile, though, here we are in that far-off future, and we recognize it just fine: As we have moved on, so have our horizons. And in the 2040s, when people are approaching our horizons, things will look perfectly normal to them, and they’ll be looking forward to yet a further horizon.
I’m in no way even close to understanding what all this really means. However, as I was reading it, I remembered a story I just read about MIT research that has started to nail down where/how memories are stored. The implication appears to me to be that, once this is better understood, one might be able to create a memory and then “imprint” it on another brain. Or maybe I really just don’t understand. (Very likely)
Anyway it does look to me like this could be at least part of the puzzle we are looking at in this thread.
Related to and supporting that is this hypothesis/research regarding memory storage and possible computation taking place in microtubules inside the neuron:
If true, the brain could have dramatically more memory and processing power than previously estimated.
You got that right. When I took AI in 1971, we used a book from 1959 which said much of AI would be happening real soon now. When the 386 came out, USA Today said that AI was a done deal because of all that power.
Most of the AI projects we studied in 1971 have come to pass. They included chess as good as a grandmaster’s, solving complex calculus problems, some vision stuff, and generating directions between two places – stuff we use every day. But I’d say we’re not much closer to AI today than we were then. We might be able to build things that are good assistants (not good enough for the Turing Test, but reasonably helpful), but something to replace us – no way.
Copying is not understanding. Even if we could simulate a brain - which would involve being able to image neurons deep inside it - we wouldn’t necessarily know how it works. And I’m dubious. We can’t do the much easier job of a circuit level simulation of a processor or even a decent sized ASIC today, and that is a hell of a lot easier than simulating a brain.
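Some rough orders of magnitude behind that comparison – every number below is a commonly cited ballpark estimate, not a measurement:

```python
# Scale comparison: simulating a CPU vs. simulating a brain.
# All figures are order-of-magnitude estimates.
transistors_per_cpu = 1e9   # high-end circa-2012 CPU, ~a billion transistors
neurons_in_brain = 8.6e10   # ~86 billion neurons (common estimate)
synapses_in_brain = 1e14    # ~100 trillion synapses (common estimate)

ratio = synapses_in_brain / transistors_per_cpu
print(f"roughly {ratio:,.0f}x more synapses than transistors")
```

And a transistor is a far simpler, far better-understood device than a synapse, so the gap in simulation difficulty is worse than the raw ratio suggests.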
We’ll have really good artificial planaria, though.