Once several breakthroughs have been made, a paradigm-changing shift can happen quickly.
Unless everybody is wrong, though, none of the true breakthroughs needed exist yet. And if that’s true, the chances of the singularity being just out of sight get tiny in a hurry. If a breakthrough in AI were made in a lab tomorrow, it would take more than five years to ramp it up to commercial manufacturing and distribute it to enough of the masses to change the world. Anything else is the realm of movie magic, where one computer gains consciousness and taps into the web to spread itself to every computer. That’s about the only scenario guaranteed not to occur.
I wouldn’t say that. I wouldn’t bet money on it, but I could foreseeably see, say, a reasonably advanced search engine having emergent behavior. It would probably be rather alien, and it may take us a while to realize it IS intelligent, but I wouldn’t rule out a very powerful, common, AI-based program like a search engine becoming the first “real AI.”
This is all I’m trying to say. (I agree with the other stuff you wrote in that post, too. Just excerpting this part for emphasis).
And, since evolved (or other non-human-cleverness-driven) solutions can sometimes produce better results, it’s not unreasonable to think that as we devote more CPU cycles to these sorts of problems, those sorts of solutions will become more and more common. If we assume that an evolved solution is, say, four orders of magnitude less likely than a human to come up with a good algorithm, then there’s a simple solution to bringing them to par: build 10,000 computers. Human cleverness has beaten out natural evolution in the “figuring out cool solutions” arena because natural evolution works so slowly. But Moore’s law is constantly turning up the dial on simulated evolution.
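To make concrete what I mean by simulated evolution, here’s a toy sketch: random variation plus selection pressure, with no human cleverness about the solution itself. The target bitstring, population size, and mutation rate are arbitrary assumptions I picked for illustration.

```python
# Minimal genetic-algorithm sketch: evolve a bitstring to match a target.
# All parameters here are toy assumptions, purely for illustration.
import random

TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1]
POP_SIZE = 50
MUTATION_RATE = 0.05

def fitness(genome):
    # Count matching bits; higher is better.
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome):
    # Flip each bit independently with small probability.
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

def crossover(a, b):
    # Single-point crossover between two parents.
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP_SIZE)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break
    # Keep the top half as parents; refill with mutated offspring.
    parents = population[: POP_SIZE // 2]
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

print(f"best after {generation} generations:", population[0])
```

The point is that nothing in there encodes how to solve the problem; throwing more hardware at it just means more candidates tried per unit time.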
I think Kurzweil is too optimistic in general, but, going back to the post I first responded to, saying something like “We aren’t nearly that close to understanding how the human brain works” misses the point entirely. We don’t necessarily need to understand how the brain works in order to achieve what he’s claiming. We just need to keep making more processing power and more detailed simulations. And the further we progress on each of those paths, the less human cleverness becomes a necessary component of progress.
With sufficiently large (especially many-dimensional) search spaces, stochastic algorithms like genetic algorithms are pretty much the only game in town, since it quickly becomes impossible to design an algorithm that will just “zero in on” the best solution immediately. With space-based gravitational wave analysis, for instance (where the dimensionality of your search space can be in the millions), it’s all done using Markov chain Monte Carlo, which is similar in some ways to genetic algorithms.
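For the curious, the core move in MCMC is the same “propose randomly, keep what scores well” idea. Here’s a toy Metropolis-Hastings sketch; the 1-D Gaussian target is a stand-in of my own choosing, not anything like the million-dimensional likelihood surfaces in actual gravitational wave searches.

```python
# Minimal Metropolis-Hastings sketch: sample from an unnormalized density
# via random-walk proposals. The 1-D Gaussian target is a toy assumption.
import math
import random

def log_target(x):
    # Unnormalized log-density of a standard Gaussian.
    return -0.5 * x * x

x = 0.0
samples = []
for _ in range(10_000):
    proposal = x + random.gauss(0.0, 1.0)   # random-walk proposal
    # Accept with probability min(1, target(proposal) / target(x)).
    accept_prob = math.exp(min(0.0, log_target(proposal) - log_target(x)))
    if random.random() < accept_prob:
        x = proposal
    samples.append(x)

mean = sum(samples) / len(samples)
print(f"sample mean ≈ {mean:.3f} (should be near 0)")
```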
Well, me neither; I never made it out of Husserl and definitely have no background in AI. But it was a really interesting read for me.
Ah, I think you’re looking at representation differently. Maybe… construing it too narrowly? Any transducer “represents” in that sense (under realist interpretations), but this is not the concern. The concern is that to the cognitive agent engaged in interacting with the world, the transducer is not a representation of the world, it is the world. The point is the sort of subconscious oneness that we experience when we master actions (speaking, running, driving). Focusing on the transducers (eyesight, various nerves in the skin, muscle motion) as intermediates leads us to fail to perform well. It is only when we ignore their role as transducers and live in the transduced world—abandon representation, or maybe instead reify it?—that we achieve our aims. The representation is an impediment to activity, not a substrate to cognition. It is not a question of using photons to stimulate a semiconductor and generate a voltage; it is a question of representing the world through that voltage by imposing a frame (a set of rules) as the substrate upon which AI would act.
I disagree with Kurzweil’s prediction that we will have human-level AI (whether via simulated or replicated processing techniques) by 2029 (or 2045, if he changed his guess).
And how the brain works can be important to this discussion: if the brain performs 10^27 operations per second (based on the quantity of microtubules in the brain) instead of 10^16, then he had better add more than a couple of years to his guess.
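For scale, here’s a quick back-of-the-envelope on that gap; the ~2-year doubling cadence is my assumption, not a figure from Kurzweil.

```python
# Back-of-the-envelope: extra Moore's-law time needed if the brain does
# 10**27 ops/sec rather than 10**16. The 2-year doubling time is an assumption.
import math

gap = 10**27 / 10**16                 # extra factor of compute needed: 10^11
doublings = math.log2(gap)            # ≈ 36.5 doublings
years_per_doubling = 2                # rough Moore's-law cadence
print(f"{doublings:.1f} doublings ≈ {doublings * years_per_doubling:.0f} extra years")
```

On those assumptions, the revision is measured in decades, not years.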