Technological singularity, why would it NOT happen?

One of the reasons I think AI researchers are barking up the wrong tree is that they are not including the important feedback loop of the program looking at itself. This is different from adaptive systems and from systems that merely change weights based on data. Your subconscious is smart but not intelligent, in that it solves problems without introspection, much like a smart dog does. Your conscious mind thinks about what it is thinking about.
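
To make ‘looking at itself’ concrete, here is the crudest possible sketch of it in Python (my own toy illustration, with a made-up solve() stand-in): a program that reads and measures its own source code, which is something a weight-adjusting adaptive system never does.

```python
# Toy sketch: a program inspecting its own definition.
import inspect

def solve(x):
    """A stand-in for whatever problem-solving the program does."""
    return x * 2

# The program reads its own source -- the crudest 'feedback loop
# of the program looking at itself'.
source = inspect.getsource(solve)
print(source)                                    # it sees its own code
print("tokens in my own definition:", len(source.split()))
```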

BTW, data mining is an application of neural nets, not a driver of them. I’m unaware of many brute force AI approaches - they have traditionally been heuristics. The problem is that heuristics are often applied to specific problems, and, when they are, algorithmic approaches nearly always beat them.

I suspect that the attempt at AI involves, in part, putting it into a box that needs to move itself around, needs to understand its environment, and needs to perform certain important tasks for itself lest it ‘die’ by running out of battery power, while being rewarded by ‘reproducing’ in some fashion. It may ultimately need some sort of driver that allows natural selection to evolve better and better programs. At least that approach worked for biological entities.
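
Something like the following toy loop is what I have in mind (entirely my own framing, with a made-up lifespan() fitness): agents with a single evolved trait drain battery each step; the short-lived half ‘dies’ and the long-lived half ‘reproduces’ with mutation.

```python
import random

def lifespan(gain):
    """Hypothetical agent: how many steps it survives before the battery dies.
    'gain' is the single evolved trait; better gain means slower drain."""
    battery = 100.0
    steps = 0
    while battery > 0 and steps < 1000:
        battery -= max(0.1, 2.0 - gain)   # higher gain -> slower drain
        steps += 1
    return steps

# Random initial population; each generation, the long-lived half
# 'reproduces' with mutation and the short-lived half 'dies'.
population = [random.uniform(0.0, 1.0) for _ in range(20)]
for generation in range(50):
    population.sort(key=lifespan, reverse=True)
    survivors = population[:10]
    children = [g + random.gauss(0, 0.05) for g in survivors]
    population = survivors + children

print("best lifespan after selection:", lifespan(max(population)))
```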

I was half joking when I brought up the 3 laws, but only by half. It’s almost off topic here, but it might still be a good idea to see whether there is a way to integrate such laws, despite the great challenge that presents. Actually, even were it possible, they might not stop a singularity. Indeed, they might help produce one were AI goals to be similar to our own.

I think the most likely showstopper is that technological development is simply not always as linear as Moore’s law. Sometimes a field starts with breathtaking breakthroughs and then goes for decades with only incremental improvements. Perhaps Moore’s law itself will hit a wall before long. Regardless, there is a long history of great developments that live up to the hype for only so long, and it will take a perfect storm of separate developments to bring on a singularity. Also common are naive assumptions that ignore greater difficulties unforeseen by the prognosticators, ones which often seem pretty obvious in hindsight once they’ve been run into. The singularity, while perhaps a real possibility, seems to me more of a ‘best case’ scenario than a hard certainty.

Careful now. That’s how you end up with the runaway satyric robot seen in Robot Chicken.

But biological evolution is very slow. There was a paper in 1959 from a researcher at IBM on what would today be called a GA; it evolved machine-language programs and got a few that actually did something. In my field there are GA papers all the time, but no one is using GAs for real work, since algorithmic approaches always win.
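
For the curious, a minimal GA of roughly that flavor fits in a few lines of Python (my own toy, not the actual 1959 setup): evolve a short ‘program’ of arithmetic ops until running it on an input register hits a target value.

```python
import random

# Tiny instruction set: each 'program' is a fixed-length list of these ops.
OPS = [("inc", lambda x: x + 1), ("dec", lambda x: x - 1),
       ("dbl", lambda x: x * 2), ("nop", lambda x: x)]

def run(program, x=1):
    """Execute the program on a single register starting at x."""
    for _, op in program:
        x = op(x)
    return x

def fitness(program, target=42):
    return -abs(run(program) - target)   # closer to target is better

def mutate(program):
    p = list(program)
    p[random.randrange(len(p))] = random.choice(OPS)
    return p

population = [[random.choice(OPS) for _ in range(8)] for _ in range(50)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == 0:       # found an exact program
        break
    parents = population[:10]
    population = parents + [mutate(random.choice(parents)) for _ in range(40)]

best = population[0]
print([name for name, _ in best], "->", run(best))
```

The 1959 version mutated actual machine instructions, which is vastly harder; this toy only shows the select-and-mutate loop itself.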

Someone, it might have been Gordon Bell, showed that Moore’s Law can be extended backwards in time, to well before the invention of the transistor. I suspect it will continue to hold when we move to a new technology also. One reason it works so well today is that technology roadmaps are based on it, so it has become a self-fulfilling prophecy.
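
The extrapolation itself is trivial arithmetic: with a doubling every ~2 years, projecting the curve forwards or backwards is just a power of two (the numbers below are illustrative, not Bell’s).

```python
def moores_law(count_now, years, doubling_period=2.0):
    """Scale a device count forward in time (negative years projects backward)."""
    return count_now * 2 ** (years / doubling_period)

# Going back 40 years from a ~1e10-transistor chip drops the curve by
# 2**20, about six orders of magnitude:
print(moores_law(1e10, -40))   # ~9.5e3
```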