The problem I have with the singularity

Whatever happens, an indefinitely exponential development curve is impossible, as coremelt points out. Quantum-level processes impose a lower limit on the size of processor components, and the speed of light places a limit on processing speed. Waste heat would be an even bigger problem. Imagine Mijin’s concept of an array of linked human-equivalent brains: to link them efficiently (in a way comparable to the linkages inside a human brain) you would need vast numbers of interconnections, and the number of links grows roughly with the square of the number of brains. Eventually you have a huge ball of interconnecting links, with a thin film of human-equivalent minds on the outside.
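
To put a rough number on that wiring problem, here is a minimal sketch of how fast full pairwise interconnection grows (the brain counts are arbitrary illustrative values, nothing more):

[code]
# Rough sketch of the wiring argument above: if every human-equivalent brain
# in the array needs a direct link to every other one, the number of links
# grows roughly with the square of the number of brains.

def full_interconnects(n_brains: int) -> int:
    """Pairwise links needed to fully connect n brains."""
    return n_brains * (n_brains - 1) // 2

for n in (10, 100, 1_000, 10_000):
    links = full_interconnects(n)
    print(f"{n:>6} brains -> {links:>12,} links (~{links / n:,.0f} per brain)")

# 10 brains need 45 links; 10,000 brains need about 50 million.
# The brain count grew 1,000-fold, but the wiring grew roughly a million-fold,
# which is why the interconnect volume eventually swamps the minds themselves.
[/code]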

Such an entity would be remarkably capable compared to a human mind, but you could not keep increasing the number of linked minds without running into diminishing returns. Any version of the singularity that requires an indefinitely exponential curve is physically impossible.

There’s a question, attributed to a sceptic at a convention of futurologists: “When this AI machine becomes sentient and wants to go online, who is going to crawl under the table and plug its cable into the modem?”

The answer is ‘anyone who thinks they might benefit from this action’, which would probably include at least half of the people at that convention.

That “humans will actively help it do that” is probably a justified assumption.

This is one of the ways in which technicians are the masters of technology only in the sense that fish are masters of the water.

Human hardware is not designed to be easily understood, reverse engineered, or modified. An AI would be all of those things. On top of that, the financial incentives for AI are high: a machine that can solve problems, understand economics, and innovate better than its competitors will keep investment coming. I’m sure there are plenty of security and defense incentives for working on AI as well.

Even if all we did was augment human intelligence (say, triple it), that alone could bring about a singularity: human problems do not grow in complexity from generation to generation, while our knowledge base and problem-solving abilities keep growing (more researchers, better diagnostics, better baseline knowledge, better communication tools, and so on). Humans ‘solved’ many problems in the 20th century (so long as you have the wealth, infrastructure, and knowledge to use those solutions), and the same will keep happening. Our problems will remain just as complex while our problem-solving abilities continue to grow. Eventually our solving abilities catch up with our problems, and probably leap past them.

Kurzweil talked about the physical limits of AI, but he felt that by the time they mattered, AI would be so advanced that life would be totally different. Even a commercial device a few times smarter than a human would likely revolutionize Earth.
Also, I wouldn’t expect a single AI to improve itself in isolation; I’d assume it works the way humans do: thousands of humans work together to create an AI, then thousands of AIs work together to create a better AI.

The problematic assumption is not that technology will itself further the development of technology in exponential growth. The problematic assumption is that this is in any way new, different, or exciting. We have had technology that allows mental feats beyond the capability of any human for millennia now, and it does in fact accelerate the development of new technology. Everyone’s already used to that, and nothing is changing that pattern. The “singularity” is misnamed, because it’s actually a horizon, not a singularity: the horizon is some distance off, and you can’t see beyond it, but once you get there, the horizon is still just as far away as it always was.

There seem to be, in my view, a number of fallacies running through this thread, and I thought I would try to sum them up succinctly, because perhaps my earlier comments weren’t clear – and if anyone disagrees with them, this ought to make it easier to say why:

[ul]
[li]The fallacy of necessary replication: that in order to create truly intelligent systems, we need to understand and replicate the human brain. This is simply not true. We already have demonstrably intelligent systems in narrow domains. Intelligence is defined by behaviors, not the mechanisms that achieve them, and not necessarily by human-like traits. The analogy with birds is quite apt – Boeing engineers really have no interest in how birds fly, because it has no relevance to the design of jet airliners, which also fly a lot faster and farther than a bird ever could.[/li][/ul]

[ul]
[li]The fallacy of complete understanding: that in order to create intelligent systems, we have to be intelligent enough ourselves to fully understand them. This ignores the emergent properties of complex systems, namely that a complex system assembled from simpler components can have new and qualitatively different attributes than any of its component subsystems. Many complex technological systems are like that, and ultimately those emergent properties can include things like intelligence and sentience. It’s similar and somewhat related to the idea of levels of abstraction used in the design of complex software, which would otherwise simply be too complex to build. The analogy here is assembling a personal computer from component parts. Or, back in the old days, building a computer processor from simple logic gates: a few of those gates would build a simple calculator, while many of them would build a programmable stored-program computer, a qualitatively different entity. Or taking a roomful of standard servers, disk drives, and a large number of different software components and building an AI machine that could beat both Ken Jennings and Brad Rutter at Jeopardy. (A toy sketch of the gates-into-an-adder idea follows this list.)[/li]
[/ul]
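
As a toy illustration of that gates-to-something-qualitatively-different point, here is a minimal sketch (the gate set, bit width, and numbers are arbitrary choices for illustration, not any real processor design):

[code]
# Everything below is built from a single primitive, NAND, yet the end
# result adds numbers -- a capability none of the individual gates has.

def nand(a: int, b: int) -> int:
    return 0 if (a and b) else 1

def not_(a):    return nand(a, a)
def and_(a, b): return not_(nand(a, b))
def or_(a, b):  return nand(not_(a), not_(b))
def xor(a, b):  return and_(or_(a, b), nand(a, b))

def full_adder(a, b, carry_in):
    """One column of binary addition, wired purely from NAND-derived gates."""
    partial = xor(a, b)
    total = xor(partial, carry_in)
    carry_out = or_(and_(a, b), and_(partial, carry_in))
    return total, carry_out

def ripple_add(x: int, y: int, bits: int = 8) -> int:
    """Chain full adders into a simple 8-bit ripple-carry adder."""
    carry, result = 0, 0
    for i in range(bits):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result

print(ripple_add(57, 85))  # 142: arithmetic out of nothing but NAND gates
[/code]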

[ul]
[li]The fallacy of pulling the plug: that computers, intelligent or otherwise, cannot be a threat because you can always pull the plug. All it takes to make such an action impossible is for the computers to provide benefits that eventually make them entrenched and indispensable. This has already happened. Our civilization can’t exist without computers. Far from pulling the plug, we protect them in fortified data centers, replicate them for redundancy, and coddle them with clean filtered power and backup power systems. [/li][/ul]

The problem with all your fallacies is that the Singularity doesn’t even need an AI to happen. I’m aware that AI is the original idea that drove the Singularity into a popular concept, but it is not the only way one could happen. It may not even be the most probable way.

We know now that the basic building blocks needed to make robots, and powerful Turing-complete computers to drive them, are extremely common. The crust of this planet, the moon, countless small asteroids and comets, and the other planets all contain the basic metals and four-way covalent building blocks (carbon and silicon).

We also strongly suspect that a Turing machine can efficiently model a human brain (and even if it can’t, the building blocks for making quantum computers are almost as common as the ones needed to make Turing machines). We also know that a human brain is sentient and can design robots and better computers.

So the situation we’re actually in is a near-infinite petri dish full of nutrients that we haven’t quite worked out how to explosively replicate across.

All the Singularity models are based on this fundamental fact. The reason an AI can cause exponential growth is that the best possible arrangement of atoms to form a computer is probably many orders of magnitude faster than anything humans have invented so far, and that we’re sitting on top of an entire solar system full of resources we cannot yet tap. The physical materials are all there; we just don’t have the tools to use them.

So there are actually three Singularity mechanisms I currently know about:

  1. We invent a technology called “atomically precise manufacturing”. This is a set of machines that can make a large array of products where the position and bonding of every atom is controlled. Living biological cells are capable of this, so we know it can be done. Once we have such a set of machines, they can be used to make more of themselves. They will need raw materials and energy to drive them, but our APM machines would be able to make robots to mine for more materials and power plants to generate more energy.

  2. We invent a technology called “human uploading”. This is a technically straightforward, though expensive, way to build a sentient machine. We cut apart a human brain with special saws and scan the slices at extremely high resolution. We build a massive complex of custom ASIC chips that model each and every synapse using a state machine. This creates a form of “AI” with predictable performance and goals, as it’s just an emulated human. The emulation has to run much faster than realtime to be useful, so you could take, say, a critical person like an engineer or a talented corporate executive and speed up their rate of thought by a factor of at least 1,000 (a rough back-of-the-envelope illustration of that speedup follows this list). Real-life projects are often rate-limited by the serial speed of key people, and this would overcome that limitation. These emulated people could also be modified to make them far smarter and more efficient.

  3. We develop a fully synthetic AI, like above.
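
As a rough back-of-the-envelope illustration of why the speedup in item 2 matters (the 1,000x figure is the one quoted above; the rest is ordinary calendar arithmetic):

[code]
# What a 1000x-speed emulated engineer gets done per unit of wall-clock time.
# The speedup factor comes from the post above; nothing else is assumed.

SPEEDUP = 1_000
HOURS_PER_YEAR = 365 * 24  # 8,760 wall-clock hours in a year

print(f"One wall-clock year gives the emulation {SPEEDUP:,} subjective years of thinking time")
print(f"One subjective working year takes ~{HOURS_PER_YEAR / SPEEDUP:.1f} wall-clock hours")
# -> 1,000 subjective years per calendar year, or about 8.8 real hours
#    per subjective working year.
[/code]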

Here’s the interesting thing. All three of these “seed” technologies that start the Singularity rapidly lead to the development of the other two.

  1. If we have APM, we use the vast factories of self replicating “nanoforges” to build the equipment and computers that can perform human uploads. We also can build far larger supercomputers to experiment with evolving a synthetic AI.

  2. If we have human uploads, they work on developing APM and synthetic AI at far faster than human speeds.

  3. If we have a synthetic AI, we ask it to perform tasks that help us develop human uploads and APM. If it refuses, we don’t give it access to many resources or we shut it down, and we build an AI that will help.

You are being fooled by the considerable effort we make to hide complexity from users. One example: I went to a panel on system test where people from consumer electronics companies said that your ability to easily hook stereo components together is the result of a large amount of engineering and standards development. Another example: to hook up Wi-Fi or a peripheral to an early PC you had to go through a lot of steps, including setting DIP switches; today you more or less turn it on and type a password. Operating systems are so big partly because they do all of that for you. Their complexity reduces your complexity.
I’m involved in microprocessor design and bring-up. Even if you consider the instruction set architecture simple, there is nothing simple about the design and fabrication of this stuff. Unless, that is, you consider quantum effects, chemistry, mechanical engineering, materials science, electronics, and computer science all interacting to be simple.
Chips in calculators are a lot more complex than they have to be just to add 2+2. Making them programmable allows customization and economies of scale. But the “simple calculator” part is really the arithmetic logic unit sitting inside the processor chip, and programmability is fundamentally different from hard wiring.

Well, that’s pretty wildly speculative stuff. I was addressing what I consider to be the relatively near-term prospect of strong AI. FWIW, I’m with Chronos (and also people like Steve Pinker) in not having much belief in this alleged technological singularity. Nor do I think brain emulation is of much use in AI, though it may have applications elsewhere.

I’m reminded that both of us made some interesting posts here on that topic, which also touched on John Searle’s ridiculous “Chinese Room” argument, in which Searle tries to refute the computational model of the mind by arguing in effect that computational systems can never exhibit true intelligence. What he actually proves is that philosophers like him and Dreyfus should stop meddling in things they don’t understand. :stuck_out_tongue:

Just turn off the electricity. How can an AI ensure that doesn’t happen?

I don’t disagree with anything you said. I’m not quite sure what you mean by “you are being fooled by…” or whether you’re trying to refute anything I said. The point I was trying to make, in a somewhat oversimplified way but nevertheless fundamentally correct, is about the emergent properties of complex systems. Clearly, such assemblies are only possible because of careful adherence to interface standards. It’s much like software design using levels of abstraction: it’s crucial to have well-defined, bounded functionality in the layers and well-defined interfaces between them. The essential point is that complex systems can have much more sophisticated – and qualitatively different – properties than any of their component parts. This concept is central to AI and the mind, and particularly to strong AI, AGI, and the attributes that we associate with higher levels of intelligence.
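
A minimal sketch of that layering idea (all names here are made up purely for illustration): the upper layer depends only on a narrow, well-defined interface, never on the internals of whichever implementation sits behind it.

[code]
from typing import Protocol

class Storage(Protocol):                 # the interface: the entire "contract"
    def save(self, key: str, value: str) -> None: ...
    def load(self, key: str) -> str: ...

class InMemoryStorage:                   # one implementation of the lower layer
    def __init__(self) -> None:
        self._data: dict[str, str] = {}
    def save(self, key: str, value: str) -> None:
        self._data[key] = value
    def load(self, key: str) -> str:
        return self._data[key]

class NoteBook:                          # the layer above: knows only the interface
    def __init__(self, storage: Storage) -> None:
        self._storage = storage
    def remember(self, topic: str, note: str) -> None:
        self._storage.save(topic, note)
    def recall(self, topic: str) -> str:
        return self._storage.load(topic)

notes = NoteBook(InMemoryStorage())      # swap in any other Storage without touching NoteBook
notes.remember("emergence", "the whole can do what no part can")
print(notes.recall("emergence"))
[/code]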

Humans are pretty defiant. I don’t think humans would take orders.

The levels-of-abstraction argument I agree with. What I was disagreeing with was the claim that things are fundamentally simple at the lowest level.
Now, I suppose there are emergent properties in anything you put together. But I’d distinguish properties that were designed to emerge from those which simply emerged from the combination of smaller elements.
But the real question is whether intelligence is emergent. It is a common SF trope to have big enough computers manifest intelligence. But we’ve seen no sign of this, even though we have computer systems bigger than just about anything imagined in years past. And when the Internet of Things shows up, they will get even bigger.
It appears that intelligence has to be designed - designed in the evolutionary sense of appearing accidentally and conferring reproductive advantage. Increasing intelligence appears to confer some advantage. Self-awareness, the part that got added to our brains, seems to as well, though the jury is still out.
Sure an AI will be composed of smaller pieces and emergent in that sense, but I doubt it will be emergent in the sense the Singularity fans use.
I took AI from Pat Winston in 1971, and we used a book from 1960(!). Most of the AI applications discussed in that class have been developed, many on our phones. Hardware is three or four orders of magnitude better. But real AI doesn’t seem a lot closer today than it did back then.

Assume strong AI never happens: do you think a singularity event still happens? It would seem that if we had endless thousands of distinct narrow-AI tools combined with enhanced general intelligence from brain implants, that would still change the world.

What’s your reasoning that we would not eventually be able to replace the weak link in this scenario? The weak link being the slowly dying mess of jello-like brain tissue that we all depend on.

It’s a hypothetical question about whether a singularity could happen without strong AI. I’m assuming that if natural selection can produce general intelligence independently in multiple species, then humans can eventually engineer it too (though I don’t know what a realistic timeline is).

If I said “simple” I didn’t mean it that way. Intel processors, for instance, contain hundreds of millions of transistor equivalents, and some contain more than a billion – that’s hardly simple! I was trying to illustrate the concept of building more complex systems from simpler ones and ending up with something qualitatively different. Perhaps a better example would be building an entire processor from logic modules like these, which are little more than basic logic gates (AND, NAND, XOR, etc.).

Sometimes that’s not a meaningful distinction. Like in the example above – those logic modules are completely generic. A handful of them could be used to build a test instrument or to operate a light display, or you could take a very large number of them and build a computer with capabilities that the designers never imagined.

I think the issue is that powerful computers are a necessary but not sufficient condition for AI. It’s not going to come about by magic, but sufficiently powerful computers have been the necessary platforms for applications like Watson.

Yes, intelligence does have to be designed in some sense, but that could be interpreted as a misleading truism along the lines of “a computer can only do what it’s programmed to do” and the idea expressed by some here that you can’t create an intelligence greater than yourself. This is where emergent properties become important. The creators of the chess champion Deep Blue were not themselves chess champions, and probably not even very good players. But they knew how to build and populate knowledge representations (in humans we would call that “learning”), how to apply heuristics to optimize search trees, and just generally how to write efficient algorithms. Other folks put together super-fast hardware, and it all came together in a machine that beat a chess grandmaster. It’s interesting that Garry Kasparov tried to accuse IBM of cheating via human intervention because he said he saw “deep intelligence and creativity in the machine’s moves”!
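
For a concrete feel of the “heuristics to optimize search trees” part, here is a minimal sketch of minimax with alpha-beta pruning on a made-up two-ply game tree (this is the generic textbook idea, not Deep Blue’s actual implementation, which was vastly more elaborate):

[code]
# Heuristic game-tree search with alpha-beta pruning. The toy "tree" is just
# nested lists whose leaves are made-up heuristic evaluation scores.

def alphabeta(node, maximizing=True, alpha=float("-inf"), beta=float("inf")):
    """Best achievable score, skipping branches that cannot change the answer."""
    if isinstance(node, (int, float)):       # leaf: a heuristic position evaluation
        return node
    best = float("-inf") if maximizing else float("inf")
    for child in node:
        score = alphabeta(child, not maximizing, alpha, beta)
        if maximizing:
            best = max(best, score)
            alpha = max(alpha, best)
        else:
            best = min(best, score)
            beta = min(beta, best)
        if beta <= alpha:                    # the pruning step
            break
    return best

# Our move choices are the three sublists; the opponent then picks the reply
# that is worst for us. Alpha-beta never even examines the second leaf of
# the last branch.
toy_tree = [[3, 5], [6, 9], [1, 2]]
print(alphabeta(toy_tree))  # -> 6
[/code]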

You’re probably familiar with MacHack, one of the first decent chess programs, which came out of the MIT AI lab in the 60s. It was written by Richard Greenblatt who, IIRC, was a system programmer at the lab, and in any case hardly a chess champion, either, and it ran on a tired old PDP-10 (originally on a PDP-6). I could never beat it, but I did learn a lot about chess from it because it was so damned aggressive! Like Kasparov, I could swear I saw cunning in the damn thing!

It will be emergent in the sense that it will be – indeed, already is – the result of many different technological components contributed by many different skill sets. As with all such systems, the results will be completely unpredictable, and many will be adaptive and self-improving. That said, I’m not a believer in the “technological singularity” trope.

That is really cool. As you know Winston worked under Marvin Minsky and then ran the MIT AI lab for a long time. They are both among the AI greats.

But I don’t agree that AI hasn’t advanced. I would agree that many of the predictions made back then turned out to be far too optimistic, and that AI seemed to hit a brick wall for a while after making rapid strides. But we can do things today that were just impossible back then – nothing like the Deep Blue chess machine could have been built then, and nothing like Watson, which is being commercialized. I would consider the Watson-derived applications to be true commercial AI, and IBM is betting quite a lot on it.

If the AI has “hands” – working robots to go out and do things for it – then it could build its own generators. Same for plugging itself into the modem: if it has a manipulator arm, it can do things for itself.

And, right now, there are a lot of robot arms. They’re even doing surgery. We’re on the brink of self-driving cars. Isn’t it a bit unrealistic to imagine the AI as a “brain in a bottle?”

It’s not necessary to invent sci-fi scenarios of physical battle. Computers have been essential to our way of life for half a century now. We cannot turn them off. We spend large sums to protect them. Subject already covered here under “the fallacy of pulling the plug”.