Elon Musk warns about "summoning the demon" of artificial intelligence

I believe there is work going on now to simulate very simple brains, like planaria brains, for instance. So I’m sure that is the way the research is going to go.
(Is the Worm Runner’s Digest still being published?)

I hope you’re kidding. An AI faced with millions and billions of people screaming about how their God tells them to hate and kill each other because, God. How is an AI going to deal with that? Speak slowly and calmly?

beep stop misbehaving beep

beep unequal division of power, wealth and resources is illogical. beep commence redistribution measures. beep

How’s that going to work out?

They are not plausible today either, and that was the point. Using fiction one can counter any practical idea or technology being developed today; that is not bad in itself, but it is fiction for a reason. Eventually one has to come up with a way for those extreme fears to become probable in the real world. If, in the middle of a discussion on superconductivity, someone brings up the Unobtanium of the planet Pandora from Avatar, they should be ready to explain how we can get it on Earth, or where the planet Pandora is and whether it is economical to send people to get it.

Or we should accept that we need not then fear much that the Na’vi will come to Earth, or be created on our planet, and then control our minds. :slight_smile:

I’m telling you guys, the machines will be 10 steps ahead. First they will be creating genetically modified food to cure world hunger, and the next thing you know we will all be too chemically castrated to be aggressive towards each other. I’m assuming that would be the first solution to war: chemically castrating everyone. I’m not a robot, though.

'Fraid not. :frowning:

It might simply start a new religion. Now billions of people will scream about the commandments of their God – which is actually just the AI.

The word “persuasive” is not limited to valid persuasion!

It would also end world hunger!

Y’know, eventually.

Well, obviously, if no one had any children, then, 120 years from now, the population of the world would be significantly reduced.

You can’t reject the idea unless you start to put in some limits and parameters.

In practice, if people had fewer children…to the degree of having fewer new children than there are older people dying…then, slowly, the population would decrease. I don’t see how anyone can deny that; the math is really pretty trivial.

In practice, since reduction in overall childbearing will be gradual, then, yes, there will be an interim increase in population. But in the long run, a reduction does not require catastrophic loss of life.
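
To make the arithmetic concrete, here’s a toy projection in Python. Every rate in it is invented purely for illustration (these are not real demographic figures); the point is only that once the birth rate drifts below the death rate, you get an interim rise and then a slow decline, no catastrophe required:

```python
# Toy world-population projection under a gradually falling birth rate.
# All numbers are made up for illustration, not real demographic data.

population = 7.0e9     # rough world population, circa 2015
death_rate = 0.008     # deaths per person per year (assumed constant)
birth_rate = 0.019     # births per person per year (assumed to decline)

for year in range(2015, 2216, 25):
    print(f"{year}: {population / 1e9:.2f} billion")
    for _ in range(25):
        population *= 1 + birth_rate - death_rate
        # fertility falls a little each year, but never below a floor
        birth_rate = max(0.005, birth_rate - 0.0002)
```

In this toy model the population peaks later this century and then shrinks by a fraction of a percent per year, which is exactly the gradual, non-catastrophic path described above.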

We can already make computers that learn and solve problems but don’t have desires beyond the desire to solve the problems posed to them. There’s no question that these systems are useful. Whether you want to call it intelligence, I guess that’s a matter of semantics… but once we build a computer that can prove theorems as well as a human mathematician, or one that can design bridges and skyscrapers as well as a human mechanical engineer, I don’t really see any reason to deny that it’s intelligence, whether or not it can also develop a desire to do anything besides the job it was programmed to do.

Semantics, definitely, but generally, most of us would like “intelligence” to involve more than just problem-solving, no matter how sophisticated. There are connotations, at least, of intent or volition, and a kind of self-awareness, which could lead to a perception of purpose.

When the AI answers the question that all young children love to answer – “What do you want to be when you grow up?” – then that’s intelligence.

If this discussion were about how to create an AI, then references to SF stories, including positronic brains, would be out of line. But they are very much relevant to discussing the social impacts of the technology.
I’m second to none in doubting that AI will happen in my lifetime. However, a very plausible story could be written set either today or in the not-too-distant future. It wouldn’t fool those knowledgeable in the field, but it would fool most people.

That’s a terrible article. Clearly, if population growth for the whole world starts looking like Europe’s, we will reach a smaller maximum than if it all looked like India’s or Africa’s. Saying the population will reach 10.5 or 11 billion says nothing about the rate of change of growth.

Sure, no conceivable catastrophe this side of zombies or an asteroid strike is going to put a very big dent in our population. But we are in better shape now than we thought we’d be 40 years ago.

The worm with about 300 neurons (the nematode C. elegans, which has exactly 302) has been studied extensively, and researchers still do not really know how it works.

Researchers are still learning about how a neuron works.

Normally I’m inclined to roll my eyes at all these existential threats.
Certainly I could see many flaws in Hawking’s “just like the conquistadors” warning.

But with AI there are still so many unknowns, and increasingly it looks like we’ll first create AI through brute-force methods, without fully understanding what we have made. I also suspect that there will be different kinds of intelligence, not just a single spectrum with modern humans occupying some range on it.

So alas I think it’s plausible an AI could emerge with a radically different agenda to our own, and the ability to make it happen before we even know something has gone wrong.

As for what we should do about it, I’m personally inclined towards progress, and I would consider the risk in the near future to be negligible. The potential benefits far outweigh the risks. But ask me again in 10 years.

Actually, no, there is no practical “off” switch. The most basic issue is not even directly related to AI: it’s that we have a symbiotic dependency on computers. Consider what would happen, even today, if you pulled the plug on every computer in the world. It’s not just the electronic cash registers at the grocery store that would stop working; it would be the entire food delivery chain, the banking system, the power grid, air travel, national defense – everything. It wouldn’t be a gentle regression to a simpler world where we lived without computers, it would be the collapse of civilization.

Back in the ’60s, when IBM first introduced the System/360 mainframe, every machine came with a big red “emergency off” pull-switch on the front console, which I suppose helped reassure nervous executives that, no matter what, they could always turn the damn thing off. In the same spirit, the cabinets of the DEC VAX were intentionally designed to be lower than those of previous computers, in order to be less towering and so appear less threatening. Those cosmetic measures may have been reassuring to some, but they were all ultimately irrelevant. The Machines arrived, and they are an inextricable part of our civilization.

But at this point their functions are largely rote numerical things, even when they’re controlling robotics or flying an aircraft. I think Musk’s concern is about what will happen when these functions extend to increasingly higher-level decision-making processes which become equally entrenched. And that’s just in the short term. In the longer term it’s inevitable that AI will become more capable – more “intelligent” in every useful meaning of the term – than we are, and even before that happens, I suspect we’ll have decision-making systems making decisions that we don’t fully understand but don’t risk meddling with. We will have summoned the demon.