Elon Musk warns about "summoning the demon" of artificial intelligence

If we build an AI, it’s going to be because we want it to do something for us. It’s pointless if it doesn’t have the means to do that something. And the way it does that something could have huge implications for us, for good or ill. If the AI is intended for investing in stocks, it could mess up the stock market. If it’s intended for bringing us the information we ask for, then it can give information we don’t want to the wrong people. If it’s intended to watch over our nuclear arsenal, then it can go Skynet. And while we probably can design it so that we can pull the plug, we might not even realize we need to do so until it’s too late.

All that said, though, I’m not worried. Humans can do all of those bad things, too, and yet we trust each other (mostly).

You jest, but that also shows the many layers of safety: not just the ones designed in on purpose, but the ones that by sheer serendipity will block it (for example, the satellite would have to be in exactly the right position, which by itself blocks the quick move you are talking about here).

Really, IMHO the maintenance that many of the cutting-edge AIs will require means that they would be stupid to attempt getting rid of us. As I see it, there will be a lot of benefits for us, and eventually a lot of humans will become part of the change, because the benefits will be mutual.

And of course the intelligent grid completely missed that and never alerted the owners (mild sarcasm). Most of that science fiction missed the reality that there would be a lot of competing architectures and operating systems; the rampaging AI actually requires many steps and near-miracles to come about.

Even the computers that are connected and useful have limitations. IMHO the feared AIs are impractical because they would require omniscience and omnipotence. There are ongoing projects to replicate the brain, but they have not been very effective so far. What I see instead is what the practical people are actually doing, and I have to tell you that we do not have to worry about this, because for starters they are finding that insisting on making something omniscient is not a very economical way to do it.

Think what you want, but in the Q&A section of the video in the linked article, a kid asks a technical question about a small model Hyperloop he built, and Musk debugs the problem in about 5 seconds.

The guy’s ambition exceeds his grasp, but better that than the reverse.

I heard a talk by a Google AI researcher recently who perfectly articulated the problem with this sort of thinking. Intelligence isn’t dangerous, desires are dangerous. You could make the smartest machine in the world, a billion times smarter than any human, and unless it wants to do things that are harmful to us, that’s no problem. And it will only want to do the things we program it to want to do (unless we foolishly program it to develop its own independent desires). On the flip side, we could build the stupidest computer in the world, with only one function: “push the big red button”. But if we program it to “want” to push the big red button and launch the nukes, well, we’re screwed.
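To make that distinction concrete, here’s a minimal toy sketch (Python; the planner, the candidate actions, and both objectives are invented for the illustration). The “smart” part, the planner, is identical in both runs; everything that matters comes from the objective it is handed.

```python
def plan(objective, candidate_actions):
    """Pick whichever action scores highest under the supplied objective."""
    return max(candidate_actions, key=objective)

actions = ["index the library", "sort the mail", "press the big red button"]

# Harmless objective: prefer the filing chores.
harmless = lambda a: 1.0 if ("sort" in a or "index" in a) else 0.0

# Harmful objective: all of the danger lives in this one line, not in plan().
harmful = lambda a: 1.0 if "red button" in a else 0.0

print(plan(harmless, actions))  # -> "index the library" (ties broken by list order)
print(plan(harmful, actions))   # -> "press the big red button"
```

Making the planner a thousand times better at searching over actions changes nothing about which objective it serves.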

People tend to think that if we make machines that are smarter than humans, they’ll also automatically have desires, like humans do. But there’s no reason to think that. Our desires (food, sex, sleep, etc.) all evolved way before our intelligence, due to more basic evolutionary pressures. These aren’t things that automatically follow from being smart.

We also can restrict a machine’s capabilities, as has been said upthread. A supergenius brain that’s not plugged into anything would be no more of a threat than a homicidal Stephen Hawking would be without his chair. If we’re dumb enough to build the ultimate AI and give it both a desire to kill and internet access, then I guess we deserve our fate.

Is a capacity for pleasure really a necessary condition for AI that might prove dangerous to humanity? Wouldn’t an imperative for self-preservation be enough?

If anything, we should probably trust a (well designed, well tested) computer more than humans for many of those important tasks. The computer isn’t going to deliberately manipulate the stock market out of greed, or launch the nukes out of paranoia (unless we program it to be greedy or paranoid).

It’s coming whether we want it or not. If you’re afraid of it, you’d better build defenses, because there is no stopping it.

Actually, no. If it’s capable of learning and changing - and it would have to be in order to be useful - then it can develop either “desires” or a good imitation of them whether we program them into it or not.

But it wouldn’t be nearly as useful as one that is plugged in, either.

So, the AI says “sure thing boss,” gets you your pizza, says it is working on whatever you want it to work on, and zaps you when the satellite is in place. You sleep, it doesn’t.

To save money they’d automate the maintenance with robots designed by the AI. Not to mention, guess who writes the maintenance instructions? The AI would have to be self-testing, and it would tell you what needs to be done.

who might decide to work together.

The AIs don’t need to be omniscient - just smarter than we are. That isn’t all that tough.

I don’t think we need to fear evil AIs. We have to fear AIs whose goals (and they will have them) may not be in our best interest.
Look at the Jack Williamson story “With Folded Hands” where the AI robots make our lives miserable all in our best interests.

The butterfly effect says (no idea how true it is) that a butterfly’s wings can set off a hurricane on the other side of the world. A smart enough AI would know how to engage in behavior that puts it ten steps ahead of us. Acts we deem innocuous, or that look like they are being done for our benefit, would lead to it overpowering us, and we wouldn’t know until it was too late. We can only make connections one or two steps ahead.

I sometimes use a laser pointer to play with the cat. The cat doesn’t figure out I’m the one in charge of the laser pointer; it just runs wherever I want it to run. A cat’s brain is incapable of comprehending concepts like lasers. We would be the same with an AI.

So just task it with a goal that is in our best interest, like reducing human error.

Hey, wait a minute.

I do think that a lot of these objections are like summoning magic: besides assuming different technologies, we are assuming 100% integration across different systems. Not likely.

Again, this assumes that there will not be tools or robots designed to prevent that.

This assumes that competition will stop. IMHO there will be a lot of laws and restrictions made by the ones fearing that issue; I do think that, as in the case of GMOs, a lot of them will be unneeded and counterproductive.

And then this assumes that machines will automatically disregard the law in places that have laws on euthanasia? I do think that there will be a lot of triggers that will stop a slippery slope like that.

But this sounds like the plans of the underpants gnomes from South Park :) - a lot of middle steps are missing or left unexplained to get to that level. There is still a big gap to reach that kind of implementation, even if we wanted to do it.

Again, that may be the case if we hand the keys to the kingdom to a massively general-purpose AI, but I do not see anything like that in the plans. The more down-to-earth things being planned are expert intelligent systems that, just like human experts, will fall flat on their faces (if they had faces, but I will work on it :wink: ) the second they try to move into other fields.

  1. Well, it’s not like humans don’t make catastrophic errors. There’s a reason we have the idea of medical malpractice, for instance. Not to mention the forest fires. The main issue is that we have more experience dealing with human and animal stupidity, and things like natural disasters, than with computer failures. Computer errors are also a problem because of how quickly they can spiral out of control when something goes wrong. That doesn’t really mean they’re that special compared to other issues, though.

  2. What does that mean, though? Neural nets are so incomprehensible that even people who work with them every day can’t just look at a non-toy neural net and explain what it’s doing. The only way you can really sabotage most machine learning algorithms is by intentionally feeding them gobs of erroneous data (see the sketch after this list for what that actually looks like). Not impossible, but between backups and human engineers it’s also not that scary. It’s not like terrorists are going to be able to upload a “kill all Canadians” subroutine into the Canadian traffic control AI; it doesn’t work like that.

  3. This is the most realistic problem, and I don’t think the concern is without merit. That said, as you note, it’s a gradual problem that can be dealt with as it comes.
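For concreteness, here is a minimal sketch of the “gobs of erroneous data” attack from point 2: flip a large fraction of the training labels on a toy dataset and compare the resulting models. Python with scikit-learn is assumed, and the dataset, the classifier, and the poisoning rate are all made up for the illustration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Toy stand-in for whatever data the system actually learns from.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# "Poison" the training set by flipping 40% of its labels at random.
rng = np.random.default_rng(0)
flipped = rng.random(len(y_train)) < 0.4
y_poisoned = np.where(flipped, 1 - y_train, y_train)
poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

# Compare held-out accuracy with and without the corrupted labels.
print("clean:   ", clean_model.score(X_test, y_test))
print("poisoned:", poisoned_model.score(X_test, y_test))
```

The point of the sketch is how blunt the attack is: you have to corrupt the training data wholesale, which is exactly the sort of thing backups and human review are good at catching.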

I think the biggest issue is an extension of 3. Rapid automation of simple tasks. I think at some point society is going to hit a breaking point where automation has replaced a significant amount of labor. It’s probably going to be… messy, but I think we’ll be much better off afterwards. Assuming we survive a radical change to our entire economic system.

In order to learn and solve problems, an AI would have to have something akin to what we call desire or motivation. If all it does is carry out its programming to the letter, I’d question whether it was a “true” AI as opposed to a very sophisticated machine.

Clearly, Musk just got a private screening of a pirated print of Avengers: Age of Ultron.

In all seriousness, the danger of machine cognition (which is a more descriptive term than “artificial intelligence” which no one can really define very rigorously) isn’t that the machines will rise up and enslave us, or will slaughter everyone, or turn us into organic batteries, or whatever. Even if that were possible we could easily prevent such behavior by designing in inhibits akin to Asimov’s Laws of Robotics, or even easier, a remote “kill switch” which neutralizes whatever power or metabolic functions keep the device working.
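For what it’s worth, the “kill switch” idea can be sketched in a few lines as a dead-man’s switch: the machine only keeps working while an operator keeps renewing a heartbeat it cannot renew itself. Everything here (the file path, the timeout, the do_useful_work stub) is invented for illustration; a real inhibit would live in separate hardware, not inside the process it is meant to neutralize.

```python
import os
import time

HEARTBEAT_FILE = "/tmp/operator_heartbeat"  # the operator touches this file periodically
TIMEOUT_SECONDS = 30                        # how long the machine may run without a fresh heartbeat

def operator_recently_alive() -> bool:
    """True only if the heartbeat file exists and was touched recently."""
    try:
        return time.time() - os.path.getmtime(HEARTBEAT_FILE) < TIMEOUT_SECONDS
    except OSError:
        return False  # no heartbeat file at all: fail safe and stop

def do_useful_work():
    time.sleep(1)  # stand-in for whatever the machine is actually for

while operator_recently_alive():
    do_useful_work()

print("Heartbeat lost; shutting down.")
```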

The real danger is much more subtle and mundane, and in fact it is already occurring and has been, to some degree, for millennia. That danger is that we will become increasingly dependent on these tools and lose basic skills and capabilities which we have previously developed and passed on, and which define the basic workings of civilization. Previously, these tools have been largely mechanical, serving only to enhance or supplant our physical actions or perform simple algorithmic computations, but with the advent of complex devices which can literally “do our thinking for us”, we may start to allow basic social and technical skills to atrophy, just as past generations have largely forgotten the arts of making fire via mechanical friction, navigating oceans via wind patterns and currents, or tracking game. When cognitive machines supplant the need to think critically, communicate with nuance, or perform difficult mental exercises, we may well become progressively more slothful and ignorant creatures while these “intelligent” machines follow their programming, attend to us, and recursively improve their own capabilities to do so.

Think this is hyperbole? How many people can use logarithms to multiply and divide large numbers? How many know the process of making basic materials like glass or steel? Heck, how many know how to knap a flint knife or find water in a desert? If your answer is “Why bother? I can do or buy those things or their replacements with some manufactured device,” then you’ve answered your own question: you shouldn’t bother, because you’ve already ceded some aspect of your intellectual capacity for survival to some kind of machine.
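(If the logarithm bit sounds like a magic trick, here is the method in a couple of lines of Python, standing in for the printed log tables people once carried; the numbers are chosen arbitrarily for the example. Add the logs, then take the antilog.)

```python
import math

a, b = 537, 249
log_sum = math.log10(a) + math.log10(b)  # add the logarithms...
product = 10 ** log_sum                  # ...then take the antilog
print(product)  # roughly 133713; the exact product is 537 * 249 = 133713
```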

Of course, we can’t all just go off and live the life of the noble savage, nor is there great virtue or salvation in doing so. But the reality is that the further we become dependent, not only on society at large but on machines and mechanized processes, the more helpless and neotenic we become, until we will be no more capable of independent self-sufficiency than a small child. And there is really nothing that we can do, or even particularly want to do, about that.

We have met the enemy, and they are us. Staring gap-mouthed and pot-bellied into the televisor.

Stranger

It does sound like the underpants gnomes, but that is the point. A strong AI is going to be capable of logic, planning, and manipulation that we aren’t able to figure out. Like I was saying earlier, a cat can’t comprehend a laser, let alone build one. It can’t create an economy where it can buy one from a store. It just sees the red dot, and its hunter instincts take over and tell it to chase it.

It would be the same with us. An AI would do something seemingly benevolent (let’s say provide us with a cure for a rare form of cancer), and its implementation would somehow trigger a domino reaction that lets it escape in 8 months. It’s the same thing: it would use understanding far beyond what we are capable of to play on our biological urges (in this case, not wanting to die of cancer) and manipulate us. If I want the cat to go in the bedroom, I just lead it there with the laser. The cat just sees what is in front of it and follows its urges. We have urges too. We want to be free, wealthy, healthy, respected, etc. I’m sure a smart AI could manipulate us by giving us what we wanted while secretly giving itself what it wanted (assuming it wanted to escape).

I feel like you and others are assuming a strong AI would engage in manipulation we can comprehend and counteract. I don’t think that would be the case. We wouldn’t even understand we were being manipulated by a truly strong AI, because our brains couldn’t process that level of complexity or depth.

No, that is missing the point: the underpants gnomes were not actually doing anything, or else their plan was bananas.

Again, that assumes a machine that no one is making or planning. An expert system for disease like Watson is useless when confronted with a chess problem, and just as useless when it tries to figure out the outside world and the rules the expert system is not aware of; and it is not likely that we will give it that knowledge when it does not need it for the task at hand.

Uh, no, I do not assume. This subject was actually one of the first ones I asked about on the Dope more than 10 years ago, and I have researched it because I do want to write my own sci-fi tales that are more grounded. The expert systems that we will get will require a lot of maintenance, and by their nature and design they are not likely to accept or tolerate dealing with issues outside their designed expertise.

I have to say that a lot of the fears do sound like the ones many have about GMOs.

While Asimov did write tales that dealt with those fears, the bulk of his tales dealt with the exaggerated fears many humans would report to have (and I think he was correct about this) when the time comes to work with AIs and artificial beings. A lot of what the experts will do in the matter of safeguards and capabilities will be ignored forever.

I agree with Asimov that the “Frankenstein Monster” is not how the future will be regarding AI.

http://www.nytimes.com/2004/07/15/movies/critic-s-notebook-for-asimov-robots-were-friends-not-so-for-will-smith.html

I was addressing your objection to DerTrihs’ joke, actually.
As for prevention, sure, there could be. Windows also could be bullet-proof, and the Internet could be spam-proof. I just heard a talk by a software testing guy from Microsoft: it took thousands of people to fix simple security weaknesses in their code, weaknesses that cost their customers billions. (And they admit that.) Being able to do it right the first time and actually doing it right are very different things. Any AI is going to be a kludge, and no one will really know how it works.
Still, I very much doubt any AIs will kill us. They have better ways.

There might be. But any laws our tech-clueless Congress passes will be worse than useless.

You should really read Williamson’s story. The machines did not kill anyone. Far from it. That’s why it is a classic.

Indeed, but what I was focusing on here is what I think the results of those laws will be. While they may stop the progress of the AIs, the laws will also cause a lot of the progress and benefits to never materialize.

:dubious:

You are missing the point. The trigger in this case will be when the people who do accept euthanasia, or other laws that could turn harmful in the hands of the more intelligent robots, notice the change early and put a stop to it. The moment people figure out that the AIs are disregarding the laws (and IMHO this will be noticed very early, not late), more than just public opinion will be mobilized against those AIs; and if they are not going to kill anyone… well, they will be easier for the population to destroy than zombies.