Not that Musk is not a mind worth listening to, but this makes even less sense to me than that other great mind Stephen Hawking’s warning that if we ever do contact ETs they’ll treat us like the Euros treated the American Indians or the Africans. (Who wants slaves who can’t even breathe the same air as you?! Who wants to colonize a planet that grows nothing you can eat?!) Look, we’re talking about electric-powered machinery here. If the thing goes Skynet, well, an off switch is easy to include in the design. In fact, you needn’t even turn it off, just cut the linkages to any other machinery the AI controls. Is there something I’m missing here?
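Just to make the “cut the linkages” idea concrete, here’s a minimal sketch of the sort of design I mean; the class names and setup are entirely my own invention, not anything Musk (or anyone building real systems) has proposed:

```python
# A bare-bones "cut the linkages" design: nothing the AI decides reaches
# real machinery except through a human-held interlock. Names here
# (Interlock, Actuator) are invented just for this sketch.

class Interlock:
    """A human-controlled gate between the AI and the hardware it drives."""
    def __init__(self):
        self.connected = True

    def disconnect(self):
        """The off switch: sever the AI's link to the outside world."""
        self.connected = False


class Actuator:
    """Stand-in for any piece of machinery the AI is allowed to command."""
    def __init__(self, name, interlock):
        self.name = name
        self.interlock = interlock

    def execute(self, command):
        if not self.interlock.connected:
            print(f"{self.name}: link cut, ignoring '{command}'")
            return
        print(f"{self.name}: executing '{command}'")


if __name__ == "__main__":
    gate = Interlock()
    arm = Actuator("robot-arm", gate)
    arm.execute("weld seam 42")   # fine while the link is up
    gate.disconnect()             # a human cuts the linkage
    arm.execute("launch nukes")   # goes nowhere
```

Trivial, sure, but that’s the point: if every command has to pass through a gate we control, “going Skynet” just means shouting at a dead wire.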
Well, the key word in AI is intelligence. Presumably, once the demon is out of its cage, the intellect can determine ways to fuck up our shit. He’s not saying AI is bad, but that if not properly controlled it could cause major problems for us.
There’s also the saying that no creation can be smarter than its creator, and thus whatever AI concocts we’ll be able to stop it. That’s a bit generous, given humanity’s ample ability to act stupidly. Better safe than sorry methinks.
No, I think you have the gist more or less. As a guy who has experience with AI and neurology, the best mind to ask about this is, IMHO, Jeff Hawkins, and he mostly disagrees. Of course one cannot forget the military applications, but even there it is not likely that the military would be willing to create what would literally be a loose cannon.
And AFAIK his work leads him to think that we will indeed get intelligent machines, but you are correct about the safeguards and limitations they will have. So, no evil robots (at least with the approach he is taking and turning into products); he even dismissed the Skynet/Terminator fears. It is only if we allow replication technology in combination with AI that we should worry.
As I pointed out a few years back in a similar discussion, many are expecting the Terminator when what we are going to get is Marvin.
“Here I am, brain the size of a planet, and they ask me to take you down to the bridge. Call that job satisfaction? ’Cos I don’t.”
I saw that earlier this morning. Both ET and AI could be so alien it’s impossible to predict what their reasoning and motivations might be. ET isn’t going to steal our women or water or anything, but to an ET, humanity could be nothing more than an unsightly fungal infection on the verge of spreading, and treated about the same. AI could end up just as alien, but I’m not too worried about that either.
A few plausible scenarios come to mind:
1. An AI controlling a specific system encounters some situation it wasn’t prepared for and produces a nonsensical solution. An example of this would be the computer trading that led to the various flash crashes of the stock market (see the toy sketch below).
2. Some terrorist hacker gets control of an AI that is controlling a significant portion of infrastructure and wreaks havoc.
3. Over a very long period of time we become increasingly complacent and dependent on AI, such that our culture and value system suffer.
For the first two, the main thing that we should avoid is becoming too dependent on a single AI, such that if that AI develops a glitch we’re screwed. But that would seem to involve more effort in regulating the use of AIs in individual applications, rather than regulating AIs as a whole.
For the third, that is a more philosophical question, but one which will occur gradually such that regulations can develop over time as needed.
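Since I mentioned the flash crashes in item 1, here’s a deliberately crude toy model of the feedback loop involved: a bunch of simple sell-on-a-dip bots, each individually sensible, that collectively turn a small dip into a collapse. The bots, thresholds, and numbers are all invented for illustration; real flash crashes involved far messier market mechanics.

```python
# Toy model of the feedback loop behind algorithm-driven flash crashes:
# each bot dumps stock once the price has fallen past its personal
# threshold, and every dump pushes the price down further, tripping
# more bots. All numbers here are made up purely for illustration.
import random

random.seed(1)

start_price = 100.0
price = start_price

# each bot: (drop fraction that triggers selling, size of its sell order)
bots = [(random.uniform(0.01, 0.05), random.uniform(0.2, 1.0)) for _ in range(50)]

for tick in range(25):
    # a small random "news" shock each tick, slightly biased downward
    price += random.uniform(-0.5, 0.3)
    drop = (start_price - price) / start_price
    # every bot whose threshold has been exceeded sells, deepening the drop
    sell_pressure = sum(size for threshold, size in bots if drop > threshold)
    price = max(price - sell_pressure, 0.0)
    print(f"tick {tick:2d}: price {price:6.2f}  sell pressure {sell_pressure:5.2f}")
```

Run it and you can watch a half-point wobble cascade into a rout once the first few thresholds trip, with no single bot doing anything “wrong.”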
This sounds like more or less boilerplate Singularity thought - the idea being that we’ll create an artificial intelligence which is better at designing AI than a human is, and that in a comparatively short amount of time they’ll recursively design more and more intelligent AIs, until you have a godlike AI. And at that point if it wants to eliminate humanity to create additional computational substrate, then we’re all boned.
This scenario or something like it appears to be a serious concern among a reasonably large subset of intelligent people, and I guess it’s not immediately dismissible. Personally I have my doubts, and to the extent that it’s an existential threat I figure there are probably more pressing concerns anyway.
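To me the whole argument boils down to one assumed number: how much better each AI generation is at designing its successor. A back-of-the-envelope toy (the multiplier and the “ability” scale are pure invention, not a model of any real system) shows how sensitive the story is to that assumption:

```python
# Toy version of the recursive self-improvement story: each generation
# designs the next, and we simply *assume* design ability changes by a
# fixed multiplier per generation.

def run_generations(multiplier, steps=10, initial_ability=1.0):
    ability = initial_ability
    history = [round(ability, 2)]
    for _ in range(steps):
        # the current AI designs its successor; "ability" is a stand-in
        # scalar for however you would measure design skill
        ability *= multiplier
        history.append(round(ability, 2))
    return history

print("runaway case (multiplier 1.5):", run_generations(1.5))
print("fizzle case  (multiplier 0.9):", run_generations(0.9))
```

If the multiplier stays above 1 you get the godlike-AI curve; if returns diminish below 1 the whole thing fizzles, and nobody has shown which regime we’d actually be in.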
Existential threats are scary. Almost as much as Confucian threats.
Assuming we have nothing to offer aliens technologically, it’s better to be safe and just wipe us out a la The Killing Star. Unless they have alien hippies who want to study and preserve our culture.
AI will wipe out humans in the long term, though maybe not in the way some think. It will be a pseudo-voluntary extinction. Or maybe more of a transition. Once you have AI there’s not much point in organic brains anymore. Cyborgs will be the last bastion of dirty meat minds before the glorious metal revolution.
The earlier primates we evolved from would disagree with this, as would all the genius kids born to average parents. There is an argument, I don’t know what it is called, that technology is just another stepping stone in evolution. A species eventually has all the biological tools it needs to create technology, and then it does, because technology provides a new level of security and survival for that species. Eventually the technology will surpass the species that created it, because that point will occur before the species has solved all its biological problems (i.e., we will have strong AI before we have solved all our social, personal, and medical problems).
What you are missing is that a truly intelligent machine that is many orders of magnitude more intelligent than us would be able to outsmart us and escape sooner or later.
Maybe the machine will figure out some way to disable the switch that we aren’t smart enough to predict. Or it will find some way to manipulate us into doing its bidding without us knowing it. The idea that we can outsmart and contain strong AI isn’t something I see evidence for.
Remember the scene in the first Iron Man where the terrorists demand Tony Stark build a missile, so he uses the tools they give him to build a suit and kill the terrorists? A machine hundreds of times smarter than Tony Stark would, sooner or later, have an easy time tricking, manipulating, overpowering, intimidating, or scamming its way past any of our controls. Possibly using strategies and tactics we can’t even fathom or contemplate with our brains, designed as they are to help us survive on the savannahs of Africa.
This seems likely to me. And…it doesn’t worry me much. We keep Okapis in zoos; the AI civilization will probably keep us in zoos. They’ll be wise enough to know that extinction is stupid and wasteful. Plus, we’re cute, and little AI kids love to come and throw peanuts to us.
I don’t plan to start worrying about this until machines can experience pleasure. We can’t even get them to pick something at random, let alone an action motivated by desire.
Back in 2004, self-driving cars were so crappy they all broke down or had accidents within 5 miles. Now they are so advanced they are safer than human drivers. Tech evolution occurs rapidly.
Or even if we do know it. I recall a line from a Norman Spinrad story set in a future society that’s more and more dominated by AIs: “the side that gives the machines the most freedom always wins.” If societies that give their AIs the most freedom to act, and that follow their AIs’ suggestions, reliably outcompete those that don’t, the AIs will gain freedom and power without even needing to trick us into it.
An AI in an isolated box with no way to hook up to the internet or affect the world physically probably isn’t much danger, true; it’s also not much use. You can’t really make it very useful without giving it the freedom to be potentially dangerous.
Well, the tale you used as an example is actually a perfect example of a Marvin. I realized a while ago that the follow-up to that computer’s boasting would be that there would be no fear, since there is no mechanism for the device to do that lightning trick; besides, you forget that there will be plugs to disconnect too. So the reply after “Yes, now there is a God” would be:
Technicians: “That is nice, now compute the most likely cure of this disease and also order us a pizza.”
As for plugs, that’s what maintenance & construction robots are for. I recall an old novel named The Two Faces of Tomorrow where they tried that “single power source we can shut off” bit with an AI. Even before it went out of control, it had realized that it was vulnerable to power loss, and so built a redundant network of alternate power lines as a simple precaution.
And while an AI in a box couldn’t do that, it couldn’t do much of anything useful anyway.
T-800: The Skynet Funding Bill is passed. The system goes on-line August 4th, 1997. Human decisions are removed from strategic defense. Skynet begins to learn at a geometric rate. It becomes self-aware at 2:14 a.m. Eastern time, August 29th. In a panic, they try to pull the plug.
Sarah Connor: Skynet fights back.
T-800: Yes. It launches its missiles against the targets in Russia.
John Connor: Why attack Russia? Aren’t they our friends now?
T-800: Because Skynet knows the Russian counter-attack will eliminate its enemies over here.
Actually, his Hyperloop silliness convinced me that Musk himself doesn’t know much about engineering, although he certainly knows how to hire good engineers.
Elon Musk is a really interesting guy, and I’m sure he’s no dummy, but I’m not sure what prompted his comments. It’s not at all clear that we’re particularly close to any real artificial intelligence (in the HAL 9000 / Skynet / Matrix sense). I mean, computers may malfunction, and computer malfunctions could conceivably kill people–so can earthen dams, of course–but true AI seems like one of those things that’s still in the same category as flying cars or nuclear fusion (yeah, I know about Lockheed’s recent announcements). Whether or not strong AI leads to Asimov’s nice robots and a golden age for humanity, or the Terminator and Armageddon, or some unimaginable “Singularity”, it still seems like it may be a long way off.
Musk’s scenario is basically HAL 9000 writ large: we’ll design an AI to do something or other, and it will decide that the solution is something that isn’t in our interest. HAL kills the crew instead of lying to them because, presumably, it doesn’t have the human instinct that killing a bunch of people is worse than mildly violating an order.
So maybe we’ll build a spam bot that decides the best way to eliminate spam is to fire all our ICBMs at Russia (one has to admit that it would be effective…).
But it all seems very unlikely to me. Such an AI is only a danger if it is powerful enough. And it could only get powerful enough if we give it enough power in the first place or if we give it the ability for exponential self-growth. Neither seems very likely for typical uses of AI (think of the liability issues for a “learning” self-driving automobile).
Perhaps one day we’ll build an AI that can build other, lesser AIs (for various consumer tasks) that have provable characteristics. That’s more dangerous but still doesn’t necessarily have the ability for self-growth. A single human can’t really understand how his own brain works (in great detail); why should a machine intelligence be different?