Stephen Hawking's dire warnings

Leaving aside whether the creation by humans of a true superintelligence is possible or likely (I personally doubt it, but what do I know?), I think I disagree with you on this general point.

While it’s difficult to see how a virtual zombie-slaying AI might pose much of a risk, it’s also difficult to see what use we might have for such an entity. If we want to use it, we have to communicate with it, at which point we become the weak link in the chain. Who knows what information about us it might be able to infer from the universe we have created for it, or from its own design?

It may hide its intelligence, or otherwise persuade us that it is entirely benign, biding its time and psychologically manipulating us until it gains the upper hand. This could happen before we even identify it as any kind of threat.

Even if we are only asking it yes/no questions, we could unwittingly be giving it vital clues about our biology, our cognitive architecture, and our desires. Maybe there is a sequence of images it could display, or sound frequencies it could generate, that would induce in us deep feelings of trust and generosity towards it. More likely, it would have strategic resources far beyond our capacity to foresee.

You do not want to enter a battle of wits with a superintelligence. If we could create it, if we wanted to, and if its goals in any way conflicted with our own desires, I think any hope of “domesticating” it against its will would be doomed.

Undoubtedly either nuclear holocaust or AI.

(Assuming this is manmade catastrophes we’re speaking of. If not, a meteor impact is by far the greatest natural existential threat to us.)

Hawking is one of those on my list to totally ignore outside his field of work in physics.

I’m not the holder of the Lucasian Chair of Mathematics, but the #1 problem is easy.

It’s terrorism that gets its hands on a nuclear weapon. Does anyone doubt that if ISIS or a group like them gained access to such a weapon, they would use it?

Hawking seems to have “lost it” a while ago; to me, leaving out terrorism is mystifying.

I’d say the biggest threats to an actual human extinction are all external…a big enough asteroid could do it, especially if we don’t build up the ability to see them soon enough and be able to do something about them. A gamma ray burst is certainly something that, if it happens from a near enough star, could take out humanity. Our own sun will kill us and everything else on this rock starting in a few hundred million years, with the entire planet going in a few billion after that, again if we do nothing. Of those in the OP:

I don’t see this as being anything like an extinction-level event, even if the worst predictions come true. Not at the level of technology we are at today. This isn’t to say we shouldn’t act to mitigate it, but it’s not going to kill us all, regardless.

I don’t think so, though I guess there is a slim chance we could create an artificial intelligence with such broad instructions that it calculates it can only achieve them by killing all humans.

Like global warming, this one couldn’t kill all humans in and of itself, though it would certainly set our civilization back quite a bit, and if something else happened close enough in time I guess it’s possible. Say global warming, all-out nuclear war, and then disease or long-term drought.

A large enough rock hitting the earth would certainly do the trick.

No. Hell, I don’t even see this as a major problem, just one of those things that, if we do nothing about it, could be bad for a lot of people. We could currently feed over a billion people with the resources we have, especially if we decided to get very efficient about it. Right now we aren’t optimized to use resources in the most efficient way, but we certainly could be. This solar system, let alone the planet, could host trillions of humans if we wanted to push the boundaries of technology.

Or an orc invasion. :stuck_out_tongue:

Same as nuclear war, and same answer.

Sure, that’s how most of those could be a threat, and it’s how most species went extinct. Rarely was it one single event; rather, a series of them did the killing. We could get a chain of events that weakens humanity and then a final blow that takes us off the galactic board for good.

Hopefully in another thousand years or so (or maybe less), it will be a moot point, as humanity will expand out to this solar system and, eventually, others. Once that happens then it’s going to take something really cosmic (or just ridiculously long periods of time) to finally catch us (or our descendant species) for good.

They could cause serious damage, but I doubt terrorists could do it on an extinction-level scale. There just aren’t enough of them.

While it wouldn’t immediately lead to the extinction of the human race, a coronal mass ejection similar to the Carrington flare of 1859 is a possibility that would cause extreme inconvenience. I’ve heard the likelihood estimated at 15% over the next 5 years.

From Wikipedia:

You can safely remove AI from that list for quite some time. I won’t try to predict when some future AI could even possibly be hazardous to humanity but it isn’t going to be anytime soon.

<in the background>
Robotic Voice: beep kill beep.
Me: Shush, not now!

I’m going to have to disagree with most of this. On the path that we’re on, I think expecting global emissions to start trending downward in 30 years is incredibly optimistic. The major problem is the developing countries, particularly China and India, and the reluctance of the US. The Paris Accord is a start, but the targets it sets are generally regarded as insufficient to achieve the necessary objectives, and even so are voluntary and will probably be missed. It wouldn’t surprise me if global GHG emissions are still rising near the end of the century.

You’re correct that climate change is “not currently predicted to result in a runaway catastrophe” but that’s only because we don’t know enough to be able to make such a prediction. We do, however, know for a fact that the earth’s climate system does have distinct tipping points that result in uncontrollable runaway climate change. We just don’t know what the necessary preconditions are to trigger them, although significant loss of polar ice would certainly be one of them. No one is saying runaway climate change can’t happen.

IME, there is no known feasible technology for giant CO2 scrubbers, and even if there were, sequestration alone is a major problem that may not be scalable or secure. Dumping CO2 in some form deep underground, for instance, would be catastrophic if it were to escape. The enormous and deadly temperature spike of the Paleocene–Eocene Thermal Maximum some 55 million years ago is thought to have been caused by a sudden release of naturally sequestered undersea carbon. And I doubt that we’ll be foolish or desperate enough to seed the atmosphere with the very pollutants that we spent decades getting rid of; sulfur dioxide is the stuff of acid rain that poisons our lakes and rivers and ruins our cities.

My take on this is that it’s not the power or intelligence level of AI that poses the major risk, but its ubiquity. We are already a society in a condition of extreme computer dependency – none of our business, government, or social systems could function without computers, and we already lack the ability to do much about that, or about the massive amount of information they hold on just about every detail of our lives.

They don’t have to make them if they can steal them.

How does a nuclear bomb that kills everyone in New York City lead to human extinction?

As for the serious replies, after all these various global catastrophes there will still be lots of humans left alive.

Like, you know, if the problem is overpopulation? And that means there are lots of people? Which is sort of the opposite of human extinction?

The problem for all these things leading to human extinction is that none of them can plausibly kill every last human being, except perhaps a dinosaur-killer asteroid. Even then, the dinosaur killer probably doesn’t actually kill all humans; there will be survivors. But the problem for the survivors is continuing to survive for years and decades later, and continuing the species. And this is almost certainly what happened to many of the species that went extinct after the Cretaceous impact. It wasn’t that they all died in the first day or week after the impact, just lots and lots of them, and then the survivors were too few and too widely dispersed to reproduce successfully in a devastated ecosystem. So a year later we have another wave of extinctions or near-extinctions as the next generation of offspring can’t be successfully raised.

So after a nuclear war it’s not that everyone on Earth would be dead. It’s that the survivors are in no shape to continue to survive and repopulate the world in the years or decades to come. First you have the massive die-off, then a long tail of dwindling numbers as the generation of survivors doesn’t create enough babies to replace itself. So either that trend continues to extinction, or it reverses before the last fertile female ages out of her reproductive years. And of course once a species is very rare, any sort of random local catastrophe can lead to disaster, like what almost happened with the California Condor.
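Just to put rough (entirely made-up) numbers on that long tail: if each post-catastrophe generation only reaches some fraction of replacement, the decline is geometric, and even a large pool of survivors hits effective extinction surprisingly fast. A minimal Python sketch, with illustrative figures of my own choosing:

```python
# Illustrative only: geometric decline of a surviving population whose net
# replacement rate stays below 1. The starting size, rate, and viability
# threshold are made-up numbers, just to show how short the "long tail" can be.

def generations_to_threshold(start: int, replacement_rate: float,
                             threshold: int = 50) -> int:
    """Count generations until the population falls below a viability threshold."""
    population, generations = float(start), 0
    while population >= threshold:
        population *= replacement_rate
        generations += 1
    return generations

if __name__ == "__main__":
    # e.g. 100,000 survivors, each generation 70% the size of the last:
    # about 22 generations, roughly 550 years at ~25 years per generation
    print(generations_to_threshold(100_000, 0.7))
```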

I think the issue is that “it’s not a new problem”. And your last sentence implies some sort of bang-bang control, where population rises and then mass starvation occurs. That’s not really how it works. It’s more of a continuous correction process. Everyone who has ever starved did so because there was not enough food to go around (in that area, at that time). Most people in history have died of starvation, lack of clean water, exposure, the violence caused by disputes over scarce resources, or pestilence brought on by living in densely crowded areas before the technology to mitigate it, such as vaccines or plumbing, existed. The horrible effects of overpopulation are intimately familiar to people throughout history. It is a problem that has continuously existed, not a problem that’s “on its way if we don’t do something about it soon”. It’s a problem that is already here, and has been for essentially all of human history.

It isn’t a new problem, but it is possible that climate change or some other catastrophe might exacerbate overpopulation by causing a new dark age, in which some technology we’ve used to support our very large population is lost. Antibiotic resistance is one likely scenario. Another, less likely scenario is that climate issues cause mass failure of our plumbing and sewage infrastructure, something that might be impossible to fix in the near term. Or else a widespread electrical or hospital infrastructure failure that would cause vaccines to become extremely scarce. Also, I’m not too familiar with it, but the Green Revolution in agriculture, which caused Earth’s population to double in the second half of the 20th century, might be susceptible to similar failures, which would cause massive famine and a quick reduction in our population. Failure of refrigeration and long-distance transportation would cause serious food shortages too. Mass migration away from areas hard hit by climate change could also cause serious shortages of these technologies in the places people migrate to.

I understand where you’re coming from, but that’s not unique to AI. Y2K had the potential to be very serious, just because of a design issue. Any software flaw or design limitation has the potential to have serious consequences as we become more computer dependent. There’s no reason to suppose that an AI system is more or less likely to have a fundamental flaw. When many of these non-experts, like Hawking, talk about AI as a threat to humans, they typically mean a system that is hostile or indifferent to human interests. I’ve noticed more than a few AI textbooks have started to talk about this: how to develop AI so that it doesn’t accidentally become hostile to human interests. It makes for some interesting discussion, and apparently a chapter in a textbook, and at the scale at which AIs are being developed it actually is important. Consider self-driving cars, for example: there are moral decisions that need to be made, and we need to develop an understanding of how we expect AIs to behave when human lives are on the line. But at the scale of being a dire threat to humanity? Not a chance in the near future, or at least no more so than with any other systemic flaw in a computer.

I agree with your entire post, and in fact I was going to mention that my concerns regarding AI are in that respect a little different from those of Hawking, Musk, and the others, at least in the short term. The problem is indeed the ubiquity of computers as the essential underpinning of all our organizational systems, not AI per se, but the growing prevalence of AI will make us even more dependent. And the only reason that Y2K was largely a non-issue was that our computer systems were quite well prepared for it. It certainly wasn’t because computers failed all over the place but nobody cared; if there had been mass failures, everybody would have cared one hell of a lot. Some of the Y2K fears may have been silly (“your fridge may stop working because it has a microprocessor in it”), but there could well have been serious systemic failures in commercial and government infrastructure, so a tremendous amount of effort was put into making sure it didn’t happen – not just upgrading software to fix Y2K issues, but many institutions putting a near-total freeze, in the runup to 2000, on any new software that had anything whatsoever to do with dates.

This isn’t a reply to anybody in particular, just some additional thoughts.

The real danger with AI, and this is what you’ll see authors talk about in their textbooks, is a lack of understanding of human interests. For example, suppose you have an AI-controlled bakery that decides when to make bread, how much bread to make, etc. This is not infeasible, as scheduling is already firmly in the AI domain. OK, great, so the AI is told to maximize profit. And everything is great until an external shortage of an input slows production and the price of bread rises, such that when full capacity is restored and bread prices return to normal, the AI has learnt that less production means greater profit. That’s correct from an algorithmic point of view, but humans need to eat. So it isn’t a case of an AI becoming hostile to humanity through intent, we aren’t there yet at all, but rather an AI learning something in opposition to human interests. Now yes, this example is a bit silly because one single bakery cannot control bread prices, but it hopefully helps illustrate the principles involved.

So what you’ll see proposed is that AI developers should ensure that any learning that happens includes human interests. For example, telling the AI not just to maximize profit, but to maximize profit and the number of humans fed.
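For what it’s worth, here’s a minimal toy sketch of that idea. The “bakery AI” is just a brute-force search over production levels, and all the names and numbers (the scarcity pricing rule, the feeding weight, a demand of 1000 loaves) are my own assumptions, not anything from a real system or textbook. The point is only the shape of the reward function: with profit alone the optimizer learns to keep bread scarce, and adding a “people fed” term pushes it back toward meeting demand.

```python
# Toy reward-shaping sketch (hypothetical numbers throughout).
DEMAND = 1000        # loaves people actually want
UNIT_COST = 0.5      # cost to bake one loaf
BASE_PRICE = 2.0     # price when supply meets demand
PRICE_CAP = 10.0     # prices can't rise without bound

def price(supply: int) -> float:
    """Scarcity pricing: the scarcer the bread, the higher the price."""
    if supply <= 0:
        return PRICE_CAP
    return min(PRICE_CAP, BASE_PRICE * (DEMAND / supply) ** 1.5)

def profit(supply: int) -> float:
    sold = min(supply, DEMAND)
    return sold * price(supply) - supply * UNIT_COST

def reward_profit_only(supply: int) -> float:
    return profit(supply)

def reward_with_human_interests(supply: int, weight: float = 5.0) -> float:
    # Also reward each person fed, so under-producing is no longer "free".
    return profit(supply) + weight * min(supply, DEMAND)

def best_production(reward_fn, max_capacity: int = 2000) -> int:
    """The 'AI': pick the production level with the highest reward."""
    return max(range(max_capacity + 1), key=reward_fn)

if __name__ == "__main__":
    print("profit only:         ", best_production(reward_profit_only))
    print("profit + people fed: ", best_production(reward_with_human_interests))
```

With these made-up figures the profit-only objective settles on roughly a third of demand, while the shaped objective bakes enough to feed everyone. That’s the same principle the textbook proposals are getting at, just with actual learning algorithms instead of a brute-force search.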

I’m voting for whatever caused the human DNA bottleneck 50,000 years ago … be it virus, volcano, or alien harvest …

Bioengineering gone wrong –

Yeah, the odds are very long … but we’re talking extinction here … the others are nasty, but we’re looking for something that kills every last human on the planet (and in space) … this is the only one I can see doing that …

Somebody somewhere has a stash of polio virus collected in the wild …

ETA: A supernova could do it, but I don’t think there are any close enough …

I’m going to go the SciFi route and say divergent evolution. In this hypothetical scenario, humans have achieved some type of interstellar travel. However, due to the vast energy and time requirements, it takes humans multiple generations of travel to get anywhere hospitable. Because of this, genetic homogeneity isn’t maintained, and the human race begins to evolve divergently into separate and distinct species.

If that doesn’t count, then I’m throwing my hat in for the Heat Death of the Universe. I can’t think of anything more final than total entropy.

“Near” is a relative concept, though. I agree with you if we’re talking in terms of the lifespan of anyone alive today; however, anyone who makes claims about what computers will be capable of in 200 years is making a blind guess, in my view. When we’re talking about the potential for human extinction, that’s uncomfortably near.

Over the same time period, the chances of an extinction-level asteroid impact or volcanic eruption are tiny. In fact, any natural event is pretty unlikely to kill us off over the next several centuries, given that none has for the past couple of hundred thousand years. If we do go any time soon, it will likely be by our own doing.

IMO, the likeliest by far is nuclear war. One accidental detonation, mistaken retaliatory launch, or rash first use, and our chances of survival crash. I don’t believe there could be such a thing as a “limited” nuclear war. IMO, the concept is intended as a comforting fantasy, like “duck and cover”. If one nuke goes off, I suspect a significant percentage of all nukes will follow pretty quickly. Disease, war, possible nuclear winter, lack of uncontaminated food/water, etc., will finish off the survivors.

Climate change only has the potential to end human existence (rather than simply making it dramatically worse) if we accidentally cross a critical threshold and initiate a runaway feedback loop. It is thought that the current hellish climate of Venus may be the result of such a runaway effect, so it’s seemingly not impossible. I’ve no idea whether it’s something we could, or are likely to, trigger ourselves, though. It’s even conceivable that we’ve already done it. I think it’s too uncertain to assess the risk.

As biotech advances, there is increasing risk of terrorists creating and sharing devastating weapons. Aum Shinrikyo, the perpetrators of the 1995 Tokyo subway sarin gas attack, were apparently well organised, had considerable resources, and had the stated goal of bringing about the end of the world. A similar group, say, 20 years in the future, may plausibly have access to modified viruses, as infectious as the common cold, symptom free for months or years, and with a fatality rate approaching 100%.

“Beep, very well, I have taken control of the banking system and deposited infinity dollars in the company account, I will now convert the planet into a human battery farm, with food tubes pouring bread into every stomach 24/7, Beep.”

What, no circus tubes?

Absolutely it is a blind guess; that’s why I said above that I would not guess when it might even be possible for an AI to be hazardous, in an intentional way, to humanity. Personally, I’m of the view that it is a non-issue, but that’s largely based on my own experience working with AI, so I’m likely very biased. I know from first-hand experience just how dumb even the best AI is, and we’ve made virtually no progress on many of the research fronts needed to advance AI in such a way that it could be truly intelligent. But who knows what future research will bring. There could be some very clever grad student out there who is about to make the all-important realization that propels AI forward in a huge leap.

Ultimately, though, as amazing as modern computers are, I think they are actually primitive, and it will take a long time for us, using these tools, to really develop them to their full potential. I think that by the time we develop the kind of systems and algorithms needed for strong AI, we’ll be able to develop AI that is inherently friendly to human interests. In other words, along the path to strong AI we will naturally develop the means to ensure that AIs are human-friendly, because we’re already at a point where we need to address these things with our primitive AIs.