Existential Threats

I read through the paper carefully. I didn’t see where it said or implied an existential threat. It seems to mainly say that there is a large information gap on a number of issues surrounding risk perception. I would still put the chances of water contamination killing every human on the planet at extremely low.

Except that’s science fiction. There is no known path to such an AI at this time (see my previous post). AIs do not have intent, inadvertent or otherwise. They are computational tools. While they can cause harm, currently they do so because they were badly programmed. But the same can be said for any piece of software. If it is created with bad intent (e.g., spyware, malware, ransomware), then the software will do bad things. But the software has no intent; it is simply the tool by which the intent was carried out. And non-AI software can equally be badly written by accident so as to cause inadvertent harm.

At this time, barring something of which I’m not aware or some kind of major progress just around the corner, humanity is not under any kind of threat from a true thinking machine.

Nonsense.

What we have today is some ideas for concepts for possible machines that might possibly be built at some uncertain future date with some uncertain future capability. Which, once built, may or may not be big enough or fast enough to counter whatever incoming threat we later find, with whatever warning time we get.

Certainly the global anti-bolide defense system (my term) will be sized based on decent logic as to the distribution of bolide sizes, directions of approach, and likely warning times, as tempered by the political will to pay for whatever the statistics suggest is necessary.

To humanity’s credit, we are taking steps to work towards countering this risk. And fairly soon (on a civilizational timescale) after we first began to have the technological capability to even try. But these are early days yet and there’s no assurance we won’t be hit before we get something built that would have negated that threat. Plus the inherently unmanageable tail risk that the Galaxy can always toss a bigger rock our way than we can handle.


COVID provides a sobering example of the difference between talking about having a plan for dealing with a global-scale problem, versus having fully developed operational systems in place to defeat one.

And humanity was one heck of a lot more prepared in late 2019 for a pandemic than we are today for a bolide.

I’m saying we could handle a local solar-system rock the size of a dinosaur killer that we spot ahead of time. Yes, if the galaxy tosses a moon-sized rogue planet at us, we are utterly fucked.

With how much warning? If we detected a minor impact that was 6 months away, could we respond? 3 months?
What is the least amount of time in which we could put together a mission that could stop the smallest asteroid that could be an existential threat?

I WAG 15 years.

If an object is big enough, it does not have to hit the Earth at all to cause general extinction: it just has to alter Earth’s orbit; it might even sling us out of the solar system altogether. And even if an object that big could be detected a century in advance, I see no way to change its course.
No such object is known to be on such a course, but if there ever was, the psychological consequences would be very interesting.

I think that’s a very pessimistic estimate. We launch things into orbit on a very regular basis now. Surely if we knew an asteroid was hurtling towards us we could put whatever satellites are on the schedule on hold and use our full industrial capacity to launch whatever is needed to build a craft capable of altering the asteroid’s orbit.

You’ve compared this to COVID, but an asteroid hurtling towards us is more comparable to WW2 than to a pandemic.

Something that big would have to be coming from outside our solar system (or at least the Kuiper Belt), because we would definitely know about an asteroid belt object of that size. That far from the sun the only way we’d find an object that big is by detecting its gravitational impact on the orbits of the outer planets, and if an extrasolar object was approaching us it would be moving so fast that by the time its gravity influences the outer planets it would be far too late to stop it. So I wouldn’t expect us to even come close to detecting something like that a century in advance; about the only hope for our species to survive something that big would be if we had spread beyond Earth by the time it shows up.

There are a few references to studies in here.

It’s a little hard to tease out what are assumptions and what are outcomes of those assumptions.

But it’s clear that if you want to deflect something, you have to hit it real hard in close, or just tap it much farther away. That leads to a huge tradeoff between big, fast, and soon.
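
For a rough sense of that tradeoff, here’s a quick back-of-envelope sketch in Python (my own illustrative numbers, not anything taken from those studies). To first order, ignoring real orbital mechanics and gravitational focusing, a sideways nudge of Δv applied T seconds before impact shifts the arrival point by roughly Δv × T, and you need to shift it by at least an Earth radius to turn a hit into a miss.

```python
# Back-of-envelope only: assumes the displacement at Earth is roughly
# delta_v * lead_time, which ignores real orbital mechanics and
# gravitational focusing. Lead times and the one-Earth-radius miss
# distance are illustrative assumptions, not mission figures.

EARTH_RADIUS_M = 6.371e6      # minimal miss distance: ~1 Earth radius
SECONDS_PER_YEAR = 3.156e7

for lead_years in (1, 5, 10, 25):
    t = lead_years * SECONDS_PER_YEAR
    dv = EARTH_RADIUS_M / t   # required transverse delta-v in m/s
    print(f"{lead_years:>2} yr of warning -> ~{dv * 100:.1f} cm/s of delta-v")
```

With a year of warning you need on the order of 20 cm/s; with 25 years, under 1 cm/s. Hence “tap it much farther away” versus “hit it real hard in close.”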

It takes years for a rocket just to fly out to a distance like the asteroid belt or Jupiter. Space is big. So if tomorrow we detected something inbound out around, say, Neptune, and we (magically) launched our counter-weapon that same day, it’d be 2022 or 2023 before the weapon got out far enough for a nudge to be enough.
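
To put some crude numbers on “space is big,” here’s a sketch assuming a probe that just cruises outward at a constant 17 km/s (roughly Voyager-1-class speed; the speed and the distances are my assumptions, not anyone’s mission design):

```python
# Rough one-way transit times at an assumed constant outbound speed.
# Real trajectories (gravity assists, spiral transfers) would differ.

AU_KM = 1.496e8               # kilometres per astronomical unit
SECONDS_PER_YEAR = 3.156e7
CRUISE_SPEED_KM_S = 17.0      # assumed constant heliocentric speed

targets_au = {
    "asteroid belt (~2.7 AU)": 2.7,
    "Jupiter (~5.2 AU)": 5.2,
    "Neptune (~30 AU)": 30.0,
}

for name, au in targets_au.items():
    years = au * AU_KM / CRUISE_SPEED_KM_S / SECONDS_PER_YEAR
    print(f"{name}: ~{years:.1f} years one way")
```

Even at that speed it’s most of a year to the belt, well over a year to Jupiter, and the better part of a decade out to Neptune distances.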

Again, I think the world is moving pretty quickly towards having a practical counter-bolide strategy backed up by real hardware capability. But right now in Feb 2021 we’d still be caught with our pants down, unable to do much more than watch it smite us. In 2050? We’d probably be ready - assuming we can keep the technical and bureaucratic momentum we have now going that long.

YMMV of course.

Yes, I think if such a body existed it would be way beyond Pluto right now and would have to be BIG, and no such object is on the cards right now. But if there ever was such a finding, we would be screwed, the whole planet, not just humans. And if this was known someday and we only had X years left (X < 100 years, I reckon), the psychology of the unfolding events would be very interesting.
The chances are extremely low, I must concede. But the consequences would be definitively final.
The other possibilities the OP has proposed have all correctly (IMHO) been refuted. I don’t see any of those happening. But an engineered pathogen à la 12 Monkeys could do the trick if some obsessed scientist put some effort into it.

Or perhaps a gene drive to cause infertility would be easier, if slower.

That is actually a very good point - we certainly have the ability to go to the asteroid belt and nudge a dinokiller TOWARDS US for example, since until we get to it, it stays in a nice predictable orbit. Something coming in from the outer planets would have to be reached while it is still as far away as possible.

If we were willing to use all available technology, we could probably still do it - instead of a slow transfer using chemical rockets and an efficient trajectory we could use NERVA to burn straight there (and tap harder if we wanted to as well).

We could also use nukes - not to blow up the asteroid (which wouldn’t help since the mass of the asteroid with all its kinetic energy could still hit us) but nuclear pulse propulsion could work.

Those are both proven technologies that we’ve tested on Earth, the only reasons we haven’t launched them into orbit are political and environmental - but both of those problems go away when weighed against total destruction.

Whether we could implement them in time is, of course, another matter. And I agree that our current trajectory of more and more space infrastructure, including the proliferation of nongovernmental enterprises in space, means that we’d be in a much, much better place to handle such a threat in 2050 than today.

I looked up the wiki article before I made my OP. I am interested in what most people think about the issue, partly because how people think about it could affect the outcome.

You cut class for 15 years? Are you an Aussie?

WAG is Wild Ass Guess

A pandemic, an Earth-asteroid/comet impact, or a supervolcano could possibly cause extinction. A pandemic, though, is likely to leave a fairly large population alive, nothing close to extinction. So it’s the other two that are most likely to cause extinction, while the rest could cause great destruction of human life across the planet short of killing us all.

Along a different line: William Gibson invented a near-apocalyptic scenario called the “jackpot” in his sci-fi novel The Peripheral. The jackpot is a series of many catastrophes, none existentially threatening in itself, that kills 80% of humanity over 40 years.

“nothing you could really call a nuclear war. Just everything else, tangled in the changing climate: droughts, water shortages, crop failures, honeybees gone like they almost were now, collapse of other keystone species, every last alpha predator gone, antibiotics doing even less than they already did, diseases that were never quite the one big pandemic but big enough to be historic events in themselves.”

This essay does a good job describing the jackpot without too many spoilers for the novel. The author also points out that Gibson thinks it’s a plausible/probable scenario.

But again: 80%

It’s easy to think of possible disasters that could kill billions of people. A full extinction of humans, though, is a different story. It may seem like hubris to some, but removing ~8 billion sentients from every corner of the Earth would take something pretty extreme; nothing like Earth has seen for the last 3 billion years.

Something doesn’t have to be a sure thing to be a plausible threat.

Packing humans around the poles is only going to worsen the threat. That’s assuming those areas even can be made to feed the remaining humans the way the temperate areas do currently.

A handful of inbred humans huddled in some cooled domes growing hydroponic potatoes in their own shit-water while outside all is a scorched desert devoid of any complex life is not what I call “surviving”.