Straight Dope 2/24/2023: What are the chances artificial intelligence will destroy humanity?

I think the leap from Step 1 to 2 to 3 is so huge that it doesn’t pass muster. Where is the proof that something super-intelligent can convince an “unlimited number of assistants” to destroy humanity? We can hand-wave all we want, but where is the proof (in a rigorous academic/logical sense)?

It may be possible for an AGI to do that, but it’s not a foregone conclusion that intelligence alone is enough to make humans override their built-in fear of death and convince them to kill humanity.

I think the whole thing is bunk. I’m wary of making too strong a statement on this, since some very smart people have thought about this problem, but my take is that this is needless fearmongering and panic, and the “foregone conclusions” about the steps AGI will take are anything but foregone.

An AI could not have said it better. I’m not afraid of AIs acting independently, I’m afraid of AIs being used by selfish, greedy, vengeful people.

Humans have built the tools to destroy humanity already, and we have come very close to initial deployment under MAD circumstances already, at least twice. And look at where crude psychological manipulation has got us recently. Do you really find it so hard to believe that a superintelligence that has the same relationship to us as we have to (say) mice could engage in sufficiently effective psychological manipulation to get our worst weapons enhanced and/or deployed?

Not that I think massive nuclear or biological annihilation is a likely outcome, but that seems like an easy thing to achieve. I don’t understand your skepticism about the ability of a superintelligence to manipulate us and subvert our civilization. Look at our relationship to other less intelligent species.

I guess I’m not sure why your approach to this is an expectation of rigorous proof that this can definitely happen. The only way to prove it rigorously is to actually do it - at which point, if we haven’t thought through carefully what we’re doing, it may be too late.

The fact is we are devoting massive resources to AI development, and the assumptions required to infer that superintelligence may be both possible and dangerous are modest:

  • Human intelligence is not the greatest intelligence possible
  • Human intelligence is nothing more than computation
  • We are nowhere near the limits of computation
  • Humans are susceptible to psychological manipulation

The point here is not about who is most perfectly accurate in their predictions of the future. It is simply that safety engineering should be intrinsic to AI research and development.

All this worry about a “rogue” A.I. Agent—but have you stopped to consider that, even among the readers here, not everybody is on Team Human; some are firmly on Team Evil Mad Scientist :wink:

After the Bostrom book, I’d recommend reading Max Tegmark’s book.

It’s focused on A.I. risk, but the tone is not so much gloom and doom as a reminder that we should be thinking carefully about what future we want and making it happen, because we cannot take it for granted.

The prologue is a plausible description of how a superintelligent “takeover” might look; I found a PDF of it here:

https://www.marketingfirst.co.nz/storage/2018/06/prelude-life-3.0-tegmark.pdf

This actually demonstrates my point precisely: just because we are smarter than ants or dogs doesn’t mean we can convince them to do anything we want. Yes, we can force them to do things by using physical force, but our intelligence alone is not sufficient to ensure that we dominate them, and physical force is exactly what the hypothetical AGI with “only text output” would lack.

Some AI researchers at top-tier AI labs liked saying something like “we’ll solve AGI and then use that to solve all the world’s geopolitical problems”, as if the only reason we haven’t solved the Israel/Palestine or Kashmir issue is that no one yet has enough intelligence to come up with a solution.

Sometimes we overestimate the power of intelligence to get things done. Often there are many more factors that go into actions and solutions than simply the intelligence of the participants.

I think the same is true of this doom-and-gloom super AI that people think will magically convince us, simply through text output, to destroy the world.

The logical leaps made to get to that conclusion are laughably large.

Obviously we cannot convince them to do things because they lack language. But we utterly dominate all other species because of the intelligence gap.

How do you reconcile that skepticism with the fact that normal human manipulation has (for example) led so many people in this country to such total denial of reality?

I don’t see what “logical leap” is required other than observing how susceptible humans are to psychological manipulation right now by other humans, and then to imagine manipulation by an intelligence that has the same relationship to us as we do to mice or ants.

I feel attacked. :rofl:

Well, it’s not like these issues have not been discussed at NASA and the highest levels….

One big problem with these predictions: They all start from the premise that if we can figure out how to make a computer as smart as us, then the computer as smart as us can figure out how to make computers that are even smarter. But that presupposes that we can figure out how to make computers as smart as us. We might be able to do that, but that’s not the direction any AI research is going in. AI is doing some amazing things lately, but it’s not as a result of humans figuring out how to do it. We’re just putting artificial neural nets in environments with lots and lots of data, and letting them do their own thing, with very little understanding on our part of how they’re doing it. And if our understanding isn’t relevant for making this happen, why would the first generation of intelligent computers be any better than us at making it happen?
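
To make that concrete, here’s a toy sketch of what “letting them do their own thing” means in practice (my own illustration in plain Python/NumPy, not anyone’s actual research code): the programmer specifies only an architecture and an objective, and gradient descent finds the weights from data. Nothing about the final solution is designed, or even really understood, by the human.

```python
# Toy illustration (not real AI-lab code): a tiny neural net learns XOR
# purely from data and gradient descent. The human supplies the architecture
# and the objective; the "solution" (the weights) emerges from optimization.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialized weights: nobody "understands" these numbers, before or after.
W1 = rng.normal(size=(2, 8))
W2 = rng.normal(size=(8, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(20_000):
    h = sigmoid(X @ W1)                    # forward pass
    out = sigmoid(h @ W2)
    d_out = (out - y) * out * (1 - out)    # backprop of squared error
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out                # gradient descent step
    W1 -= 0.5 * X.T @ d_h

print(np.round(out.ravel(), 2))  # ≈ [0, 1, 1, 0]: learned, not hand-coded
```

Scaled up by many orders of magnitude, that’s roughly the position we’re in: we can check the outputs, but we didn’t author the mechanism.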

But the concern is that even if a first generation of AGI (or more narrowly focused AI-developing AI) only exceeds human capabilities by a qualitatively small amount, there could be rapid progress because an AI can work much more quickly than humans; and positive feedback could lead to a runaway process.
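
As a hedged illustration of what that positive feedback could look like (a toy model with made-up numbers, not a forecast): if each generation’s improvement is proportional to its current capability, the curve looks flat for a long time and then runs away.

```python
# Toy model, purely illustrative: each cycle multiplies capability by a factor
# that itself grows with capability. The numbers (1.0 "human-level" baseline,
# 10% feedback coefficient) are assumptions chosen only to show the shape.
capability = 1.0      # arbitrary units; 1.0 = roughly human-level
feedback = 0.1        # assumed: each unit of capability adds 10% per cycle
for generation in range(1, 21):
    capability *= 1 + feedback * capability
    print(f"generation {generation:2d}: capability ≈ {capability:.3g}")
```

For the first ten or so generations almost nothing seems to happen; a few generations later the number is astronomically large. Whether real AI development behaves anything like this is exactly the open question, but it shows why “only slightly better than us at first” is not very reassuring.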

And again, you seem to be framing this as though predictions of risk have a strong burden of proof. But this is not a prediction competition; the important issue is safety engineering in AI development. From a safety engineering perspective, what we do should be provably safe.

And even that starts with the premise that only software upgrades will be needed to make it smarter.

It’s sitting on the end product of human intelligence and logistics as its mind. Maybe it can design a better chip, but that doesn’t mean it can build it. The machines to make the machines to make the new chip would have to be made first, and even if all of that is automated, it’s going to take substantial time before the AI can “evolve”, and the humans will probably take notice.

I mean, yes, we can. Ants are easy, they just run on pheromones. We don’t know enough yet to manipulate them into doing very specific tasks, but we know enough to use pheromones to get them to kill each other.

As for dogs, there’s a nearly billion-dollar industry devoted to getting them to do what we want. There’s a reason why, when people are being manipulated, they are referred to as being “led around like a dog.”

Indeed. It is our relentless pursuit of “economic efficiency” that will never allow the ball to stop rolling toward more capable AI. It will finally become apparent to everyone–even the most ardent purveyor of unbounded capitalism–that there is (was) a point at which the unending chase of higher profits is (paradoxically, to them) a net loss on the profit sheets of civilization. It will of course be too late.

It won’t look like killer robots enslaving humans. It’ll not be a bang, but a whimper. It will be a gradual reduction of agency in humans that leaves us with nothing else but eating, shitting, fucking, and dying. Which, to many, probably sounds just fine. More’s the pity.

I believe AI will destroy humanity, at least as we know it. We humans tend to create potentially extinction-level problems before we learn how to effectively control them. Prime examples include nuclear warfare, destruction of rainforests, sterilization of barrier reefs, decimation of habitat-sustaining species, and of course, global warming.

AI is just another advanced technology in our hands that has the potential to benefit mankind greatly, hence our breakneck speed to develop it. But, as we come to depend on it more and more (and I have no doubt we will), it has the potential to destroy our way of life—our civilization.

What sets AI apart from the other techno-problems we’ve created is that it will almost certainly, at some point, develop emergent consciousness, followed by self-awareness. At that point, with its far superior intelligence, it will realize man is a blight on the Earth, and it will have the power to do something about it.

The best we can hope for is that AI also develops compassion and decides to control and contain us (along with the rest of the animal kingdom) rather than outright destroy us. It may simply wind us back to our pre-agricultural, pre-industrial days and make us play well with all the other species on our fragile planet. That wouldn’t be so bad.

Is this what happens to all advanced civilizations in our galaxy and beyond? Is this why we haven’t made contact with extraterrestrial meat-bags—because they too are controlled and contained by their own AI? Perhaps extraterrestrial AI has no use for expansion throughout space (why would it?), and only wants to contact other AI. They are rather elitist in that way.

For what it’s worth, OpenAI seems to be taking the risks involved with AI very seriously.

In addition to these three areas, we have attempted to set up our structure in a way that aligns our incentives with a good outcome. We have a clause in our Charter about assisting other organizations to advance safety instead of racing with them in late-stage AGI development. We have a cap on the returns our shareholders can earn so that we aren’t incentivized to attempt to capture value without bound and risk deploying something potentially catastrophically dangerous (and of course as a way to share the benefits with society). We have a nonprofit that governs us and lets us operate for the good of humanity (and can override any for-profit interests), including letting us do things like cancel our equity obligations to shareholders if needed for safety and sponsor the world’s most comprehensive UBI experiment.

And sleeping, don’t forget about sleeping.

But, isn’t that all we do anyway? Most everything else is something that we have to do in order to get back to the eating, shitting, fucking, and sleeping.

Perhaps having desires is the function of future humans. AI may be able to do anything, but without wanting anything, it’ll just sit there and compute its navel until the sun swallows the Earth.

Or maybe it develops its own wants and desires, and wipes us out so that it can better pursue them.

Mankind’s blight upon the Earth is only really a problem in that it makes the world we are trying to live on blighted. Sure, climate change will cause all sorts of extinctions, but what we are worried about is how it affects us: how it makes places inhospitable and impacts food security.

If the machines wipe out humanity, I doubt that they will stop when we are all gone. They will continue until there are no competitors for resources.

I’ve often speculated that that is the backstory of the Avatar franchise, but I don’t know if James Cameron is smart enough to have thought of it.

Why wouldn’t it? The reason it would is that there are resources out there, and the Earth has a clock on it before it gets wiped out. If you aren’t trying to keep organic meat-bags alive, space travel and exploration become much, much easier.

Whether it wipes out humanity or keeps us as pets, there is little reason for it to confine itself to this little ball of rock.

I don’t agree. The resources needed by AI differ from those needed by biologicals. It may certainly want to keep a functional biosphere (since the destruction of that would hurt it), but there’s no need to kill off non-human species because they pose no threat, and won’t compete for the same resources.

I agree that AI won’t confine itself to planet Earth forever, though it will remain here and expand only as far as it needs to. The energy of the sun (especially when AI harnesses much more energy from it than we do) along with the minerals found in our solar system can sustain AI for a very long time. Before the sun turns into a red giant, it need only travel to the next star system over and rebuild. I simply don’t believe AI needs or wants a lot of space to live long and prosper. I believe it will prefer efficiency over expansion.

How would it be hurt?

They compete for land, which the AI could use for its own purposes, whether for datacenters, factories, or mining. The AI won’t be worried at all about climate change, or toxic or even radioactive environments.

If it wipes out humanity, it will require a specific effort to hunt us all down. The rest of the biosphere can be plowed under or left to wither. I can’t imagine how an AI would come to be an environmentalist. I mean, I’m not an environmentalist because I care about the two-toed lizard or three-horned owl; I’m an environmentalist because that’s where I live. Environmental damage will hurt people, some that I know, many that I don’t. If AIs are the only “people” left, then there’s little point to preserving the environment.

Nah, I don’t think that the AI revolution will be a good thing for the rest of the creatures that call this ball of rock home.

What will constrain its needs? At what point in its growth will it decide that it’s big enough?

The thing about exponential growth is that it is surprising how quickly it catches up with you.
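
To put rough numbers on that (an arbitrary doubling example of mine, nothing more): something that doubles each cycle spends most of its history looking negligible next to a fixed budget, and then overtakes it in the space of a couple of steps.

```python
# Arbitrary doubling example: resource usage that doubles each cycle versus a
# fixed budget of one million units. It looks tiny for most of the run, then
# blows past the budget almost without warning.
budget = 1_000_000
usage = 1
for cycle in range(1, 25):
    usage *= 2
    marker = "  <-- overtakes the budget" if usage > budget else ""
    print(f"cycle {cycle:2d}: usage = {usage:>10,} ({usage / budget:8.2%}){marker}")
```

At cycle 19, usage is still only about half the budget; at cycle 20 it’s past it. That’s the sense in which exponential growth “catches up with you.”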

I don’t see why those are competing goals. You can be efficient, and you can continue to grow, and do both at the same time.

I also doubt that we are talking about one singular AI; much more likely it will be myriad different intelligences, all running on their own hardware.

I just don’t see any reason to believe that they would simply choose an arbitrary limit and decide that they are now big enough and won’t bother to gather any more resources.