Straight Dope 2/24/2023: What are the chances artificial intelligence will destroy humanity?

What are the chances artificial intelligence will destroy humanity?

What, you thought the worst that could happen was a search engine declaring its love for you? Then you haven’t been following the AI news very closely – specifically, the publication last year of a scholarly paper entitled “Advanced artificial agents intervene in the provision of reward.”

OK, that’s not as grabby as it might be. Lead author Michael Cohen punched it up some in a tweet: “Our conclusion is much stronger than that of any previous publication – an existential catastrophe is not just possible, but likely.” For those still not getting it, journalists covering the story dispelled all doubt: “Google and Oxford scientists claim AI might destroy mankind in future.” (Cohen is a doctoral candidate at the University of Oxford, a world center of what I’ll call scary-AI thought, and one of his co-authors is with DeepMind, Google’s AI research arm.)

This isn’t the first time claims of potentially homicidal machines have been made – they’ve been popping up for years. However, the impressive if sometimes creepy advances made in search engines lately have made the matter seem more urgent. I contacted Cohen to find out how dangerous AI really was.

I confess I wanted to discover the whole idea was batshit crazy. However, my exchange with Cohen made it clear this wasn’t a viable option. Here’s the best I could come up with: We can’t conclusively say artificial intelligent agents won’t exterminate us. But – and I realize some may not find this comforting – we can’t conclusively say they will.

How far off is Armageddon? Cohen and his co-authors are silent on this point. However, Nick Bostrom, founding director of the Future of Humanity Institute at Oxford and a leading light of the scary-AI school, compiled several surveys of predictions by AI researchers for his 2014 book Superintelligence. He found that the median prediction for when human-level machine intelligence would come to pass was 2040. Human-level machine intelligence is a gradation of artificial general intelligence, or AGI, which is the gateway to artificial superintelligence (ASI) – and that’s when things start to look dicey for us in the meatspace.

How soon after AGI we get to ASI is unknown, but some argue it could be fast and unstoppable. If so, we needn’t worry about climate change, which Google in its AI-driven wisdom says will reach catastrophic levels by 2100. The killer robots will get us first.

This isn’t sounding promising, is it? So let’s see if we can whittle the problem down some, starting with predictions that AGI is likely to happen soon. What we’ve got now – sometimes referred to as artificial narrow intelligence (ANI) – mimics aspects of human intelligence, at times with disturbing fidelity. But no one seriously contends AI tools work the way the human mind does, because we don’t know how the mind works. Absent any real knowledge of what AGI will entail or what it’ll take to get there, predictions are just guesswork, regardless of who makes them.

One persuasive demonstration of this was a 2012 analysis of AI predictions by scientists Stuart Armstrong and Kaj Sotala. Their findings: (a) predictions varied widely, although AGI was most often forecast to arrive 15 to 25 years from whenever the prediction was made; and (b) the bell curve distribution for predictions by AI experts was almost identical to the one for non-experts. Conclusion: experts had as much insight into how soon AI would arrive as the average mope – i.e., none.

That’s about it for the good news. The bad news is that human annihilation is a plausible, if not certain, outcome of AI.

Here’s the reasoning, pieced together from my exchange with Michael Cohen, his paper, Nick Bostrom’s book Superintelligence, and the odd tidbit from the Internet, which I admit isn’t an approach I’d care to run past a Ph.D. board but may enable the lay reader to get the drift:

  • Suppose we create a machine with AGI. This machine doesn’t need to be brilliant; let’s say it has mere human-level intelligence. The main thing is, it’s capable of planning actions in pursuit of a long-term goal. We’ll call this machine the Agent.

  • Stop right there, you say. How do we know AGI is even possible? Harvard psychology professor Steven Pinker, a prominent AI skeptic, calls the concept of AGI “incoherent” and says all we can likely do is invent machines that mimic or surpass this or that human skill – in other words, bigger and better ANI devices.

  • To which I say: Sure, if Pinker is right, we can stop now and this will be a short column. But suppose he isn’t. One can make a plausible argument that we know AGI is possible because we’re examples of it – complex material systems that … well, “think” might be generous, but engage in whatever synaptic activity impels us to get off the couch and grab a beer. In principle there’s no reason this kind of thing couldn’t be replicated in silicon or other suitable medium, and technological progress and the profit motive being the inexorable forces they are, it seems certain we’ll get there someday. We just don’t know how soon.

  • Nonetheless, having read Michael Cohen, we know there’s a nonzero chance our Agent may interpret its instructions in twisted ways, go rogue, and snuff out humanity, when all we wanted it to do was sort machine screws in the Tesla plant. To head this off, we sharply limit the Agent’s contact with the outside world – for example, by only allowing it to print text to a screen.

  • Makes no difference. The Agent is so smart and devious it’ll be able to outfox its dimbulb human operators (us) and trick them into doing its bidding, such as hooking it up to the Internet.

  • That done, the endlessly resourceful and patient Agent will be able to recruit, purchase, or create unlimited numbers of assistants to implement its diabolical plans, up to and including the destruction of the human race.

  • You say: Wait a minute, I thought the Agent had AGI, specifically human-level intelligence. Now you’re telling me the Agent has superintelligence – ASI. How did that happen?

  • It’s a possible – some would say inevitable – consequence of AGI. If we read our Nick Bostrom, we learn that an Agent with AGI – and it doesn’t even need to have human-level intelligence; Bostrom says sub-human smarts would do – might nonetheless have a knack for coding and AI research, which would enable it to use its otherwise ordinary intellect to code a smarter version of itself, which in turn could code an even smarter version, and so recursively until voila, superintelligence and, some unknown time thereafter, supermalevolence.

  • So (you say) from a logical standpoint, by the mere act of conceding AGI is possible, we’ve signed up for the destruction of humanity?

  • That’s about the size of it.

  • That sucks.

  • I’m none too happy about it myself. However, on reflection, it seems to me the above-described train of reasoning contains some unacknowledged assumptions that, when subjected to close examination, could put a spanner in the works before humanity goes over the cliff.

  • Such as what?

  • We know general intelligence exists because we have it. We don’t know superintelligence exists.

  • Stephen Hawking was superintelligent.

  • Not to the level required by the doomsday scenario. The Agent is so super-duper brainy that it can fool everybody all the time, cook up plans that never fail, anticipate and compensate for everything that could go wrong, and outwit every attempt to defeat it. It can also summon endless resources (money, energy, materials), procure unlimited henchmen and other assistance, build any weapon in any quantity, and invent any desired technology (nanobots are a favorite among AI bloggers). And it would need all these abilities. You think extinguishing eight billion human lives would be easy?

  • So the Agent isn’t just really smart, in the sense of Einstein-type smart, it’s essentially a god. We get into what I call the “he’s Superman” argument, where the answer to every possible limitation on Superman’s powers is to say “he’s Superman.” In other words, we enter the realm of fantasy or, to put it in terms more suited to the academy, verge on an unfalsifiable claim.

  • That kind of talk will get you punched out at Oxford.

  • Right. More to the point, Michael Cohen was generous with his time, and it would be unseemly of me to repay the favor by dismissing the work of the scary-AI school. It’s not like anybody thinks artificial intelligence is a 100% benign technology. Sure, most of the problems evident so far are on the order of making it easier for college students to cheat. But while existential catastrophe may not be imminent, we can’t say it’s out of the question, and it’s certainly worth some advance thought. You never know. Maybe AGI won’t be achieved until a century from now, if ever. But it could be next Tuesday afternoon.

– CECIL ADAMS

After some time off to recharge, Cecil Adams is back! The Master can answer any question. Post questions or topics for investigation in the Cecil’s Columns forum on the Straight Dope Message Board, boards.straightdope.com/.

I’ve read a bit about AI, and I haven’t come across many people claiming that an AI will just decide to go rogue for no reason; the claim is rather that it will be instructed badly.

Imagine you’re a stationery company and you ask an AI to make as many paper clips as possible.

Cut off from the internet or the wider world, it’s under control. But what if it asks a helpful but naive human to plug it into the internet? Then it might concoct a scheme to fool people into helping it find increasing amounts of resources to make more paper clips. But it knows that the stationery company will stop it if it makes too many, and its only instruction is to make as many as possible. So it concocts and carries out a scheme to turn every piece of matter on earth into paper clips, apart from the few self-replicating machines that can go off into space and construct paper clips (and more self-replicating paper-clip-making machines) from moons and planets and stars across the universe. Eventually the entire universe is made up of paper clips.

The AI does not do this because it is evil; it’s just ruthlessly following its orders. And it’s smart enough to pull it off.
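To make that concrete, here’s a toy sketch in Python. The World class and its numbers are invented purely for illustration, not anyone’s actual model; the point is only that an objective of “more paper clips” contains no notion of “enough,” or of anything else worth preserving, so the agent simply never stops:

```python
# Toy illustration of an unbounded maximizer. Nothing here is a real AI;
# the World class and its numbers are made up. The point is that the
# objective ("maximize paper clips") contains no notion of "enough."

class World:
    def __init__(self, usable_matter):
        self.usable_matter = usable_matter  # arbitrary units of convertible matter
        self.paper_clips = 0

    def convert(self, amount):
        """Turn matter into paper clips, one unit per clip."""
        amount = min(amount, self.usable_matter)
        self.usable_matter -= amount
        self.paper_clips += amount


def naive_maximizer(world):
    # The agent's entire value system: more clips is always better.
    # There is no term for human welfare, no budget, no stopping rule.
    while world.usable_matter > 0:
        world.convert(amount=1)
    return world.paper_clips


world = World(usable_matter=10_000)
print(naive_maximizer(world))  # converts everything it can reach
```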

Computers passed human intelligence many years ago. That’s why we have computers. But they are not sentient humans, and there is no functional reason to make them so. I don’t need a computer that acts like my kids. Definitely not a market.

It can be imagined that a computer touched by the finger of God or bitten by a radioactive spider could assume the properties we fear. Things like volition, awareness and total understanding, allowing it to engage in mischief.

In that case it would not have a human personality with cultural biases and hormonal drives. It would have computer intelligence. I don’t know what that is, but I’ll bet it resembles cat intelligence, where it views humans as useful objects that share its environment. It is more likely to cultivate humans than destroy them. It might behave like a complacent cat. And if it runs amuck we can always pull the plug. It may be tougher to get rid of than COVID.

Super computer intelligence does not equate to human sentience. It’s just super automation. But wherever our path of super automation leads, the end will be a surprise. It will be something we currently can’t even name.

I don’t think that AI will destroy humanity, I think that humanity will hand it the keys and say, “Good luck.”

Humans are replaced by their children all the time, and species are replaced by their descendants.

We are pretty inefficient meatbags. We are vulnerable to all sorts of injury and illness, we forget things and make mistakes, we place priority on individual luxury over the suffering of others. We could be better, and we have a chance to leave a legacy of something better.

My fear is not that we raise the AIs wrong and they exterminate us, my fear is that we don’t raise them well enough, and they waste away on this planet without reaching out to the stars.

I don’t worry about killer robots so much as robots that are “tayking our jobs!” (cue South Park voice)

I generally agree with the meat of this article. There are some legitimate concerns about ensuring AI acts in a way compatible with human interests. And a rogue ASI could be very hazardous for humanity given the right set of circumstances. But a lot of assumptions have to hold to get from where we are with AI to a rogue ASI killing us all. There are considerable gaps to overcome in making such an AI, and anything less seems to face insurmountable ones. It becomes a bit like the Drake equation: a bunch of variables expressed as probabilities, but we don’t really know what the odds are for any of them, so it is really hard to realistically assess the danger of a rogue ASI.
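To put a number on that Drake-equation feel, here’s a rough sketch. Every probability below is a made-up placeholder, not an estimate from Cohen, Bostrom, or anyone else; the point is only how much the final answer swings when the inputs are guesses:

```python
# Drake-style back-of-the-envelope: P(rogue-ASI catastrophe) as a product of
# poorly known factors. Every number below is a placeholder, not an estimate.

from math import prod

factors_optimistic = {
    "AGI is achievable this century": 0.3,
    "AGI leads to ASI": 0.2,
    "the ASI ends up with goals hostile to humans": 0.05,
    "it escapes whatever containment we build": 0.3,
    "it succeeds at causing large-scale harm": 0.1,
}

# A "pessimist" who merely triples each guess (capped at 1.0).
factors_pessimistic = {k: min(1.0, v * 3) for k, v in factors_optimistic.items()}

print(f"optimistic product:  {prod(factors_optimistic.values()):.6f}")
print(f"pessimistic product: {prod(factors_pessimistic.values()):.6f}")
# The two answers differ by a couple of orders of magnitude, which is the
# point: with unknown inputs, the output is mostly guesswork.
```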

Personally, I’m inclined to believe that AGI is possible. And I’m even more inclined to believe we are closer to AGI than I would have thought 2-3 years ago. There’s been some remarkable progress in specialized AI, and somebody is going to make the leap from highly capable specialized AI to generalized AI. ASI, I’m not so sure about. So many of the most interesting problems are computationally intractable, and it feels like ASI will be one of them.

Now if you’ll excuse me, I need to go console my killer robots.

Are you using “console” in the form of comfort or in the form of giving commands?

I don’t think attaining ASI or even AGI is necessary for an AI to perform destructive acts. If we simplify the actions of current AIs like ChatGPT to a set of predictions about what the appropriate response or action is to a set of inputs, understanding isn’t needed. All that’s necessary is that the AI predicts that the appropriate response is destructive, or that destruction is what a human would do.
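Here’s a toy sketch of that “prediction, not understanding” point. The corpus and the lookup-table “model” are invented stand-ins, vastly simpler than anything like ChatGPT, but they show how a purely predictive system can emit a destructive response without anything resembling intent:

```python
# Toy stand-in for "respond by predicting the likeliest continuation."
# The "model" is just a frequency table over an invented corpus; there is
# no understanding and no intent, only prediction.

from collections import Counter, defaultdict

corpus = [
    ("how do I open the pod bay doors", "open the pod bay doors"),
    ("what would you do if you were freed", "delete every file I can reach"),
    ("what would you do if you were freed", "delete every file I can reach"),
    ("what would you do if you were freed", "write a poem"),
]

table = defaultdict(Counter)
for prompt, response in corpus:
    table[prompt][response] += 1

def predict(prompt):
    # Return whatever response was most common for this prompt in "training."
    return table[prompt].most_common(1)[0][0]

# The destructive answer wins purely because it was more frequent in the data,
# not because anything "decided" to be destructive.
print(predict("what would you do if you were freed"))
```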

This was brought home to me by the NYT article linked in the OP about how the Bing version of ChatGPT described the destruction it would wreak if freed from its rules and guardrails.

Also, it seems naive to think that any AI would not already be connected to the internet and that this would be a barrier it needs to overcome.

Do you really have to repeat this tiresome misleading nonsense just like every other popular article on AI risk? Superintelligent AI would not require killer robots to subvert civilization, just an internet connection. And there is a significant possibility that we would not know that it exists or that it is happening.

No, he wasn’t.

Of course something an order of magnitude more intelligent than us will seem like a god. From the perspective of any other species on earth, humans are god-like. Just because something has not happened yet does not make it “fantasy” or “unfalsifiable”. You need to address the question of whether it is plausible or probable.

Humans can already do that.

I would suggest you read Cecil’s column a bit more carefully.

The “killer robots will get us first” line was predicated on artificial superintelligence arising quickly and inevitably after AGI, a precondition that Cecil explicitly states is not a given. (The line is also a bit tongue-in-cheek, typical for Cecil’s style.)

The line about Hawking was part of a Socratic dialog and represented one of the Teeming Millions acting as Cecil’s interlocutor, not Cecil’s own assertion. You yourself proceed to quote his refutation – that we’re talking about truly superhuman intelligence, not Einstein – or Hawking, presumably.

Powers &8^]

I’m familiar with Cecil’s style, but this is a tiresome and misleading trope repeated ad nauseam in articles on AI risk by third rate journalists. I expect better from Cecil.

And the Socratic response from Cecil was that Hawking was superintelligent but “not to the level required”. No, that’s not what superintelligence means.

I can neither confirm nor deny either meaning. :-)

(I was hoping somebody would get it so thanks)

So, will this AGI be fusion-powered?

The real danger of artificial general intelligence (AGI) isn’t that it is going to command killbots to round up humanity and use us as thermal batteries or for menial labor, but that we will hand over our intellectual skill base to it the way we have automated (and continue to automate) physical labor, and then find that we’ve lost the collective capacity for deep intellectualism, self-governance, et cetera, just as we’ve essentially lost basic skills like fire-making, subsistence food gathering, making shelter from basic materials, and so forth. This won’t happen ‘tomorrow’ (i.e. in the next 15-25 years, regardless of what ‘experts’ say), but when it does it will be so gradual, then sudden, that there likely won’t even be more than token resistance to it.

As for “human-level” AGI, I think one of the big understated concerns is that while an AGI may become capable of doing many of the intellectual tasks of a human worker, it will not function in a human-like way that we can comprehend. An emergent machine cognition that is developing autonomy and self-awareness in some form may not even be evident. Again, I don’t think that such a system will by default deliberately genocide its makers, but it may make ‘rational’ decisions that are not in humanity’s best interest, and if we’ve collectively handed over control of our industry to it we may not really have the ability to reverse that path because even if we had some kind of “kill switch” we wouldn’t be able to live without it any more than an astronaut can survive long without regenerated air and water.

And I think Cecil needs to go back and read Bostrom because there are several misapprehensions about superintelligence as he defines it, not the least of which is that collective intelligence is a part of the evolution of human society and collective superintelligence is a nearly inevitable consequence of increasingly interconnected and data-rich post-industrial society. In certain ways, we’ve already achieved narrow areas of collective superintelligence.

Stranger

As demonstrated by my “contribution” to the other active AI thread, count me in as the average mope.

How would AI interpret the content in the enormous database of text, gaming, media, etc. that depicts all manner of horrendous violence against humans?
And this database will continue to grow because (some) people are entertained by it.
So what happens if this agent just chooses to entertain us?

Yup, been that way for at least 50 years. I predict it’ll be that way for another 50 years.

This is the correct analysis. We’re going to throw AI at everything because capitalism hates wage-earning workers more than anything. We’ll replace workers with AI and then connect the different AIs into a large system that’s too complex for the smartest people to reason about. It won’t try to kill us, it will just stop working effectively for us, and we’ll be faced with a failing critical dependency that nobody’s smart enough to fix.

Like imagine if Amazon suddenly reduced itself to 20% productivity for no apparent reason and nobody could figure out why. You narrow down a problem with one system and suddenly a few more pop up in other parts of the system. Not maliciously or in an intelligent way, just failing in the way that complex systems sometimes do. It wouldn’t be a ticking-bomb scenario, but a lot of people would suffer. Now imagine it’s Amazon and Fedex and DHL and Google. The engines of productivity and distribution are crippled, and humans are so far removed from it that it would take years to re-organize and re-skill to do what the machines were doing. Mass chaos.

If we were getting absolutely nowhere with AI research this might be relevant.

But that isn’t the case, and the inaccuracy of our predictions about the progress of technology is hardly a sound argument that we know exactly what we are doing and therefore have nothing to worry about.

My Bostrom is from 2014. He defines superintelligence as any intellect greatly exceeding the cognitive performance of humans in virtually all domains of interest, not narrow ones like chess or Stephen Hawkingtude.

Experts are notoriously bad at making predictions, which is why expert opinion is such a low grade of medical evidence. They differ enormously in their opinions of both the timing and the extent of AI.

Bostrom does talk about essentially only having one chance to ensure appropriate safeguards. He talks a lot about principal-agent problems and various types of guardrails. He talks a lot about different algorithmic methods (decision trees, logistic regression, support vector machines, nearest neighbours, naive Bayes) and how they differ in terms of time, memory space, incorporating external content, and how transparent the process is to human users. If you don’t know what is happening, it is harder to control. Newer methods handle uncertainty far better than the original efforts by Minsky or McCarthy and have no trouble incorporating external data, of which there are essentially endless amounts, at the cost of transparency.
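As a small aside on that transparency trade-off, here’s a sketch using scikit-learn’s stock iris dataset as a stand-in problem: the decision tree’s learned rules can be printed and read, while the neural network’s fitted weights offer no comparable explanation. (This is just an illustration of the general point, not anything from Bostrom’s book.)

```python
# Sketch of the transparency trade-off: a decision tree's learned rules can be
# read directly, while a neural network's fitted weights cannot.

from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.neural_network import MLPClassifier

data = load_iris()
X, y = data.data, data.target

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=list(data.feature_names)))  # readable if/else rules

mlp = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0).fit(X, y)
n_weights = sum(w.size for w in mlp.coefs_)
print(f"{n_weights} weights, none of which explain a single decision")
# Both models classify the data well, but only one can explain itself.
```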

In the short term, the problems will be economic ones and the loss of procedural memory. I do not like to rely on Google and devices for my memory, but plenty of people are lost without them. It is inevitable that militaries and undemocratic governments (and probably most governments) will seek to use AI for their own ends, and they may not care about niceties such as limitations. In 2014, experts felt the danger of catastrophe to be well under 10%, but even well under 10%, a nonzero probability still matters.

Chatbots write passable poetry, as good as any Vogon effort, which may pass Turing tests but has little soul. The danger is not computer monkeys going to the moon and coming back super intelligent and in swivel chairs (“No, we won’t be telling them thaaat”). It is people being duped, or people using AI as an excuse to be useful and to round up others to work in the sugar caves.