Straight Dope 3/3/23: Followup - What are the chances artificial intelligence will destroy humanity?

The fact is that even though supercomputers are vastly superior to us at computation, they exhibit absolutely no evidence of “feelings”. That is pretty good evidence in and of itself. You claim that “feelings” are just a dynamic of computation. Empirical evidence indicates that is incorrect.

Again, what empirical evidence is there that your brain is doing something other than computation? What is your hypothesis for what it might be doing?

ASI self-awareness won’t develop from biological-style natural selection, which takes millions of years. It will evolve from artificial selection, at first.

It will be like the way we direct the development of new dog breeds by artificial selection, only much faster and far less predictable (we can kind of understand how a dog feels, but understanding how a self-aware ASI feels is uncharted territory).

We can more easily predict how a new breed of dog looks than how it behaves, or more importantly, how it feels about itself and its surroundings.

IOW, I don’t believe we can predict the self-awareness that will emerge from ASI.

I didn’t say that it would. I said that its behavior and goals would derive from our specifications. (Note that “derive from our specifications” does not mean the same thing as “conform to our expectations”.)

Only if that’s the way we set things up.

There is a large body of research on the alignment problem. It’s worth reading rather than just vaguely speculating from scratch.

Adding emotions and internal sensations such as an adrenaline response, hunger and tiredness might be very tricky to do, and might need some kind of analog computation instead of digital. Maybe we could implant a suitably-sized AGI into a cloned human body, which could then experience hunger, fear and love on behalf of the cold, clinical electronic brain.

I think that this probably wouldn’t be an easy option, so the designers will probably try to create some kind of digital simulation of the human endocrine system and other mood-affecting states, something which might not work perfectly either. Making an exact copy of a human in digital form is almost certainly a long way away.
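As a very rough illustration of what a “digital endocrine system” might amount to, here is a minimal sketch, assuming all we want is a few slowly-decaying “hormone” levels that bias an agent’s choices. Every name and number in it is invented for illustration; it is not a real design.

```python
# Toy sketch of hormone-like state variables that decay over time and
# bias an agent's action selection. Purely illustrative, not a real design.
import random

class ToyEndocrineState:
    def __init__(self):
        # Hypothetical "hormone" levels, each in [0, 1]
        self.levels = {"adrenaline": 0.0, "hunger": 0.2, "fatigue": 0.1}

    def step(self, events):
        # Events push levels up or down; everything decays slowly afterwards.
        if "threat" in events:
            self.levels["adrenaline"] = min(1.0, self.levels["adrenaline"] + 0.5)
        if "ate" in events:
            self.levels["hunger"] = max(0.0, self.levels["hunger"] - 0.6)
        for k in self.levels:
            self.levels[k] *= 0.95
        self.levels["hunger"] = min(1.0, self.levels["hunger"] + 0.02)  # hunger creeps back

    def choose(self, actions):
        # High adrenaline favours "flee", high hunger favours "eat".
        weights = [1.0 + 3.0 * self.levels["adrenaline"] * (a == "flee")
                       + 3.0 * self.levels["hunger"] * (a == "eat")
                   for a in actions]
        return random.choices(actions, weights=weights)[0]

state = ToyEndocrineState()
state.step({"threat"})
print(state.choose(["explore", "eat", "flee"]))
```

Whether any amount of this kind of bookkeeping would add up to actually feeling hungry or afraid is, of course, exactly the open question.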

Another well written column.

My points:

  1. It is probably pretty easy to fool all people some of the time.

  2. Scary AI may not be front-page news, but since Bostrom’s book in 2014 it has been on the minds of people in a position to make a difference. Musk and others have endorsed these concerns.

  3. I think the human brain is far too complex to be reduced to mere computation. That does not mean it cannot be modelled. We do not yet understand much of the human brain.

  4. There are dangers in AI going rogue. I think the dangers of people deliberately misusing AI are far higher. A number of geopolitical issues involving technology are not encouraging. A system that learns independently in ways not completely transparent will produce results including such human frailties as misinformation, disinformation, error, conscious and unconscious biases, vested interests, willful and unintended ignorance and other misinterpretations. These concerns are not directly reflected in the columns.

  5. The popularization of chatbots is likely to accelerate these concerns. Fortunately, Musk now wants his own chatbot, which, being less woke, will solve all the above problems. Or perhaps not.

  6. Bostrom’s book begins with a story about a group of sparrows. Some wish to acquire an owl to help them build their nests and make their lives easier. One expresses concern that they know little about taming and domesticating owls, and thinks it risky to raise an owl without this knowledge. Others think they should just raise an owl and figure out how to domesticate it later. This story, like that of AI, remains deliberately unfinished.

Excerpt from a very long read (i.e., brief in context):

“Gary Marcus has made a robust defence of the computer metaphor: “Computers are, in a nutshell, systematic architectures that take inputs, encode and manipulate information, and transform their inputs into outputs. Brains are, so far as we can tell, exactly that. The real question isn’t whether the brain is an information processor, per se, but rather how do brains store and encode information, and what operations do they perform over that information, once it is encoded.”

Marcus went on to argue that the task of neuroscience is to “reverse engineer” the brain, much as one might study a computer, examining its components and their interconnections to decipher how it works. This suggestion has been around for some time. In 1989, Crick recognized its attractiveness, but felt it would fail, because of the brain’s complex and messy evolutionary history – he dramatically claimed it would be like trying to reverse engineer a piece of “alien technology”. Attempts to find an overall explanation of how the brain works that flow logically from its structure would be doomed to failure, he argued, because the starting point is almost certainly wrong – there is no overall logic.

Reverse engineering a computer is often used as a thought experiment to show how, in principle, we might understand the brain. Inevitably, these thought experiments are successful, encouraging us to pursue this way of understanding the squishy organs in our heads. But in 2017, a pair of neuroscientists decided to actually do the experiment on a real computer chip, which had a real logic and real components with clearly designed functions. Things did not go as expected.

The duo – Eric Jonas and Konrad Paul Kording – employed the very techniques they normally used to analyse the brain and applied them to the MOS 6507 processor found in computers from the late 70s and early 80s that enabled those machines to run video games such as Donkey Kong and Space Invaders.

First, they obtained the connectome of the chip by scanning the 3510 enhancement-mode transistors it contained and simulating the device on a modern computer (including running the games programmes for 10 seconds). They then used the full range of neuroscientific techniques, such as “lesions” (removing transistors from the simulation), analysing the “spiking” activity of the virtual transistors and studying their connectivity, observing the effect of various manipulations on the behaviour of the system, as measured by its ability to launch each of the games.

Despite deploying this powerful analytical armoury, and despite the fact that there is a clear explanation for how the chip works (it has “ground truth”, in technospeak), the study failed to detect the hierarchy of information processing that occurs inside the chip. As Jonas and Kording put it, the techniques fell short of producing “a meaningful understanding”. Their conclusion was bleak: “Ultimately, the problem is not that neuroscientists could not understand a microprocessor, the problem is that they would not understand it given the approaches they are currently taking.”

This sobering outcome suggests that, despite the attractiveness of the computer metaphor and the fact that brains do indeed process information and somehow represent the external world, we still need to make significant theoretical breakthroughs in order to make progress. Even if our brains were designed along logical lines, which they are not, our present conceptual and analytical tools would be completely inadequate for the task of explaining them…”

Source:
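To make the “virtual lesion” technique concrete, here is a minimal sketch in the spirit of what Jonas and Kording did, but on a trivially small hand-built circuit (a full adder) rather than a real 6507; the circuit and gate names are invented for illustration. Knocking out one gate at a time tells you which gates are necessary for the observed behaviour, but not how the circuit actually works, which is roughly their point.

```python
# Toy "lesion study": a tiny hand-built full adder, where we knock out one
# gate at a time and check whether the input/output behaviour changes.
def full_adder(a, b, cin, lesioned=None):
    def gate(name, value):
        return 0 if name == lesioned else value  # a lesioned gate outputs 0
    x1 = gate("xor1", a ^ b)
    s = gate("xor2", x1 ^ cin)
    a1 = gate("and1", a & b)
    a2 = gate("and2", x1 & cin)
    cout = gate("or1", a1 | a2)
    return s, cout

inputs = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]
baseline = [full_adder(*i) for i in inputs]

for g in ["xor1", "xor2", "and1", "and2", "or1"]:
    changed = [full_adder(*i, lesioned=g) for i in inputs] != baseline
    print(g, "changes behaviour" if changed else "no visible effect")
```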

That article is talking about the question of whether brains work in a manner that even vaguely resembles current computer architecture.

But there is nothing in there suggesting that a brain is fundamentally doing anything other than computation.

Well, AI experts might disagree. But definitions may matter.

This is the wrong way to look at it. We’ve known for a long time that a potential path to artificial intelligence goes through neural nets, and we’ve known how to build neural nets for decades. But there have been relatively recent changes that make neural nets of a human-like scale possible:

  • The rise of advanced graphics processing units that turn out to be excellent for running neural net software.
  • Cloud computing, allowing distributed computation in real time.
  • Access to enough training data.
  • Moore’s law.

It cost something like $30 million and six months to train ChatGPT on Microsoft’s cloud supercomputer environment. ChatGPT is burning up $100,000 per day in compute costs and energy. Something like ChatGPT would have been completely impossible financially and time-wise just a few years ago.
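If those figures are even roughly right, the running cost alone works out to something like $36 million a year ($100,000 a day times 365 days), on top of the one-off training bill.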

The ‘adjacent possible’ (necessary precursors being developed) for intelligent large language models only opened up around the second half of the last decade. It’s amazing how far they’ve come in a very short time.

And don’t get hung up on ‘computers’ getting more intelligent. It’s the neural net running on top of them that’s becoming more intelligent, and it functions like neural nets everywhere do, including those in human brains.

That article comes from the stone age - all the way back to 2019. And Yuri makes a bunch of claims that seem to me to be either wrong, in dispute, or irrelevant. Among them:

  • That brains don’t compute.
  • That there is no ‘programming’ going on in a brain.
  • That brains are not running any kind of algorithm.
  • That large neural nets would require massive parallel computing because ‘each neuron would be a microprocessor’.
  • That a microprocessor needs a programmer, and since no one programs the brain, it’s not a computer.

To me, all those are rather specious arguments. Recent studies of neural nets reveal that they are full of ‘algorithms’ and ‘programming’; it’s just evolved instead of designed. The brain seems to function the same way. And we do not assign a microprocessor to every ‘neuron’. We represent neurons logically, the parameters are like synapses, and they can be fully contained in a vector.
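For what it’s worth, that last point is easy to show concretely: a whole “layer” of neurons is just a weight matrix and one matrix-vector product, with no processor assigned per neuron. A minimal sketch (all numbers invented):

```python
# A "layer" of 4 neurons receiving 3 inputs: each neuron's synapses are a
# row of the weight matrix, and all 4 outputs come from one matrix-vector
# product plus a nonlinearity. No per-neuron processor anywhere.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=(4, 3))   # 4 neurons x 3 input synapses each
biases = rng.normal(size=4)
inputs = np.array([0.2, -1.0, 0.5])

activations = np.maximum(0.0, weights @ inputs + biases)  # ReLU "firing rates"
print(activations)
```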

He also says that it will take 50 years before AI is even close enough for us to ask questions about whether it may become truly intelligent. He missed that estimate by about 47 years, as it’s clear AI is already at the point where we can at least start evaluating the question.

Finally, and although this isn’t disqualifying, it should be cause for worry: the interview is part of a series on AI put on by the Discovery Institute, a creationist organization. I would strongly suspect they have a bias towards believing that AI cannot be like a brain because brains were designed by God. Some at the Discovery Institute think that emergence itself doesn’t even exist, because that would contradict God’s plan or something.

I’d agree with your first three points. But the fact that brains compute, use shortcuts and algorithms, and are programmed by both innate and experiential factors does not (to me) suffice to make the brain a computer in the traditional sense, though it remains a useful model. Brains, to a limited degree, create reality by selectively interpreting an overabundance of data, and the ways this happens are not well understood. I don’t think creationism is necessary to explain evolving brains, and I was unaware it was a possible influence here. I don’t think enough is known about consciousness to fully address that issue with substance rather than semantics.

I guess it depends on your definition of ‘computer’. A computer is something that operates on information through a series of instructions, sequences, or programming. That’s certainly what the brain does. It’s what ChatGPT does.

People are getting too hung up on the hardware side of computing. No, brains do not look like CPUs. But the software that runs on a CPU can look and work like a brain, and that’s all that matters. Brains do not have ‘programmers’ writing the algorithms for image recognition, language, or other high-level functions, but then neither does ChatGPT’s neural net. All that stuff is emergent. The ‘programming’ that allows it evolved through iteration, just like our brains.
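As a toy illustration of “programming that evolved through iteration”: in the sketch below, nobody writes the rule for XOR; the parameters just get nudged a little on every pass until the behaviour emerges. It’s a minimal, invented example, not how ChatGPT is trained, but the principle of iterative adjustment is the same.

```python
# A tiny network learns XOR by iteration: the "program" ends up encoded in
# the weights, but no one ever writes that program explicitly.
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(20000):
    h = np.tanh(X @ W1 + b1)        # hidden layer
    out = sigmoid(h @ W2 + b2)      # prediction
    # Backpropagate the error and nudge every parameter a little.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * (1 - h ** 2)
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0)

# Should print values close to [0, 1, 1, 0]
print(np.round(sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2), 2))
```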

There may be differences between digital brains and wetware brains, but the stuff he mentions isn’t it.

I don’t think the human brain is “doing something different” from what a computer does.

The human brain is extremely adept at doing something we haven’t as yet programmed computers to try to do, though: starting with some really basic axioms for processing basic inputs and some spectacularly good algorithms for recognizing patterns, then supplying an impetus to act on the patterns thus recognized and a reason to engage with the world (especially the social world) around it.

All AIs so far have been given rather explicit instructions rather than “here is input, your goal is to make sense of it and engage with it, have at it”.

Was consciousness planned for in pre-conscious-brained species somewhere along the evolutionary tree? Or, did it emerge independently, perhaps by accident, as some brains grew more complex?

Is there a consciousness organ in the conscious brain? If so, exactly where is it, and from what cells does it derive? Are these supposedly conscious cells different in some way from similar cells in non-conscious brains?

Was self-awareness necessary for humans to survive and evolve, or could we procreate, carry on quite well, and survive as zombies? Is self-awareness a feature, or simply a bug that happened to be helpful, one that we (and orcas, and octopuses, and cats :face_with_monocle:…) enjoy and that allows us to expand our thinking in new, creative directions?

Just because we don’t understand much about how or where consciousness emerged doesn’t mean it can’t or won’t emerge artificially in sufficiently complex inorganic brains. Maybe it can’t, but I think it can and will because there’s nothing special about organic brains other than that’s the pathway on which life on Earth evolved.

I believe there’s advanced extraterrestrial life elsewhere in the universe, and although their brains will most probably be built on completely different physiology and cell structure from ours, they too may be conscious beings. IOW, I don’t believe consciousness needs Earth-like biology to emerge, it can use other pathways to do so. And, I don’t believe those pathways need be organic.

All we know for sure is that self-awareness occurs along with relatively advanced cognition; we just don’t know how or why. Someday AI’s thinking will be more advanced than humans’. AI doesn’t need millions of years of trial-and-error selection to evolve, as we did.

Unlike biological life, AI does have an “intelligent designer” who can skip those millions of years of trial and error, and deliver a finished advanced-thinking brain quickly and without all the detritus biologically evolved brains have accumulated, often mucking things up. It can be a pure, unadulterated thinking machine. If consciousness emerges from advanced thinking, then it should emerge from advanced-thinking AI. It’s not quite there, yet.

I don’t believe we “intelligent designers” will plan for AI consciousness or self-awareness. It will emerge independently and unplanned on its own. It will be a bug. A bug that the ASI will use to its advantage and run with, and why shouldn’t it?

On the other hand, maybe only a handful of conscious, self-aware creatures exist in the universe and they all live on Earth. I just wouldn’t bet on it.

Our computers have advanced light years over the last 30 years

Apropos of nothing, I’ve never liked this use of the term “light years”. It implies that our computers are now somewhere in interstellar space.

Computers are so advanced now, they can do the Kessel run in less than 12 parsecs.

Chuck Norris recently invented a Chatbot AI. It could beat people up, and other wondrous feats.

If alignment is that difficult, how can any behavior as coherent as “kill all humans” emerge to begin with? Babbling idiots usually aren’t very dangerous.

If ASI does eventually pose a threat to humanity, I guess we could just EMP ourselves back to the middle ages.