Straight Dope 3/3/23: Followup - What are the chances artificial intelligence will destroy humanity?

That would be highly relevant, if there were any evidence that human brains can deal with these problems.

He’s a creationist. That book is published by the Discovery Institute.

There’s a decent summary of Penrose’s idea that consciousness derives from quantum processes on his Wikipedia page. It is not supported by any evidence, and Tegmark showed that decoherence theory implies that Penrose’s ideas are not possible. I think it’s fair to describe Penrose’s ideas in this specific area as “fringe”, although he’s obviously a legendary figure.

Roger Penrose - Wikipedia

The Penrose–Lucas argument about the implications of Gödel’s incompleteness theorem for computational theories of human intelligence has been criticised by mathematicians, computer scientists and philosophers. Many experts in these fields assert that Penrose’s argument fails, though different authors may choose different aspects of the argument to attack.

According to chatbots:

“what aspects of neuroscience are not explained by computation?”

While computational approaches have provided tremendous insights into many aspects of neuroscience, there are still several aspects of neuroscience that are not fully explained by computation. Here are some examples:

  1. Consciousness: One of the biggest challenges in neuroscience is to understand how the physical processes in the brain give rise to subjective experiences like consciousness. Although there are computational models of consciousness, they have not been able to fully capture the complexity of this phenomenon.
  2. Emotions: While there are computational models of emotion, they are limited in their ability to explain the full range of emotional experiences that humans and other animals have. For example, emotions often involve complex social interactions that are difficult to model.
  3. Plasticity: The ability of the brain to adapt and change in response to experience, known as plasticity, is still not fully understood. While there are computational models of plasticity, they do not capture the full range of phenomena observed in the brain.
  4. Neural coding: While computational models of neural coding have made significant progress, there is still much that is not fully understood about how the brain represents information. For example, there is still debate about how the brain encodes complex features like faces or scenes.
  5. Neuropathology: Although computational models have been used to study neurological disorders like Alzheimer’s disease and Parkinson’s disease, there is still much that is not understood about the underlying pathology of these conditions.

Overall, while computational approaches have made significant contributions to our understanding of the brain, there are still many aspects of neuroscience that are not fully explained by computation.

That sounds like the ‘God of the gaps’ style of argumentation that creationists use.

No, we can’t (yet) explain everything about how the brain works. But every time we DO learn how something works that was supposedly unique to the brain, or we manage to do something in AI that was previously claimed to be unique to human brains, the goalposts get moved. For example, the Turing test was considered a decent test of AI until we developed AI that comes close to passing it. Now people have already started to downplay it as any kind of meaningful test. Instead, we now must believe that even if an AI passes it, it’s just a ‘trick’ - an unthinking deterministic response meant to masquerade as thinking.

There are just a lot of people invested in the notion that humans are in some way special, that brains have a unique ‘qualia’ that can’t be captured. Obviously creationists feel this way, but I am seeing a lot of other people flatly disbelieving that the brain computes results, that AIs may in fact be doing what brains do, and that some day there may be a thinking, feeling being that evolved from silicon rather than from biology.

There may in fact be something about the brain that enables consciousness but which is not captured in AI designs. Maybe something having to do with the chemical soup the brain lives in, or some unique structures we don’t understand, or the limbic system, or whatever. But we have zero evidence for that right now. As far as we’ve gone with brain research and AI research, we have not found anything disqualifying like that.

And AIs are already doing things that many people thought were either impossible or a long way into the future just a few years ago. For instance, the scientist @Crane linked to said in 2019 that we wouldn’t have AIs sophisticated enough for us to even ponder whether they could pass the Turing test for at least 50 years. He missed that estimate by 47 years.

Though some of those are fair points, the gaps exist on both sides of the argument. The evidence for computation in regard to things like consciousness, emotions, and the other listed items seems thin. Future events may yet prove or disprove whether they can be computed. No doubt some, even many, of the things a brain does are analogous to computing. I don’t think we know enough to conclude all of them are. There is perhaps a certain level of hubris involved in claiming so based on what we currently know.

AIs have flown past humans in recall and text manipulation. I suspect they will soon make the Turing test irrelevant. A large segment of the population will believe they are conscious. That will produce some interesting social disruption.

You have resolved these matters to your own satisfaction. That puts you in a good position to observe the whole AI thing as it unfolds. Most people have no idea how a computer works.

But I believe the brain is different. We lack the broad computational abilities of computers. Computational brains would be large and inefficient. Pattern matching works better for us. The Creationists are just defending a rabbit hole.

No, it is what a universal Turing machine can compute, not what it can simulate. Simulations are behavioral interpretations made by observers. Turing’s statement is expanding (all computing) and limiting (only computing).

I guess because ChatGPT would never ask me a question unprompted?

I feel like the whole making brains into computers and computers into brains is a bit of a philosophical red herring. In many cases we don’t understand how the algorithms that drive many of our current computer systems make decisions, but we enable those computers to manage our energy, financial, supply chain, defense, air traffic control and other systems. Long before we are able to create superintelligent AI, we are going to have powerful, nearly autonomous systems controlling even larger segments of critical infrastructure. They don’t need to be “malicious” or have any thoughts or feelings at all. Just poorly programmed or vulnerable to hacking by bad actors, causing them to behave in an unexpected and harmful manner.

I don’t envision Skynet or Cylons or The Matrix or anything like that. What I envision is something a bit more mundane. Something along the lines of a poorly aligned AI accidentally (or intentionally, through a bad actor) crashing the stock market or shutting down an energy grid or supply chain network. There might not be a “manual override”, and if it takes people long enough to fix, it might lead to other cascading failures. Maybe not extinction level, but certainly serious enough to cause major societal disruptions.

First sentence:

From the wiki:

“When Alan Turing came up with the idea of a universal machine he had in mind the simplest computing model powerful enough to calculate all possible functions that can be calculated. Claude Shannon first explicitly posed the question of finding the smallest possible universal Turing machine in 1956. He showed that two symbols were sufficient so long as enough states were used (or vice versa), and that it was always possible to exchange states for symbols. He also showed that no universal Turing machine of one state could exist.”

Turing’s original concept simulates machines. It has nothing to do with software simulations. Shannon’s realization demonstrated that a 2-input binary adder is sufficient to calculate all possible functions that can be calculated. Hence all computation only requires an adding machine.

So if the brain is computational, all of its functions can be performed by a computer. If the brain is not computational, then a computer cannot perform brain functions.

Of course a computer can act human, but that’s like the figures in a wax museum. They are simulations too.

That is clearly a statement about what can minimally be a universal Turing machine, and explicitly not a limitation on what a universal Turing machine can do.

It simulates other Turing machines, and the use of “machine” here is not in the everyday sense of literally being some specific mechanical device, it refers to abstracted computation. The use of “simulate” here is not in the same context as “software simulation”, but it certainly encompasses it - if you are running a software simulation you are doing computation, so a universal Turing machine can simulate your software simulation.

A Turing machine is a mathematical model of computation…a mathematical description of a very simple device capable of arbitrary computations

Turing machine - Wikipedia

…people get confused about Turing machines because of the word “machine” — confusingly, Turing machines are not machines . Indeed, the definition assumes some very unrealistic components (like an infinite tape). It is important to think of Turing machines as just a mathematical tool.

…Many people mistakenly think that Turing machines cannot implement analog functions because they are not defined by a discrete set of step-by-step values. But remember, Turing machines are not machines . They’re abstract mathematical constructs…

… any procedure for solving a computable function can be implemented by a Turing machine, by definition , no matter the specific mechanism involved

Yes, the brain is a computer…. No, it’s not a metaphor | by Blake Richards | The Spike | Medium
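To make the abstraction concrete: a Turing machine really is just a state, a head, and a table of rules, and a few lines of Python can run one. This is only an illustrative sketch - the simulator and the three-state binary-increment machine below are my own inventions, not anything from the article:

```python
def run_tm(transitions, tape_str, start, halt, blank="_"):
    """Run a single-tape Turing machine until it reaches the halt state.

    transitions maps (state, symbol) -> (new_state, written_symbol, move),
    where move is -1 (left), 0 (stay), or +1 (right).
    """
    tape = {i: s for i, s in enumerate(tape_str)}
    state, head = start, 0
    while state != halt:
        symbol = tape.get(head, blank)
        state, tape[head], move = transitions[(state, symbol)]
        head += move
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

# Illustrative 3-state machine: increment a binary number (MSB first).
INCREMENT = {
    ("right", "0"): ("right", "0", +1),  # scan right to the end of the number
    ("right", "1"): ("right", "1", +1),
    ("right", "_"): ("carry", "_", -1),  # step back onto the last digit
    ("carry", "1"): ("carry", "0", -1),  # 1 + carry = 0, carry continues left
    ("carry", "0"): ("done",  "1",  0),  # 0 + carry = 1, finished
    ("carry", "_"): ("done",  "1",  0),  # carried past the MSB: write a new 1
}

print(run_tm(INCREMENT, "1011", start="right", halt="done"))  # prints 1100
```

Note that the machine's rule table is fed in as data: the simulator doesn't become an incrementer, it computes what the incrementer computes. That is the sense in which one machine "simulates" another in Turing's usage.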

Correct. And your earlier claim that there is “ample evidence” that the brain is doing something other than computation (none of which you have cited) seems to be based on a misunderstanding of the relevant definition of computation - the breadth of what a universal Turing machine can simulate.

I did not say there is ample evidence the brain is not computational. I said I had sufficient evidence for myself. I did not make an argument. I believe it’s an argument for the academics.

We can discuss it but that is not the point now. We are still exploring Turing Machines.

Shannon proved that all Turing Machines can be replaced with a binary adder. A binary adder can perform all computable functions.

An analog computer can perform computations. In that capacity it is a Turing Machine. In fact I worked on an analog F-100 flight simulator. It did everything by analog computation. It simulated all the functions of flight. Nobody ever accused it of flying.

You keep repeating this. Why? It is a statement about what can minimally be a universal Turing machine. It does not speak to the breadth of what a universal Turing machine can do.

It’s a trivial fallacy. X is a universal Turing machine does not imply that all Turing machines are X.

So what? Are you suggesting that there are no computers capable of flying?

Facts are redundant.

Shannon proved that the universal Turing Machine is a binary adder. That kind of explains the success of binary computers. They can perform all possible computations. That’s it. Universal machine performs all possible computations. There’s nothing else.

I don’t understand the argument you’re trying to make. Are you saying there is something beyond computation? Perhaps so, but that would fall outside of the definition of Turing Machines. Are you saying that software simulations are reality?

It’s my fault but I am not following you.

These conversations all quickly veer towards the question of sentience. (Even though artificial intelligences could do damage without at any point becoming sentient, that tends to be the primordial human fear: that they’ll start taking deliberate, intentional action against us.)

It’s probably worth reiterating that we have (and have had) people on this board who do not believe we ourselves are conscious. And many more who don’t specifically challenge the notion that we’re conscious but deny that we ever take deliberate intentional action that isn’t more accurately attributed to deterministic causation. So here, too, if one is going to assert that the AIs “don’t do what we do”, I think one has to be explicit about what it is that you think we do, or are, that makes what is true of the AI not true for us as well.

Not that I agree with them, mind you (I’m no determinist when I participate in those arguments), but clearly you can’t start with the assumption that we’re all on the same page as far as what we think the human mind and human consciousness is all about.

Yes, flying is not possible through the simple expedient of computation. No matter how hard you simulate, you’ll never get it off of the ground.

never mind

Your deletion was interesting.

Claude Shannon was a trivial fallacy?

Wow!!!