No, we don’t know what the computer is doing, and the folks who made ChatGPT and other LLMs explicitly admit they don’t know how it is doing what it does. This has been explained to you multiple times in multiple threads.
Saying that ‘we know what it’s doing’ because we understand microprocessors is like saying that we understand consciousness because we understand electrochemistry. The hardware of the computer or the ‘wetware’ of the brain are just the substrates on which neural networks are built. It’s the neural nets themselves that are evolved/trained and not programmed, and which are hellishly complex and mostly inscrutable.
There are efforts going on to use small neural nets and lots of instrumentation to figure out how they work, and we’re making a little progress in discovering circuits, ‘induction heads’ and other evolved structures. We’ve also found structures that look pretty much like structures evolved in the human brain.
Today, large language models are black boxes. No one knows how they do what they do.
I don’t know what you mean. Nobody has claimed that “everything” is computation.
And others have said here that they don’t agree, but nobody has any suggestion for what the brain is doing other than computation.
Not only is there no evidence that the brain is doing something other than computation, nobody has given any coherent hypothesis for what they mean by “something else”.
Well, it’s not a sideshow when people use it as the basis for a belief that AGI (and therefore ASI) won’t happen.
I’m not. I’m saying that part of what’s going on in the human as a whole isn’t computation.
I’m not saying that emotions don’t also happen in the brain, or that the brain is doing something other than computation. I’m saying that emotions don’t happen entirely in the brain.
There are many correlates of emotional states outside the brain. But it is curious to suggest that the experience of emotion happens anywhere other than the brain.
If I burn my finger, surely you don’t think that my finger is experiencing the burning.
I think your body’s endocrine response to that is happening all the way through your body; and that, without it, your brain’s calculation that you should move your finger away from the stove would be an entirely different thing.
Consider the nerve connections between your finger and your brain. Do you think it would be impossible (in principle) to hook the brain up to an artificial source of nerve impulses that would make your brain think that you have a finger and that it is hot?
Being surprised by results is a positive. It means “Wow” not “Ooooo spooky”. There is no mystery about how neural nets work. Of course there is research on better understanding their fine points. Nothing spooky about that.
If you’re simply going to redefine “emotion” to include all the physical correlates of emotion, then sure.
The question is, do you really think the experience of emotion is happening somewhere other than your brain?
But in any event, if you’re going to argue that human intelligence is partly located outside of the brain, that doesn’t really change anything. If you want to say that the endocrine system is part of human intelligence, what is that doing other than information processing - i.e. computation?
Remember that the basis for this discussion is the claim that there is something that humans do that a machine cannot (in principle) do. What is your endocrine system doing that a machine could not do?
Those processes (and others) are controlled by the autonomic nervous system (further divided into the sympathetic and parasympathetic systems…turtles all the way down) which is a component of the peripheral nervous system, PNS (nerves and the ganglia), which is basically the conduit between the central nervous system, CNS (brain and spinal cord) and the body parts/organs. No consciousness is involved with the PNS. It functions…automatically/autonomously. The CNS is the domain of consciousness.
The Merck Manual explains the PNS clearly and in easily understood terms:
Not to put words into your mouth, but I suspect that your underlying reasoning is - we don’t understand the basis of consciousness and subjective experience, therefore a machine intelligence cannot do what a human can do?
The fact that we don’t understand consciousness certainly introduces uncertainty, but I don’t think it warrants that conclusion. There is just no hypothesis (and certainly no evidence) that biological intelligences do anything other than computation. The endocrine system certainly isn’t remotely an explanation of subjective experience. We have no explanation whatsoever. Consciousness just seems to be an emergent property in the evolution of biological intelligence, and there is no evidence to imply that it could not emerge in machine intelligence.
Furthermore, the fact that an artificial intelligence might not be conscious does not place any constraints on its competence. Perhaps the most frightening prospect is that we might develop superintelligence that supersedes us because of its vastly greater competence - but it is not conscious, and there is nobody left to experience the universe.
Really isn’t any evidence either way, as far as I can tell. But I was talking about the emergence of emotion, not only of consciousness.
Isn’t that where the idea of the AI trying to turn the whole universe into paper clips comes from? Hyper competence; and no compassion for anybody destroyed in the process.
I’m not at all sure you couldn’t get that along with consciousness. But it isn’t only our consciousness that makes us human. (Not that all humans care who or what gets destroyed in the process of trying to get something done, either.)
Yeah. Likewise you could say guitar picks are tangled up with musicians, but I wouldn’t attribute credit to them when Bob Dylan wrote Tangled Up in Blue.
A big difference is that you can get a readout of what every neuron is doing and what its connections and weights are, and you can reset and repeat stimuli. It’s a lot harder to get that information out of a brain.
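As a concrete illustration of that point (a toy network with made-up sizes, not any particular model), here’s roughly what that kind of instrumentation looks like in PyTorch - every weight is inspectable, every activation can be captured, and the exact same stimulus can be replayed:

```python
import torch
import torch.nn as nn

# A toy network; with a real model you'd load its trained weights.
net = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))

# Every weight and connection is directly readable - no electrodes needed.
for name, param in net.named_parameters():
    print(name, param.shape)

# Forward hooks capture what every "neuron" is doing on a given input.
activations = {}
def make_hook(name):
    def hook(module, inputs, output):
        activations[name] = output.detach().clone()
    return hook

for name, module in net.named_modules():
    if name:  # skip the top-level container itself
        module.register_forward_hook(make_hook(name))

torch.manual_seed(0)      # reset...
x = torch.randn(1, 4)
net(x)                    # ...and repeat the identical stimulus at will
print({k: v.shape for k, v in activations.items()})
```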
At this point, NNs are “simple” enough that we could theoretically analyze them and understand how they work. But it would be an exceptionally difficult task, and it gets exponentially harder with every neuron added.
I don’t think that current implementations of AI are nearly complex enough for self-awareness to emerge, but I don’t think that the limiting factor is that it’s made of silicon vs carbon. How big and complex it needs to be is hard to say, but I suspect the threshold is lower than some may think. Our brains aren’t necessarily all that efficient, and we don’t need all of them to be self-aware, so it’s entirely possible that a computer could become self-aware with a fraction of the complexity of a human brain.
Emotion arose as evolution’s way of implementing sophisticated behavior in intelligent animals. If (say) a goal is to obtain food, then a whole lot of environment-dependent decisions and skills and sub-goals are required. Searching, catching, etc. - it would be incredibly difficult to hard code all of this. So “hunger” is evolution’s way of specifying the ultimate goal of finding food, while leaving all of the details of exactly how we do it to our general intelligence that’s already equipped with many skills and the ability to understand and reason and react to the specifics of the environment.
Without consciousness, an emotion is really just the same kind of thing as a reward function. The subjective experience of an emotion is what a reward function feels like to a conscious intelligence.
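Purely as an illustration of that analogy (everything here - the names, the numbers, the environment - is hypothetical), a reward function playing the role of “hunger” for a generic agent might be sketched like this: the reward only specifies the goal, while the agent’s policy is left to work out the details.

```python
import random

def hunger_reward(energy):
    """Evolution's 'goal spec': zero when sated, increasingly negative when starving."""
    return -max(0.0, 1.0 - energy)

def step(energy, action):
    """Toy environment: every action costs energy; foraging sometimes finds food."""
    energy -= 0.05                        # metabolic cost of doing anything
    if action == "forage" and random.random() < 0.5:
        energy += 0.3                     # found food
    return min(energy, 1.0)

energy = 0.5
for t in range(10):
    # The agent's general intelligence decides *how*; the reward only says *what*.
    action = "forage" if energy < 0.8 else "rest"
    energy = step(energy, action)
    print(t, action, round(energy, 2), round(hunger_reward(energy), 2))
```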
True. And that’s exactly what they are doing with ‘trivial’ neural nets, like the one I mentioned earlier - 50,000 parameters in a single layer. That little NN managed to teach itself trig identities and fast Fourier transforms to solve addition problems. Figuring that out was extremely difficult. Now imagine 175 billion parameters arranged in 96 layers…
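If I have the gist of that result right (the published version involved modular addition), the trick the net rediscovered can be sketched in a few lines of NumPy: represent each number as a rotation, so that the angle-addition trig identities do the adding. The specifics below are illustrative, not the network’s actual weights.

```python
import numpy as np

p = 113  # the modulus; a common choice in the published experiments

def encode(a):
    """Represent a (mod p) as a point on the unit circle."""
    theta = 2 * np.pi * a / p
    return np.cos(theta), np.sin(theta)

def add_mod_p(a, b):
    ca, sa = encode(a)
    cb, sb = encode(b)
    # Angle-addition identities: composing two rotations adds their angles,
    # which is exactly addition mod p.
    c, s = ca * cb - sa * sb, sa * cb + ca * sb
    angle = np.arctan2(s, c) % (2 * np.pi)
    return int(round(angle * p / (2 * np.pi))) % p

assert add_mod_p(97, 45) == (97 + 45) % p
print(add_mod_p(97, 45))  # -> 29
```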
The process for figuring out what’s going on in a neural net is called “Mechanistic Interpretability.” Here are some good links to information about the efforts to understand neural nets:
I provided that link originally, I believe. And Wolfram describes a lot about the basics of training, some of what we have discovered about structure, etc. But he acknowledges repeatedly that we don’t know much of what’s going on inside ChatGPT when it’s answering questions. For example:
Yes, there is a loose sense in which everything is a computer - everything is implementing algorithms.
But the article also explains the narrow meaning for computer: a universal Turing machine (aka Turing complete). This is a computer that can implement any algorithm, and can therefore simulate any other computer.
Then we have:
(1) The brain does not just fit the loose “everything” definition of computer, it almost certainly fits the narrow definition - it is probably Turing complete.
(2) There is no suggestion that the brain is doing “something else”, it is a computer that is either Turing complete or it is not.
(3) Even current programming languages are Turing complete - your cellphone is a universal Turing machine (ignoring memory limitations). So a machine intelligence can (theoretically) simulate the human brain.
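To make (3) concrete, here’s a minimal sketch in Python (itself Turing complete): a universal simulator a few lines long that runs any Turing machine you hand it as a transition table. The example machine is a made-up one that appends a ‘1’ to a unary number.

```python
from collections import defaultdict

def run_tm(rules, tape, state="start", pos=0, max_steps=1000):
    """Simulate any Turing machine given as a transition table:
    rules[(state, symbol)] = (new_state, write_symbol, move)."""
    tape = defaultdict(lambda: "_", enumerate(tape))  # blank cells read as "_"
    for _ in range(max_steps):
        if state == "halt":
            break
        state, tape[pos], move = rules[(state, tape[pos])]
        pos += 1 if move == "R" else -1
    return "".join(tape[i] for i in range(min(tape), max(tape) + 1))

# Hypothetical example machine: append one '1' to a unary number.
rules = {
    ("start", "1"): ("start", "1", "R"),   # scan right over the 1s
    ("start", "_"): ("halt",  "1", "R"),   # write a 1 at the end, halt
}
print(run_tm(rules, "111"))  # -> "1111"
```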
The brain is a computer, and computation is substrate-independent.