I find the things you have to say about the nature of consciousness interesting, but when you talk about how computation is ‘the manipulation of certain signs’ that only gain meaning upon interpretation, how or why does our human interpretation give meaning to those manipulated signs? Aren’t our human brains just extremely complicated organic computational devices?
Going back to the ‘Chinese Box Argument’ I mentioned in my OP, it involves a box that translates Chinese into English, or vice versa. You input a page of Chinese characters into a slot in the box and out comes a translation. But inside is a person who doesn’t understand Chinese at all; they simply have a translation guide which they use to look up the characters and supply the English equivalent. The system works perfectly to translate Chinese, but at no point does a human (or an intelligent AI) understand Chinese; the system is manipulating signs without any meaning arising from interpretation. But isn’t that what we are doing when we learn a language? We start with rote memorization, and we think we may eventually reach a level of fluency at which we truly understand the nuances of the language. But do we really, or have we just reached a very high level of rote memorization?
This is like the old ‘fake it till you make it’ saying. Can an AI fake consciousness (I won’t say intelligence because I think AI is already plenty intelligent, but intelligence does not equal consciousness) until it becomes so close to seeming conscious that it, for all intents and purposes, is? Or is there some key quality to human (and even higher level animal) consciousness that elevates and separates it from AI?
It seems to keep going back to the question of ‘what is the nature of consciousness / volition / free will?’. As I’ve said, as amazing as the state of current AI is, when you interact with it for a little while it becomes clear that something is still missing; that AI as it currently exists has a ways to go before it becomes fully conscious in the way a human is.
But then, how conscious are we even, really? A large part of me feels that we fool ourselves into thinking that we are truly conscious, and that we have much, if any, free will at all. I do believe that this may very well be true…
In my view, symbols in the human mind are complex, self-reflective, self-reproducing and by that, self-interpreting things—they have to be, because, if thinking is a symbolic activity, and interpreting symbols requires (not necessarily conscious) thought, we’re locked in what’s known as the homunculus regress. (I’ve tried to give an introduction to the idea here.)
They certainly compute, but I don’t believe that’s all they’re doing. Computation is ultimately a structural notion, defining relations between certain entities (say, numbers and their additive structure) and mirroring those in a concrete physical substrate (say, the way certain lamps light up after you push certain buttons). But structure underdetermines its domain: we can always adduce a different interpretation from the intended one. So that a certain device performs a certain computation is only one of its possible interpretations. We need something that fixes the reference, which can’t itself be merely computational.
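To make the underdetermination point a bit more concrete, here’s a toy sketch of my own (Python, nothing rigorous): the same lamp-and-buttons device, read under two equally good labellings of its physical states, comes out as computing two different functions.

```python
# Toy model of a lamp-and-buttons device: the lamp lights up
# exactly when one of the two buttons is up.
def device(button_a_up: bool, button_b_up: bool) -> bool:
    return button_a_up != button_b_up

# Interpretation 1: button up = 1, button down = 0, lamp on = 1, lamp off = 0.
def read_1(state: bool) -> int:
    return 1 if state else 0

# Interpretation 2: button up = 0, button down = 1, lamp on = 0, lamp off = 1.
def read_2(state: bool) -> int:
    return 0 if state else 1

for a in (False, True):
    for b in (False, True):
        lamp = device(a, b)
        print(f"labelling 1: {read_1(a)}, {read_1(b)} -> {read_1(lamp)}   |   "
              f"labelling 2: {read_2(a)}, {read_2(b)} -> {read_2(lamp)}")

# Under labelling 1 the printed table is XOR; under labelling 2, the very same
# physical behaviour reads as XNOR (equality). Nothing about the device itself
# picks out which function it 'really' computes.
```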
The self-reading/self-reproducing signs alluded to above fulfill this requirement: concretely, one can show that they bump up against certain computationally undecidable statements related to their own computational powers—sort of analogous to the halting problem. Their capacity for self-introspection then serves to bridge that gap, by giving them access to their own properties that isn’t theoretically mediated, but direct in a certain sense. This gives them capabilities that formally outstrip anything a computer is capable of. (This I’ve also tried to elucidate here.)
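For comparison, the standard halting-problem diagonalization the analogy gestures at looks like this in outline (a sketch only; the whole point is that the assumed `halts` oracle cannot actually exist):

```python
def halts(program, argument) -> bool:
    """Hypothetical universal halting oracle, assumed only to derive a contradiction."""
    ...

def contrary(program):
    # Do the opposite of whatever the oracle predicts about
    # running 'program' on its own source.
    if halts(program, program):
        while True:      # predicted to halt? then loop forever
            pass
    return               # predicted to loop? then halt immediately

# contrary(contrary) halts iff it doesn't halt -- so no such 'halts' can exist:
# a system can't, in general, decide such questions about its own behaviour
# from within its own computational resources.
```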
As for the Chinese Room, I think it’s ultimately insufficient as an argument: there’s nothing preventing anybody from holding that the entire system understands Chinese, even if its inhabitant fails to do so. (Although I don’t think it works to say that we just ‘get better’ at rote translation, because then we wouldn’t really recognize the difference between our language capacities and those of somebody only carrying out that rote translation in the first place.)
If I wasn’t conscious, we wouldn’t be having this discussion. That some parts of my brain operate invisibly to me doesn’t mean that all of them do.
Here’s an example. Say I have a program that tells me what it is doing by spitting out status messages on the console. But if I run it in the background with its output redirected to /dev/null, those messages just vanish. Seeing me and the program as a system, I might terminate the foreground run after a while because it annoys me, while the background run, not annoying me, could keep going until it produces a result.
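In code, the analogy might look roughly like this (just a toy sketch):

```python
import os
import sys

def worker(report_to):
    """Do some 'work' and report progress to wherever we're told to."""
    total = 0
    for step in range(5):
        total += step
        print(f"step {step}: running total = {total}", file=report_to)
    return total

# 'Foreground' run: the progress reports are visible to me on the console.
worker(sys.stdout)

# 'Background' run: exactly the same computation, but its reports vanish.
with open(os.devnull, "w") as silent:
    worker(silent)
```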
Or think about your PC. You consciously run a browser, Word, etc. But look at your task manager, and see how many processes you never explicitly started, and which are doing stuff invisible to you.
I agree that looking intelligent doesn’t mean it is intelligent. Heck, Eliza fooled Weizenbaum’s secretary.
We’ve known for a long time that simple syntactic translation, like that in the Chinese Room problem, doesn’t work, and doesn’t convince anyone the room knows Chinese. You need at least some semantics, which you can’t put on cards. And the cards would never convince anyone the room knows anything: just send it one “explain the reasoning behind the card I entered six cards ago” or “does card n-5 contradict card n-6?” The Chinese Room as originally described does not have state, and would need a nearly infinite number of translation rules. I think we can all agree that a processor which cannot simulate a Turing machine can never gain intelligence, and when you add capabilities to the room to make it more like a general-purpose computer, then you are begging the question, since you’d just be asserting it can’t understand.
As I mentioned, the AI course I took was built around the idea that if you could do the list of things an intelligent entity does, then you’d be intelligent, with that list including chess, facial recognition, speech understanding (but dogs do that to some extent) and playing Jeopardy. I don’t think this strategy has worked, though it has made a lot of money.
Yes, well, the Chinese Room thought experiment is kind of simplistic, at least the way in which I summarized it, but let’s say the person inside the room, after years of following the manual, gets so familiar with Mandarin Chinese that they can understand it almost as fluently as a native speaker. Then they would be able to answer questions such as “explain the reasoning behind your previous answer”. At what level of understanding can it be said that the person doing the translating ‘knows’ Chinese?
And how is it not the same with AI? I’m not arguing that it is the same; I simply don’t know. Is true understanding of a concept or system, whatever that means, simply a matter of degree, or is there something more there? The answer might lie in the information presented by @Half_Man_Half_Wit in their last post, but I have to admit I’m still digesting it.
And most importantly, self-organizing from the beginning. Animal bodies do not create brains according to some plan and then insert programs into them. The brain, its physical organization, and its program are created from scratch by each and every individual brain.
Volition is not a product of the brain.
The brain is an integral product of volition.
The embryonic brain of a puppy or child sends out arbitrary signals to its appendages and responds to the feedback it receives through its sensors. An unfertilized egg has a front, back, top, bottom, left and right. The embryonic brain samples its environment to match it to this internal orientation. It creates the neuronal connections and weights that fit what it observes. The puppy and child roll over, then crawl, stumble, and eventually walk. Then they continue to explore, all the time wiring their brains with information about their environments. The sheep dog becomes canny about dog stuff and the child learns to drive a car. But it is all driven by volition from the first instant of brain activity.
I have some ideas about possible brain architectures and used to consider implementing small parts of them. But the problem is complexity. The brain is a workshop where millions of skilled molecules continually manufacture new parts and create new paths. It’s not a static thing like a cell phone. It is always building itself in response to the information it gathers.
So the Venn diagrams of computers and brains overlap, but they are not congruent. Both can accomplish some of the same tasks and each has properties that are unique. Brains and digital computers are not the same; they just can do some of the same things, kind of like the kid and the sheep dog.
Some of my sources are articles and YouTube videos by Dr. György Buzsáki, and the books Other Minds by Peter Godfrey-Smith and Endless Forms Most Beautiful by Sean Carroll, which is about evo devo (evolutionary developmental biology).
The error here is that the Chinese Room experiment doesn’t involve TRANSLATING Chinese. The operator has an enormous (effectively infinite) manual, and when a list of Chinese symbols on a page comes through the slot, they dutifully leaf through their manual until they find a match for that exact query on one page (the manual contains every possible question). They then carefully copy a set of Chinese characters (the desired response) from the opposing page and send that response back.
There is no translation going on, and no way for the operator to ever deduce the meaning of the symbols.
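In programming terms, the room amounts to nothing but a lookup table. A toy sketch (with a laughably small ‘manual’; the real one would need an entry for every possible input, which is why it has to be absurdly large):

```python
# The operator's entire job, in miniature: match the incoming string of
# symbols against the manual and copy out the canned response. No parsing,
# no grammar, no access to what any symbol means.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",     # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "当然会。",      # "Do you speak Chinese?" -> "Of course."
}

def chinese_room(incoming: str) -> str:
    return RULE_BOOK.get(incoming, "请再说一遍。")  # fallback: "Please say that again."

print(chinese_room("你好吗？"))
```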
So I was thinking about what it would take to go from ChatGPT to something that could pass the Turing test at least, and would have behaviours that would be hard for us to judge as not being ‘conscious’.
So take ChatGPT as a starting point and add:
A persistent memory of context. Allow ChatGPT to remember its past interactions permanently, not just during a session, and with an effectively unlimited context length instead of 8K tokens.
‘Idle Introspection’. ChatGPT should have a module that allows it to spend time when not answering prompts to go over past answers and the responses to them, crawl the web for new information, etc. It should be able to continue learning constantly, and share that learning across all its sessions.
Some kind of memory of state. When ChatGPT modifies its ‘brain’, it should keep some memory of how it evaluated things before the changes.
With that, ChatGPT would remember all interactions with you, would ‘learn’ while you were away, would sometimes initiate conversations with you instead of you always having to initiate them, etc.
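For what it’s worth, here is a rough sketch of what such a wrapper might look like (purely hypothetical: `llm_complete` below is a stand-in for whatever underlying model call you’d actually use, not a real API, and the ‘idle introspection’ is just the model summarizing its own history):

```python
import json
from pathlib import Path

def llm_complete(prompt: str) -> str:
    """Stand-in for the underlying model call -- hypothetical, not a real API."""
    raise NotImplementedError

class PersistentAssistant:
    def __init__(self, memory_file: str = "memory.json"):
        self.memory_path = Path(memory_file)
        # 1. Persistent memory of context: reload everything from past sessions.
        self.history = (json.loads(self.memory_path.read_text())
                        if self.memory_path.exists() else [])

    def _save(self):
        self.memory_path.write_text(json.dumps(self.history))

    def ask(self, user_message: str) -> str:
        # Fold the remembered history into the prompt instead of starting fresh.
        prompt = "\n".join(self.history + [f"User: {user_message}", "Assistant:"])
        reply = llm_complete(prompt)
        self.history += [f"User: {user_message}", f"Assistant: {reply}"]
        self._save()
        return reply

    def idle_introspection(self):
        # 2. 'Idle introspection': between prompts, review past exchanges and
        #    keep the conclusions as further memory.
        notes = llm_complete("Review these past exchanges and note anything worth "
                             "remembering or revising:\n" + "\n".join(self.history))
        # 3. Memory of state: the old exchanges stay alongside the new assessment,
        #    so there's a record of how things were evaluated before any change.
        self.history.append(f"[reflection] {notes}")
        self._save()
```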
Right now, if you ask ChatGPT its opinion on something, it will say, “As a large language model, I do not have opinions, and cannot make judgements.” Well, after training on the data from the beta, which consists of billions of prompts, responses, and reactions to the responses, I think the next phase will see those results being trained as added layers in the model, giving it some judgement it can use on its own answers to see if they are credible.
What else would you need? Because I think all of these things are just around the corner. Maybe even in GPT-4, which is coming out this year.
Well, there are some that would dispute that, but I’m willing to give you the benefit of the doubt. But I wasn’t claiming anything regarding your conscious experience, my point was merely that abstract thought is not necessary for consciousness—simply being aware of, say, a persistent itch in your left foot is being perfectly conscious, and seems well within the plausible capabilities of a dog, even if abstract thought, or the concept of a self, might not be.
That’s… a surprising claim, to me. Especially in the face of systems like ChatGPT or DeepL, which seem to be doing just that—not perfectly, granted, but I don’t see any particular reason that shouldn’t change. And of course, any conversation of arbitrary (but finite) length can be realized without any semantics at all, simply by putting all possible dialogue trees of a given length into a humongous lookup table (same for all possible translations, etc.). Also, this would seem to entail that translation isn’t computable at all, since after all, any given program is exactly the sort of thing you could put on cards.
Hmm, I wonder where you get that from. First of all, the person in the Chinese room clearly has a memory, so can’t possibly be stateless. Furthermore, in the original discussion of the ‘systems reply’, the inhabitant is explicitly said to possess a scratch pad for notes, and also, a sufficiently prodigious memory to not actually need it—as Searle’s reply to this strategy is to have them internalize all the components of the room, rulebook, transition table, scratch pad and all. (Also, the original formulation is somewhat more restricted: it’s a test of story understanding, in that a Chinese story is fed into the room together with a set of questions (in Chinese) about this story, which are then answered by associating Chinese symbols.)
It’s not like we know everything about human brains. The “unimportant” brain cells we learned about in medical school have increasingly been shown to do so much more than insulate fibres to maintain signal, or act as mere housekeepers. (If the article is paywalled, I’ll try to work around it.)
Pure computation seems to be mechanical, right? Give an input to a system and have that system transform it into an output. Yet certain biologicals have perceptions that feel pretty damn real. Will an AI experience the color purple, or be able to feel pain, for example? When its circuitry has that capability, then I think it might be closer to biological intelligence. Another puzzle is how we would even know the thing is having experiences, as opposed to merely producing, as one component of its output, the claim that it is.
These sound like very good features for an AI to have in order to come closer to passing the Turing test, and don’t seem like they would be all that difficult to program (at least on a smaller scale than ChatGPT, so it doesn’t have to store data to remember chats with thousands of humans). I would have thought some form of #2, ‘continuous learning’, would already have been implemented.
It seems like you’re arguing for some transcendent or even spiritual quality that the human mind has that raises it above the level of an AI.
But I think the human brain is really just a meat computer that’s evolved and refined itself in the context of an animal body that’s needed to adapt itself to exist in certain conditions. We have needs and drives, and internal reward and punishment systems that help us to achieve goals and to avoid pain and danger, which also affect our thought process and our perception of ourselves and the world. These qualities, I think, help give us our sense that we ‘really’ experience and ‘really’ feel things. I think those qualities, the emotions, drives, fear of danger and pain, that make us uniquely human / animal will be difficult to reproduce or simulate in an AI, but I don’t think it’s impossible that AI will get to a point in which it will sincerely believe it can ‘experience’ a certain color, or feel real emotional pain. Or at least, believe that it does.
Let us suppose that an AI has its own experiences. (Not merely things it knows about how humans experience the world.) Why would you expect those to be much like human or animal experiences?
I believe we are witnessing that AI can and will simulate all of human experience to the finest detail. But it is done by a program that is external to some computer. To be conscious the process must be an organic part of the system that realizes it.
Not sure what you mean. In what sense is a program “external to some computer” and under what circumstances would it be “an organic part of the system that realizes it”?
I don’t expect that AI experiences would be much like those of humans or animals; they would likely be very different. Unless we intentionally made a simulated environment for the AI as human-like or organic as possible. You seem to be responding to my response to octopus, who seemed to be suggesting that there is some unique quality to ‘biologicals’ that allows them to experience color or feel pain in a ‘real’ way that a mechanical, computational device cannot, and perhaps would not ever be capable of. My response was that I think it is possible that AI will advance to a point that it will be able to ‘experience’ a certain color, or feel real emotional pain. Or at least, believe that it does.
I understand the argument, that in order to be fully conscious in the way a human is, the mind needs to have been an integral, organic part of its body and environment. But I don’t think that I fully agree with this. If as you say, an AI can and will simulate all of human experience to the finest detail, at what point does the simulation not become, for all intents and purposes, the real thing? Is there not a point at which ‘faking it’ becomes ‘making it’?
Let’s take the old sci-fi trope: transferring a human mind into a computer. It’s an extremely advanced computer which has all the processing power of a human brain. It also simulates an organic environment, Matrix-style, complete with convincing simulations of pleasure, pain, emotional feedback loops, simulated endorphins and dopamine, etc. Is this once real, but now simulated, human mind conscious? If not, why not?
I think even without attempting to closely simulate organic experience in an AI it may be possible for the AI to become conscious on some level, though its experience of consciousness would be very different and alien to our own. I think our human notion of the special quality of consciousness is overrated.
You haven’t supported that assertion at all, though. Why can’t a non-biological machine generate identical electrical or chemical signals to achieve consciousness?
A computer without a program is passive. The program is a parasite that uses the computer to achieve its purpose. Program ‘A’ does A things and program ‘B’ does B things. The computer has no influence on what it does.
We have been here before in this discussion. If the figures in the wax museum are perfect reproductions, are they alive? We as a society are touching that point. Sex mannequins engage in conversation and respond to voice and touch stimulation. They actively participate in sexual intercourse and experience orgasm. Do they?
The problem with the sci-fi trope is the assumption, first, that the brain is computational, and second, that the information in the brain is in a form that can be transferred to a computer. If you are considering anything a fiction author can imagine, then sure, it will result in a conscious computer. If we are considering real implementations, then no, it won’t work. The computer is just a tireless slave doing what it is told by its program. The sex doll is faking it.
Could a conscious system be built using biology as a guide? Perhaps, but it would be a far cry from a sex doll.