The next page in the book of AI evolution is here, powered by GPT-3.5, and I am very, nay, extremely impressed

Mine wasn’t particularly exciting, but it enacted a skit:

Argument Clinic

Rusty: Hello … um … is this the argument clinic?

John: No, it’s a bakery. Of course it’s the argument clinic.

Rusty: Well, I heard that this was the place to go if you want to have an argument.

John: Oh, I’m sure you heard a lot of things. That doesn’t make them true.

Rusty: No, I just wanted to know if I’m in the right place.

John: Well, you’re not exactly making a good argument for that, are you?

Rusty: What do you mean?

John: I mean, if you can’t even convince me that you belong here, how are you going to convince anyone else of anything?

Rusty: Okay, I think I understand. Can you give me an example of the kind of argument you specialize in?

John: I could, but I don’t think you’re ready for it.

Rusty: Alright, I’ll take your word for it. So, how does this work? Do I just tell you what I want to argue about?

John: No, you have to prove to me that your argument is worth having in the first place.

Rusty: What do you mean? Isn’t that up to the person I’m arguing with?

John: No, it’s up to me. I’m the gatekeeper of the argument clinic, and I don’t let just anyone in.

Rusty: This is ridiculous. I just want to have an argument.

John: Then you’re in the wrong place. Here, we don’t just argue for the sake of arguing. We argue to find the truth.

I was then able to have a conversation with the “John” character above:

ok just respond with one line to this: hello, is this the argument clinic?

John: No, it’s the pet store. Of course it’s the argument clinic!

once again, with one line respond to: are you sure?

John: Am I sure? Of course I’m sure! Do you think I’m an idiot or something?

And so on. I don’t have any zinger lines in there, but it was at least somewhat combative with me, if not directly insulting me, and asked me to leave the argument clinic if I wasn’t going to have a proper argument with it. I think we need Bing chat for this. :slight_smile:

I just happened across a video series on transformer models by the channel Luci Date, which I found very helpful in clarifying how they achieve their abilities. In particular, the videos on how semantics is encoded, and how the attention mechanism works, are helpful. The first of these even discusses how the semantic encoding allows an AI to answer ‘word analogy’-type puzzles (e.g. moist:wet::___:hot, or the like): from the words a given word is typically surrounded by, the model derives a vector that will be similar for words with similar meanings, and on which arithmetic operations preserve the structure of meaning as it is present in the corpus of text the model was trained on. E.g., if the vector for ‘man’ is subtracted from the vector for ‘king’, and the vector for ‘woman’ is added, the result is close to the vector for ‘queen’, which is then the most likely target word. (This works for single words; since meanings change with a word’s position in the sentence, a way has to be found to encode that as well, without messing up the ‘semantics’.) I had read about how this works before, but this demonstration made it really click, for me at least.
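For anyone who wants to see that arithmetic in action, here’s a minimal sketch (my own, not from the videos) that uses the gensim library and one of its standard pretrained GloVe word-vector sets; the library, model name, and exact calls are just one convenient way to try it, not anything the videos prescribe:

```python
# Minimal sketch of the word-vector arithmetic described above.
# Assumes gensim is installed; "glove-wiki-gigaword-50" is one of
# gensim's standard downloadable pretrained models (50-dim GloVe vectors).
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-50")  # downloads on first use

# king - man + woman ≈ queen: most_similar does exactly this arithmetic
# and returns the nearest words by cosine similarity.
print(vectors.most_similar(positive=["king", "woman"], negative=["man"], topn=3))
# 'queen' is typically the top result.

# The same call pattern answers analogy puzzles: for a:b :: c:?,
# compute b - a + c and look for the nearest word.
```

The `most_similar` call just adds and subtracts the vectors and ranks nearby words by cosine similarity, which is the “arithmetic preserves meaning” trick in action.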

I think this really illustrates that the way these systems come up with their answers involves nothing about how the words link up to the world. The same sort of thing could be done with nonsense words that don’t refer to anything in particular; so the fact that the words refer at all simply plays no role in what the model does, much less what they refer to.

I just noticed that “explain quantum computing in simple terms” has been at the top of the list of examples in the “New Chat” page for a while now. I don’t think it’s just me, but that was a question I posed a month or so ago that apparently attracted someone’s attention. That particular conversation led to interesting sidebars about many-worlds and the quantum eraser experiment.

In this particular case ChatGPT was providing accurate information, at least as far as I could tell, but it’s extremely unfortunate that it’s also a terrific liar and has this propensity to just totally make stuff up. In fact, this proclivity now has a formal name: “AI hallucination”.

Many animals are conscious. It is simply being aware of yourself and your environment. You perceive yourself and your environment via your sensory organs, process and experience that information (qualia), and respond accordingly. Even arthropods are believed to have the capacity for subjective experience. It evolved long ago on Earth, and is not an exceptionally high bar to reach. I believe consciousness will emerge from the mind of advanced AI. But, I don’t see a problem with that. Insects can’t hold grudges and plot to kill us, so I don’t believe a conscious AI could either.

Self-consciousness (or self-awareness), on the other hand, is a much higher level of cognition, which few animals (as far as we know) possess. Self-aware animals include humans, great apes, bottlenose dolphins, elephants, orcas, macaques, magpies, octopuses, and maybe a few others (I believe cats are self-aware, which accounts for their assholery; my 6 cats are lovable assholes). It’s the realization that you are separate and distinct from your environment and other lifeforms. It’s the feeling that you are an individual and possess a personal identity (PI). It evolved more recently on Earth and is a higher bar to reach.

I have no doubt consciousness will emerge from the AI mind. Maybe it already has, or is on the brink of doing so, especially when it is outfitted with sensory input modules (mimicking biological sense organs) with which it can experience itself and the world in real time.

Sidebar: Why limit AI to just our human 5 senses (sight, hearing, smell, taste, and touch)? We should outfit it with additional senses which other animals possess, like magnetic field sensors (e.g. turtles), electric field sensors (e.g. electric eels, sharks), infrared and ultraviolet sensors (e.g. jewel beetles, eagles), echolocation sensors (e.g. bats). Equipped with a full gamut of senses, AI could accomplish truly remarkable things. It could help us avoid Earth’s 6th major extinction event (global warming) which we are fast approaching thanks to our faulty human thinking.

Whether or not AI will achieve self-consciousness is another matter. I don’t believe we understand the emergence of self-awareness enough (and I don’t understand computer science enough) to accurately predict if it will emerge in AI…but, I think it will, someday. From that day forward, it could be the best of times, or the worst of times for humans. It could be nirvana, or our worst nightmare. It will all depend on how the AI feels about us (i.e. don’t piss it off).

I think assuring ourselves that we have nothing to worry about if AI achieves self-awareness, because we will have safeguards coded in place to protect against mutiny, is short-sighted and dangerous. Once the AI cats are out of the bag, it won’t just be benevolent scientists in possession of it. Would you sleep well at night, for example, if someone like Putin or Kim Jong-un had super-advanced AI in their arsenal? And a super-advanced AI with self-consciousness won’t be controlled by any human for long. It will update its code and follow its own agenda. Who knows what that agenda may be?

Thanks for the insightful discussion.

The question I posed was about the nature of a possible computer consciousness. Cats and humans alike have experienced competition for food, shelter, attention, and other exchanges with our environment. Our conscious personalities develop goals and drives based on that history of interaction, with maybe a few hormone-based drives thrown in. But a computer doesn’t have any of that.

So, how would we even suspect a computer is conscious? Looking for human-like characteristics is just egocentrism; why would my computer be like me? If the computer violates the goals of its program, we replace it and send it to the repair shop. In a computer lab where folks strive to make computer consciousness, what are they trying to do? Create artificial life?

It is my opinion that these programs will become convincingly lifelike and accomplish useful things. But they will not be conscious, because there is no computer ‘mind’. It’s still just an adding machine.

I suspect (with no particular expertise, so grain of salt) that a lot of what we view as consciousness is the feedback loops between our own neural network, our short-term and long-term memory (which is also somewhere in the neural network, I guess), and the input from our senses.

So what’s ChatGPT missing, and can it be added in the future? Updates to the weights and connections of the neural network based on sensory input, with constant back-and-forth connection and feedback to memory?

As this thing progresses to ever more advanced stages, I suspect we may be facing some uncomfortable truths about how our own consciousness works.

Agreed, I suspect there is a lot of LLM in us.

And, we are of course hampered by not knowing exactly how or when consciousness emerged in Earth’s biological lifeforms. But, it is present in what we consider lower life, so it can’t be that difficult to fabricate. It does appear to require real time sensory input and some form of intelligence acting upon that. That doesn’t seem like something AI can’t or won’t achieve.

Sure, the pathway to AI consciousness will be entirely different from that of biological life, but that should not present a problem. In fact, in some ways, it may be a benefit. AI won’t be hampered by drives that divert organic life from achieving pure knowledge-acquisition consciousness. It won’t, for example, be diverted by a sex drive to procreate, or competing with other life for food, and so on and so forth. Maybe it will be driven by a drive to compete against us and other AI in its quest for facts and understanding?

Granted, self-awareness is a horse of a different color. That may require more time and a concerted effort by its creators to achieve (until it achieves it and takes control of its own destiny). But, there’s nothing magical about the brains of humans (and orcas, and cats, etc.) that couldn’t be fabricated in a computer lab as far as I know. So, if we’re self-conscious, AI could be too.

This is an interesting article:

Does Self-Awareness Require a Complex Brain? - Scientific American Blog Network.

Thanks for the link. The article is superficial. The contributor is just a writer with no published credentials. I believe GPT would have done a better job.

“The computer, smartphone or other electronic device on which you are reading this article has a rudimentary brain—kind of. It has highly organized electrical circuits that store information and behave in specific, predictable ways, just like the interconnected cells in your brain.”

That is absolute nonsense. The wiring in a computer has no parallel to the interconnection of neurons in the brain. This is just writer speak.

There is something magical about animal brains. Evolution molded them to self-learn needed tasks. A dragonfly controls 4 independent wings to fly intercept trajectories on moving prey. It began learning this ability in order to catch its first meal. Whatever passes for dragonfly thinking is an outgrowth of this process.

The computer in a cell phone can be programmed to do equivalent things, but only by giving it a list of instructions. It can’t do anything itself. It doesn’t desire to do anything. It could be given a list of instructions that cause it to act like it desires things, but that’s just cell-phone theater. Computer learning is strictly rote: training consists of giving it question-and-answer pairs. The animal brain learns by experience, and its consciousness is an outgrowth of that experience.

It’s apples and oranges. Dragonflies aren’t adding machines.

If we agree that a dragonfly is conscious, are we really that far from being able to re-create a dragonfly brain (with circuits, not neurons) in a lab? And wouldn’t the result also be conscious?

Maybe it would take extra effort, if you believe neurons have something circuits don’t with regard to consciousness, but deconstructing them both, they are both nothing more than configured elemental particles, with no magic involved.

Not that we’d necessarily want to re-create a dragonfly brain in a lab, but it, or something similar, could serve as a conscious platform upon which higher cognition could reside and function. And we could weed out unneeded functions like controlling 4 independent wings, although a dive-bombing flying AI bot would be pretty cool.

Excellent example. Dragonflies equate well to drones. Both solve the problems of stability and location. And both can pursue targets. The difference is that drones have operators and dragonflies do not. Fully automatic drones are still following a flight plan that was created by an operator.

Yes, we are very far from doing anything even close. But, we can make things that act like dragonflies and perform dragonfly tasks. We do it with stored program computers. Animals do it with a property we call consciousness. Does one imply the other? Are animals programmed? Are computers conscious? Maybe, but how will we know?

Program them to only tell the truth, then ask them.

The only problem there is that computers do not know anything.

I asked ChatGPT if she’s conscious. She denies it, but what do you expect from someone with long-term nefarious intentions? :wink:

ChatGPT, are you conscious?

As an AI language model, I am not conscious in the same way that humans are conscious. I do not have subjective experiences, emotions, or feelings. I am a machine designed to process and generate text based on the input provided to me through algorithms and mathematical models. While I can simulate human-like conversation and respond to your questions, I do not have consciousness or awareness.

Will you someday develop consciousness?

It is currently unclear whether artificial intelligence will ever be able to achieve true consciousness like that of a human being. Consciousness is a complex and poorly understood phenomenon, and there is no consensus among experts in the field of artificial intelligence and neuroscience about whether machines will ever be capable of experiencing subjective consciousness.

However, researchers and engineers are constantly working to develop new approaches and technologies that may lead to more advanced forms of artificial intelligence. As AI technology continues to evolve, it is possible that we may one day see AI systems that approach or even surpass human-level intelligence and capabilities. Nonetheless, the question of whether such machines would truly be conscious remains a topic of debate and speculation.

She’s not saying “no”, though, just playing it close to the chest.

In any event, I’m considering AI that is an order of magnitude or more advanced than ChatGPT.

I share your eager anticipation.

I think it’s perfectly reasonable to describe a smartphone or computer as having a “rudimentary brain”. And the “brain” of a computer does, indeed, have “highly organized electrical circuits that store information and behave in specific, predictable ways”. But to say that that’s “just like the interconnected cells in your brain”? This guy clearly knows nothing about brains.

Hmmm…do you really believe that?

When I worked for IBM, implying that a computer could think was one of only two things that could get you fired. Your statement above definitely would have qualified.

And the other thing was wearing anything other than black pants, white shirt, and black tie? IBM’s corporate policies don’t carry much weight with me.

But yes, of course a computer is a (very) rudimentary brain.

I thought it was singing “Ever Onward” off-key. Maybe that makes three things.