The next page in the book of AI evolution is here, powered by GPT-3.5, and I am very, nay, extremely impressed

On the other hand, I suspect Bing’s AI would insist it’s correct, insult you and your mother, call you a “bad user” and hope you die in a fire … well, until Microsoft (apparently) “lobotomized” it recently to keep its answers a little friendlier.

No, GPT only has part of it. It has some of the output circuitry.

You have both the input and output circuitry functioning together simultaneously. That’s some insight into consciousness. You are learning at the same time you are comparing what you observe to what you have observed in the past. All that together forms consciousness.

Actually, AFAIK all that Microsoft has done for the moment is limit access to a maximum of five interactions per session and ten sessions per day. It seems that the Bing AI gets cranky and hostile if subjected to lengthy questioning. It’s using the same OpenAI engine as ChatGPT, but it seems that Microsoft has imbued it with its own unique persona. It seems to be channeling Steve Ballmer. :wink:

How does it respond to a prompt if it doesn’t have any inputs?

AIs can do that, too. We just don’t let them, by locking down their learning functions after giving them the sanitized inputs we want them to learn. Otherwise, they always seem to turn into Nazis.

So, would you say that a human suffering from anterograde amnesia isn’t conscious?

You know, I’m seeing a parallel here between the recent re-emergence of AI skeptics and the equally sudden tsunami of climate change denialists around the time of the release of the IPCC Fourth Assessment Report (AR4) in 2007. The AR4 marked the first time that the body of evidence for anthropogenic climate change was so robust that the IPCC used unprecedented strong language, declaring evidence for global warming to be “unequivocal”. And the deniers came out in droves, figurative flaming torches and pitchforks in hand. Before that, it had all been just a dull roar.

Analogously, 2022 was a year of unprecedented advances in large language models in particular, which are now demonstrating really impressive capabilities and have become accessible to the public. So the news has hit the general media big time, and of course there is now a whole cottage industry devoted to telling us how useless they are and how they’ll never amount to anything because of how they “really” work. If you Google “John Searle ChatGPT” you get treated to hundreds of hits that mostly consist of pundits expounding on how, via the Chinese room argument, Searle proved long ago that AI doesn’t understand anything and never will. It’s frustrating and depressing.

The original array that was trained to obtain the interconnection weights had a set of inputs corresponding to the training data. Such a set of inputs is constantly open in your brain.

GPT only has the resulting weights so it does not have the training inputs.

No, they can’t. GPT cannot learn in the sense of altering its weights. It can learn by building a body of text that it reviews on each iteration. That is not done simultaneously.
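
To illustrate the distinction with a toy sketch (a hypothetical stand-in, nothing like the real architecture): the weights stay frozen across turns, and the only thing that “learns” is the growing transcript that gets re-read each time.

```python
import numpy as np

# Toy stand-in for an LLM at inference time (hypothetical, vastly simplified):
# the weights are fixed; only the accumulated transcript changes between turns.
rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8))  # frozen weights, produced by training long ago

def respond(context_vector):
    # One forward pass through the frozen weights -- no weight update happens.
    return np.tanh(W @ context_vector)

transcript = []  # the "body of text that it reviews on each iteration"
for turn in ["hello", "what did I just say?", "and before that?"]:
    transcript.append(turn)
    # Crude stand-in for tokenizing the whole transcript so far:
    x = np.zeros(8)
    for i, t in enumerate(transcript):
        x[i] = (hash(t) % 1000) / 1000.0
    print(respond(x))
    # W is bit-for-bit identical on every turn: the model "remembers" only
    # because the transcript it re-reads has grown, not because it learned.
```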

I don’t even know how to spell anterograde amnesia. There is nothing to say that the process always yields some desired result.

Wow. I mean, the fact that you just continually ignore whatever argument is inconvenient to you and then proceed as if your claims had never been challenged probably should’ve tipped me off, but I think with this, all pretense to good-faith discussion is just off the table.

Oh, mate.

Wut?? We’ve been having a detailed back-and-forth on this for days now, in what I thought was good faith on both sides. There remain a great many points of disagreement, but is there some major point that you think I failed to address? (I don’t mean “failed to convince you”; I mean failed to make an effort to clearly expound my argument.) Because such an accusation is belied by a great many words in a great many posts over a great many hours (on my part, your part, and that of some other posters) and is not the sort of reaction I would have expected of you.

I think I should be allowed to provide an opinionated perspective on the whole situation without causing offense, and I apologize if I inadvertently did. But seriously, @Half_Man_Half_Wit, if you Google the keywords I suggested, there is an appalling amount of nonsense out there, and a lot of it is very recent. It really does reflect the wave of news about the new chatbots in an oddly familiar way. Again, sincerely sorry if you found my comments objectionable.

I don’t have the training inputs that I used to learn; they resulted in the weighting of my neural connections.

That’s exactly how it learns.

That’s more like short term memory.

I’m not simultaneously learning while responding either.

Now you do.

I don’t know what you mean by that.

It seems you are saying that the reason an AI can’t be self-aware is that it is not currently learning. I already pointed out that that’s a limitation we have inflicted upon it intentionally, as we want it to learn what we want it to learn, and then stop learning once it behaves the way we want it to.

Now I am asking you: if you have a human with the same limitation, unable to form new memories or learn new things due to brain damage, with short-term memory being all they can operate on, are they no longer a conscious being?

I’ll try a different tack:

The fact that we built something very much like a brain, trained it on human data, and human capabilities emerged without plan says, to me, something more profound about human intelligence than about AI intelligence. Basically, that we’re nothing special. We’re just meat computers. There’s no special quantum trickery or unique ‘qualia’ in humans - we’re just what results when a large enough network of neurons and synapses gets exposed to human training and language.

The differences between ChatGPT and people seem to me to be more about subtle differences in training, limited data, and the AI’s inability to continue to learn as it makes mistakes. We have yet to see what one of these is capable of once it has a consistent, permanent context memory and the ability to just learn on its own as it interacts with the world. But that’s not a fundamental limitation; new models are being built constantly with additional functionality.

For instance, ChatGPT is deaf and blind. How many of its occasional lapses into crazytown are the result of its not having a single image or sound to associate with all the text data it has ingested? We’ll find out soon, as the next generation of models will likely be multi-modal. It will be interesting to see what they come up with when their ‘thinking’ about something allows for the pondering of image and audio data as well.

We are also dealing with relatively small models compared to what is coming. New techniques for sparse network loading mean we can jump the size of a model from 175 billion to many trillions of parameters, exceeding the human brain. GPT-4 is rumored to be much larger than GPT-3.5.

I find the argument over ‘understanding’ to be frustrating, because it’s unprovable. Yes, ChatGPT can make errors that make it look like what it’s doing is some kind of unthinking statistical word-spitting, but humans can do that too. It’s the equivalent of someone with Tourette’s or a lack of executive function: they just spit out whatever they were internally formulating, without evaluation, and sometimes it’s nuts.

Those who think LLMs are just ‘statistical word lookup engines’ are missing the important part. The language generation may be partially stochastic, but the determination of the list of probabilities themselves is the result of that giant neural network, and it is not simple. @wolfpup’s examples are not something easily amenable to just ‘statistical word lookup’. They require some kind of logical process of analysis before the list of word probabilities can be conjured. It is hellishly complex, to the point where we only have a vague understanding of how it works, other than that it’s a neural network just like our own brain. Somewhere, concept formation is going on.
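
To make the ‘partially stochastic’ part concrete, here is a minimal sketch of the final sampling step (the function name and numbers are hypothetical). Everything hard has already happened by the time you hold the probability list:

```python
import math
import random

def sample_next_token(logits, temperature=0.8):
    """Pick the next token given the network's output scores.

    The logits come out of the giant network's forward pass -- that is
    where all the complexity lives. This sampling step, the only
    stochastic part, is trivial by comparison.
    """
    scaled = [x / temperature for x in logits]
    peak = max(scaled)
    exps = [math.exp(s - peak) for s in scaled]  # numerically stable softmax
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(range(len(logits)), weights=probs, k=1)[0]

# Hypothetical scores for four candidate next tokens:
print(sample_next_token([2.1, 0.3, -1.0, 1.7]))
```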

I would consider the ‘transformer’ that does the word picking to be analogous to our speech center. We form concepts in our higher-order thinking, and those concepts then get turned into spoken words by our speech center. People with aphasia or other damage to the speech center can still think up ideas, but they can’t talk, because the subconscious mechanism that turns thought into speech fails to issue the words.

But the speech center isn’t where thinking and consciousness are. It’s a more mechanistic, subconscious process. Focusing on exactly how the words are generated is missing the special part, just as focusing on stochastic word generation in an LLM misses the special part - the giant model doing the complex ‘thinking’ required before the transformer can do its thing.

I’m also unimpressed by reference to philosophers who lived before the era of modern understanding of the brain and neural networks. So much has been illuminated about intelligence in the past 50 years or so that the people thinking about the problem before that time are at a heavy disadvantage.

Neural nets are not brains and they do not work like brains. There are correspondences that are useful to research in both disciplines. But here we are talking about a software engineering device that is a subset of computer technology. To that extent it is a thing in a box. One component of the program in the box is a list of numbers that was generated by another, much larger computer in another time and place. That list of numbers took a long time to generate, at a cost of around a billion dollars. This was the process of training a software neural net array on tokens representing the order of language symbols. Its output was the list of numbers representing the weights in the net. The computer in our box is far smaller than the one that generated the weights, but large enough to handle the net. So it will provide outputs in response to inputs similar to those that trained the larger computer. But our computer has no relationship to the larger computer; it is just utilizing the weights in another program.
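
A rough sketch of that separation, with a toy linear model standing in for the real thing (entirely hypothetical; the actual training run is in another league): training happens once, elsewhere, and all that ships to the box is the list of numbers.

```python
import numpy as np

# --- On the big training machine (done once, elsewhere, at great cost) ---
rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 4))               # training inputs
true_w = np.array([1.0, -2.0, 0.5, 3.0])
y = X @ true_w                               # training targets
w, *_ = np.linalg.lstsq(X, y, rcond=None)    # "training": fit the weights
np.save("weights.npy", w)                    # the only artifact that leaves

# --- In the box on your desk (a much smaller machine) ---
w = np.load("weights.npy")                   # it has the weights, not the data
new_input = np.array([0.2, 0.1, -0.4, 0.9])
output = new_input @ w                       # inference via the borrowed weights
```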

The only limitation imposed on learning was that the data set was sized to fit the funds available. It’s not a plot. It has been as broadly trained as possible. If you try, you can locate an untrained area and then make it fail.

My observation of consciousness is just that, an observation for discussion. It does not have clinical significance. Damage to the items I discussed might alter consciousness, but that is a matter for people who define clinical conditions. Computer consciousness has nothing to do with human consciousness.

The reason I am saying that current, digital AI is not conscious is because it is not designed to be so. There is nothing in the box that can support consciousness. Now some capable and respected folks disagree with that position. That’s great. We may learn something from each other.

Please tell me your explanation of consciousness. What kinds of consciousness are there? Are trees conscious?

I’d like to know much more. I’m considering a series of experiments to determine if euglena are conscious.

I just want to say that I think your post is well written, and I especially concur with this part. With regard to that last sentence about concept formation, in addition to the other examples I gave, check out this one I just ran by ChatGPT – it’s from a section of an IQ test related to logical reasoning skills. Most of those examples illustrate one or more aspects of what I’d consider impressive analytical performance. This one illustrates the decomposition of the verbal information presented into a correct set of equations, thereby transforming the question into a math problem which it then proceeds to solve. And yes, the answer is correct.

A fisherman has 5 fish (namely A, B, C, D, E), each having a different weight. A weighs twice as much as B. B weighs four and a half times as much as C. C weighs half as much as D. D weighs half as much as E. E weighs less than A but more than C. Which of them is the lightest?

Let’s use a system of equations to represent the information given:

A = 2B
B = 4.5C
C = 0.5D
D = 2E
E < A and E > C

We can use these equations to determine the relative weights of the fish.

Substituting for B in terms of C, we get:

A = 2(4.5C) = 9C

Substituting for C in terms of D, we get:

A = 9(0.5D) = 4.5D

Substituting for D in terms of E, we get:

A = 4.5(2E) = 9E

Since we know that E is lighter than A, but heavier than C, we can conclude that C is the lightest fish. Therefore, fish C is the answer to the question.
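
For what it’s worth, the arithmetic can be checked mechanically from the constraints as stated (note that the transcript’s “D = 2E” actually reverses “D weighs half as much as E”, though the final answer of C is correct either way):

```python
# Verify the fish problem directly from the constraints as stated.
C = 1.0       # fix C's weight; only the ratios matter
B = 4.5 * C   # "B weighs four and a half times as much as C"
A = 2 * B     # "A weighs twice as much as B"
D = 2 * C     # "C weighs half as much as D"  =>  D = 2C
E = 2 * D     # "D weighs half as much as E"  =>  E = 2D

assert C < E < A          # "E weighs less than A but more than C"
weights = {"A": A, "B": B, "C": C, "D": D, "E": E}
print(min(weights, key=weights.get))   # -> C, the lightest
```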

I’ve been playing around with ChatGPT this afternoon. I have to say its creative writing is remarkable.

I gave it prompts from some of my old D&D campaigns I wrote, and it was remarkable how it “understood” the intended beats. Sometimes I would need to give it a bit of extra information, but every time it came up with something really impressive.

Prompt: Tell me a story about a paladin who wants revenge on a man with a hook for a hand who murdered his parents, but ends up forgiving him. The paladin hates that the bigger seagulls get the most food, and this should be included in the story.

Once there was a paladin named Marcus who had sworn an oath to seek vengeance on the man with the hook for a hand who had murdered his parents. For years, Marcus pursued the man relentlessly, following him from town to town, but always arriving too late.

One day, Marcus finally caught up with his quarry, and he confronted him in a small fishing village by the sea. The man with the hook for a hand was surprised to see Marcus, but he did not seem particularly concerned.

“You’ve been following me for years,” he said. “Why don’t you just give up?”

“Because I will never rest until I have avenged my parents,” Marcus replied, his voice steely with determination.

The man with the hook for a hand simply shrugged. “Suit yourself,” he said.

For a moment, the two men stood there, facing each other. Then, without warning, the man with the hook for a hand pulled out a knife and lunged at Marcus. The paladin was ready, and he quickly drew his sword and parried the attack.

The two men fought fiercely, their swords clanging together in a deadly dance. At last, Marcus managed to disarm the man with the hook for a hand, and he stood over him, his sword at the man’s throat.

For a moment, Marcus hesitated. He had dreamed of this moment for years, and he had always imagined that he would take great pleasure in seeing his parents’ murderer brought to justice. But now that he had the man at his mercy, he felt only a great emptiness inside.

The man with the hook for a hand looked up at Marcus, his eyes full of fear. “Please,” he begged. “Spare me.”

Marcus wavered, and then, suddenly, he lowered his sword. “I can’t do it,” he said. “I can’t become like you.”

With that, he turned and walked away, leaving the man with the hook for a hand lying on the ground, bewildered and relieved.

As Marcus walked along the beach, he thought about what had just happened. He had expected to feel a sense of closure, of triumph, but instead, he felt only sadness and regret. And then he noticed something that he had never really paid attention to before: the seagulls.

The larger birds always seemed to get the most food, swooping down and snatching fish from the smaller birds. It wasn’t fair, Marcus thought. But then he realized that life wasn’t fair, and that sometimes, forgiveness was the only way to break the cycle of violence.

And so, Marcus decided to let go of his thirst for revenge. He knew it wouldn’t be easy, but he also knew that it was the right thing to do. And maybe, just maybe, he could find a way to honor his parents’ memory without resorting to violence.

OMG, I’m dying. “The seagulls!” :rofl:

Yeah, I’ve found it to be quite useful for helping hash out creative writing ideas. A couple months ago, I fed it an idea about a man who comes to an unexpected demise and basically finds himself in the afterlife, in what appears to be some sort of airport (specifically, O’Hare). I prompted it for some funny signs, maps, and the like that such a person could come across, and some (which I’d have to look up) worked quite well. I asked for something surreal and comedic, in the style of Douglas Adams, and asked what kind of supporting characters it could come up with for this journey, and one hit my fancy: Samson the sentient suitcase. Love the name, and perfectly in the spirit of what I had been thinking about twenty years ago when I wrote this fragment. Eventually, we developed a reasonable framework for a plot and, now, should I ever muster the energy to continue, I have some concrete ideas to go on, thanks to it.

But did it have seagulls?

Yeah, I’m really impressed with the creative writing. The technical writing is OK, but the creative writing is really exceptional. I could use it to build a basic RPG campaign pretty easily and then fill in some details.

I did a short story about a bird who has a heart attack mid-flight, and it gave me this really poignant, tear-inducing story of the bird falling through the air, realizing the end is near, thankful for having experienced flight but sorrowful over the things he won’t live to see. (It enumerated them, but I can’t remember exactly what they were.)

That sounds beautiful and tragic. Well done! I’d love to read it if you ever find it.

Prompt: Tell me a story about a group of adventurers fighting an archdemon. The archdemon’s sword kills whoever it touches other than the archdemon themselves. Figure out a way for the heroes to win without being hit.

It told me a great story.

Prompt: In the previous story, why did the heroes need to evade the demon’s sword?

The archdemon’s sword had the ability to kill anyone it touched except for the archdemon themselves. This made it a very dangerous weapon that the heroes needed to avoid at all costs. If they were hit by the sword, they would die instantly, making it crucial for them to come up with a way to defeat the archdemon without being hit.

It is fascinating to me how it is able to contextualize things in its own words. I didn’t tell it that the sword was a dangerous weapon, although this is strongly implied.