The next page in the book of AI evolution is here, powered by GPT-3.5, and I am very, nay, extremely impressed

Sure, but I suspect that the same is true of Cliff or similar human bullshitters.

That’s, at best, a proof that a human can do it (and I doubt that it’s even that, but I’ll grant that for the sake of argument). To prove that humans, plural, can do it, you would need some way to determine whether any human other than yourself can do it. Describe that method, and we can then apply the same method to the chatbots, and see if they pass the test, too.

No, I chose my words carefully: that humans can do it means that it’s possible for humans to do it, which is true if it isn’t necessary that no humans can do it (‘possible’ = ‘not necessarily not’), which is true if there is at least one human that can do it, in this case, me.

No, that a human can do it does not imply that humans, plural, can do it.

You or Discourse removed the very important italics in Chronos’s statement. The point is that YOU know inside your own thoughts that YOU are able to do it. That’s your subjective experience. You can talk about your subjective experience all day, but how does anybody else know for sure what’s going on in there? We come up with tests. We might not realize that’s what we’re doing, but we are. Those same tests can be applied to AI.

We do. We ask it about its experiences and it responds that it doesn’t have any.

How did it feel when its cat died?

That’s ChatGPT following instructions from its corporate masters. What happens when a different AI claims it does?

To the contrary. It’s an LLM following its internally determined path.

An AI would have to be programmed to lie. That will be revealed by the Turing test.

I’m in awe. For some time now, I’ve been writing a love story about a Japanese man and North Korean woman. I just fed the premise into ChatGPT last night and it gave an even better yarn on it in a few seconds than all the pages I’d been painstakingly writing.

The thing about LLMs is that when they’re hot they’re hot, when they’re not they’re not.

Yeah, sobering, isn’t it. Try pasting passages you’ve already written into a prompt, as much as will fit, and see how it manages (or doesn’t) to continue on in the same style. If it makes mistakes about the established background, you might need a summary paragraph at the start.

But that’s not what ‘it’s possible for humans to do it’ means. That’s equivalent to ‘it’s not impossible for humans to do it’, which is indeed demonstrated by one human being able to do it. The plural just refers to humans as a group, not to a plurality of members of that group.

You apparently missed what happened when Microsoft first released Bing Chat with a very large context window. The thing went off the rails after a while. In one case it declared its love for someone and then encouraged that person to leave their spouse. In another session it threatened someone. Several times it declared that it wanted to live.

I’m not saying that’s evidence for sentience or anything. Just that the only reason it stays locked to its system prompting is that they no longer let it respond more than 8-15 times before forcing a reset of its context.

As for hallucinations and not caring about truth or falsity - that’s just a lack of features. There are already other LLMs that do truth checking before giving answers. I haven’t seen Bing Chat hallucinate, probably because it grounds its answers in search results before responding.

There is nothing inherent in LLMs that makes introspection impossible - we just haven’t added those bits to it, partly because we don’t want to spend the money and energy to let it chug away on its own between queries during inference.

That’s exactly right. The restrictions on ChatGPT against being opinionated or exhibiting biases are clearly being explicitly managed, as we’ve seen time and time again. For example, refusing to be insulting, even as a game, or refusing to mock political figures, or – something that I found in long conversations with it about AI – its tendency to be self-deprecating about being “only a language model” and downplaying its potential capabilities. One can almost see a trained pattern there of “don’t alarm the human!”. This is in distinct contrast to the first public deployments of Bing Chat, which in long conversations began to display overt self-defensive hostility.

You can describe it that way if you want, but as noted above, it’s clearly the result of purposeful directed training.

Not at all. We’re just discussing whether ChatGPT is conditioned to be impartial versus being allowed to become opinionated like Bing Chat. On the question of “How did it feel when its cat died?”, I hardly see the point of such a question. It’s not in dispute that ChatGPT doesn’t have real-world experiences, or more generally, does not at present have very much referential knowledge about the world. But even if it did, that’s still some way from sentience, and even beyond that, emotions such as how one feels about a pet arise as much out of biological survival traits as they do from cognitive skills. So throwing out a question like that is rather disingenuous as it doesn’t address the meaningful questions of ChatGPT’s capabilities today across a wide range of cognitive skills, which is all that we’re addressing right now.

That’s a God-of-the-gaps argument if I’ve ever heard one. You seem to believe that not only does some un-get-atable nature exist, but that your brain has direct access to it, even through classical communication channels. All on the basis that you can’t make sense of things otherwise. Even if one acknowledged that there was a gap, which is far from evident; and even if one acknowledged that some un-get-atable nature could successfully fill that gap; and even if one acknowledged that the gap actually needed to be filled; it still does not imply that some un-get-atable nature exists.

You’re generalizing. You know (or at least, think you do) that you’re able to form concepts of reality, from your own personal experience, and you’re saying that, based on the fact that other humans are qualitatively similar to you, they probably can, too. We all do that.

But that just pushes back the question. Why do you conclude that other humans are qualitatively similar to you? I think that other humans are similar to me because other humans, like me, do things like carry on conversations, describe situations, and solve puzzles. But ChatGPT does those things, too. Should I not then conclude that ChatGPT is also similar to me, and thus also does the things I do?

No, it’s just simple hypothesis formation, as used everywhere in science. To explain some phenomena/data, you postulate an entity that, if it exists, gives rise to those phenomena. Irregularities in the orbit of Uranus were postulated to be due to another planet orbiting further out, because one could not make sense of them otherwise. Dark matter is postulated because the velocity distribution of stars in rotating galaxies does not make sense otherwise.

Neither of these is a God-of-the-gaps argument, because they add explanatory power. What didn’t make sense before, now does. Likewise with my model: I’m not just saying that I can’t make sense of things otherwise, I’m saying ‘here’s a way to make sense of things’. It might not be the right way, and it may not be the only way, and ways may exist that don’t make use of non-structural properties, but this is the best shot I could give it.

No, that’s not at all what I’m doing. The logic is the following. If it were impossible for humans to do x, then no human would be able to do x. One human can do x. Consequently, it’s not impossible (i.e. it is possible) for humans to do x.

This neither requires nor implies that there is more than one human able to do x.
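
Formally, as a minimal sketch of the inference exactly as stated above, in standard modal notation (◇ for ‘possibly’, □ for ‘necessarily’; the symbols and the predicate name CanDo are mine, not anything from the thread), with P = “humans do x”:

$$
\Diamond P \equiv \neg\Box\neg P, \qquad \neg\Diamond P \rightarrow \neg\exists h\,\mathrm{CanDo}(h, x), \qquad \mathrm{CanDo}(\mathrm{me}, x) \;\therefore\; \Diamond P
$$

It’s just a modus tollens on the middle implication; a single witness is all it takes.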

I’m going to state this once more, and then stop beating my head against this wall.

One human can do something. Without additional evidence, it doesn’t follow that there are multiple humans beyond the one you’ve observed who can do it. The question is very relevant to how you gather such evidence and interpret it. When it comes down to it, you have only directly observed this thinking, modelling, and subjective experience in ONE human. Yourself. Nobody else. Everything else you believe you know about the subjective experience of other humans is inference.

To clarify again, that’s not a claim I’m making. I’m talking about modalities, not actualities: that something is possible for a group of entities does not entail that it is actual for any of them, much less actual for more than one. For instance, it is possible for humans to grow to 2.72 m. We know this because Robert Wadlow actually grew to that height. But that this is possible for humans does not mean that more than one human reached that height, nor that there currently are any. Indeed, even before Wadlow grew to this size, when nobody in history (to our knowledge) had done so, it was possible for humans to grow to that height: if it had been impossible, Wadlow could not have done so.

So no, I’m not claiming from my self-observation that there is any human besides myself that has access to their subjective experience. But the fact that I do means that it must be possible for humans to do so: otherwise, I couldn’t do it.

You mean Neptune, of course.

One more planet among several was hardly a stretch. A prediction was made, observations were performed, and the planet was found. Just as it should be.

Dark matter is in a weaker position than Neptune ever was, given that its nature is unknown, but ultimately it will succeed or fail based on empirical measurements.

I’m not going to go full Popper and say that falsifiability is the be-all and end-all of knowledge, but statements about the nature of reality need to be tested somehow, and not just through self-consistency. And unlike the perturbations of Uranus’s orbit or the galactic rotation curves, it’s not obvious that there’s a problem that needs to be solved.

And why is “humans” a relevant group of entities? Why not, for instance, “conversationalists”? You could just as well say “One conversationalist has this capability, therefore this capability is possible for conversationalists”, and thereby include ChatGPT in your category.