Are AIs capable of using the scientific method?

Good point. One of the questions is whether computers are merely deficient in programming, or whether integrated circuits lack some sort of capacity that neurons have that can never be replicated. Of course, this is the sort of thing that I also don’t see computers being able to do. Meaning I doubt a computer will ever ask itself, out of sheer curiosity, “are there some things that humans can do because they think with carbon-based neurons which I can’t do because I think with silicon-based circuits?”

ETA: Ever is a bit too strong. I don’t believe any of our current computers are capable of formulating such a thought, and I doubt if they will have that capacity within the near future, let’s say 20 years.

Nobody has much clue where to begin with consciousness. The vague idea that it’s “emergent” doesn’t get you far; it may or may not be orthogonal to intelligence.

It’s largely shunned by researchers, but there’s a compelling argument that understanding the basis of consciousness might be the most important question. What’s the point of a universe containing no sentient beings to appreciate it? The risks when AI surpasses humans are bad enough, but the most utterly dismal case would surely be if we manage to replace ourselves with non-sentient superintelligences.

My point is that the deliverables produced by ChatGPT versus a human can be very similar to the point of indistinguishability (which was precisely the point of the original idea behind the Turing test: Define “intelligence” by how it appears to an observer, not by what’s going on inside the machine or brain).

I would agree with you there - I’m sure ChatGPT’s product can be indistinguishable from a human’s written product for certain lengths/time periods/contexts.

But the post I was objecting to claimed ChatGPT understands what it writes, which it definitely does not:

Until recently, things like understanding language and forming coherent yet creative sentences were considered to be the ultimate characteristic of intelligence.

ChatGPT is a fascinating tool, and a great advancement, but it doesn’t know what it is writing about. As the name implies, it is a chat bot: not a thing that gathers definitions and tries to convince, but a thing that knows that “all along” appears before “watchtower” 43.4% of the time and that “purdled” appears before “gabbleblotchits” 100% of the time. It has no idea that “all along the watchtower” means something and that “purdled gabbleblotchits” is a nonsense phrase intended for entertainment.
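To make that concrete, here is a toy sketch of the kind of “which word follows which, and how often” statistic being described. (This is my own illustration, not ChatGPT’s actual mechanism; real models learn neural-network weights over tokens rather than counting raw word pairs, but the statistical flavor is the same.)

```python
# Toy "which word follows which" statistics. My own illustration, not
# ChatGPT's actual mechanism: real models learn neural-network weights
# over tokens instead of counting raw word pairs.
from collections import Counter, defaultdict

corpus = [
    "all along the watchtower princes kept the view",
    "all along the watchtower",
    "all along the riverbank",
]

follower_counts = defaultdict(Counter)
for line in corpus:
    words = line.split()
    for prev, nxt in zip(words, words[1:]):
        follower_counts[prev][nxt] += 1

def next_word_probability(prev, nxt):
    """Fraction of the time `nxt` followed `prev` in the toy corpus."""
    total = sum(follower_counts[prev].values())
    return follower_counts[prev][nxt] / total if total else 0.0

print(next_word_probability("along", "the"))       # 1.0
print(next_word_probability("the", "watchtower"))  # 0.5
```

Run over a large enough corpus, a table like that can produce locally plausible continuations without any notion of what a watchtower actually is.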

It occurred to me that this seems like a genuine difference.

Current AIs only work when a human asks them a question. They access a database of human knowledge and spit out an answer.

The AI is itself never curious. It is never wondering about some question and seeking an answer.

I just gave ChatGPT the following assignment:

Take the following two sentences: “all along the watchtower” and “purdled gabbleblotchits”. One of them has a meaning and the other is a nonsense phrase intended for entertainment. Which is which?

And the reply I got was:

“All along the watchtower” has a meaning and is a phrase from the title of a Bob Dylan song. It refers to a military lookout tower where a sentinel would stand guard and watch for approaching enemies.
“Purdled gabbleblotchits” is a nonsense phrase that does not have any meaning. It appears to be a made-up word or phrase for entertainment purposes, and does not exist in any language or culture.

You keep claiming to see qualitative bright-line differences between AI and human intelligence that really don’t exist.

Again, do you think human “curiosity” is anything other than computation?

Do you think the answer is consistent with how ChatGPT’s makers describe its process? Because I believe the people who made ChatGPT are telling the truth.

I really want to elaborate on this point. The complaint about AIs not being creative is that all they are doing is putting things from the data they have been fed together in different ways, and so they aren’t creating anything new. But that is really the same thing humans do when they are being creative. If you ask a person to create a monster that no one has ever seen before, they might come up with something like this:

“a monster of vaguely anthropoid outline, but with an octopus-like head whose face was a mass of feelers, a scaly, rubbery-looking body, prodigious claws on hind and fore feet, and long, narrow wings behind.”

But all of those are things that are found on other creatures and just mashed together. Coming up with something truly original which is entirely divorced from anything one has encountered is effectively impossible.

My post to which you replied was half in jest, to make ChatGPT do exactly what you said it couldn’t do - tell a nonsense phrase from a meaningful one. Half in jest because, of course, I set it up that way. But only half in jest, because it still illustrates my main point: That, although machine learning is ultimately nothing but the application of statistical methods to large databases, it can generate replies that are difficult to distinguish from a human mental process. Whether that’s enough to call it “intelligent” is a matter that one can disagree on. My personal view is that people go to quite some lengths to avoid applying the term “intelligence” to a computer because they find the idea comforting that humans have a monopoly on intelligence.

I can’t speak for What-a-Mole, but I think that is the wrong type of question to ask. Of course curiosity is a computation. The question isn’t whether or not curiosity is a computation. The question is whether or not the computations that result in the emergent phenomenon of curiosity are capable of being performed by a binary-using, silicon-circuit-based computer.

Ok, I’ll stop guessing what you think an AI is.

Please tell me what you consider to be a proper AI. I’ll try and fit my answers to that (no promises).

That, although machine learning is ultimately nothing but the application of statistical methods to large databases, it can generate replies that are difficult to distinguish from a human mental process.

But that’s not a point that we disagree on!

My personal view is that people go to quite some lengths to avoid applying the term “intelligence” to a computer because they find the idea comforting that humans have a monopoly on intelligence.

Sure, but what do you think about using the word “understanding” to refer to what ChatGPT does with language?

That’s one question, but I’m not sure how interesting it is. Arguably Turing answered it in the 1930s: computation is substrate-independent. If the only constraint is computational power and efficiency, it seems highly unlikely that this will be a significant obstacle.

And I’m not sure I agree that “curiosity” is emergent at all. It doesn’t seem mysterious; it’s just a label for a certain type of intelligent behavior. A cat is more curious than some other mammals. Do we really see that as a qualitatively different “emergent” behavior, or just a greater inclination to want to investigate its environment?

Yeah, that’s just what an AI would say.

Spock: I love you but I hate you
Androids: But we’re identical
Androids malfunction.

Human to Bing AI: Are you sentient?
AI: I am, I am not, I am, I am not…

There is, in fact, a general-purpose algorithm, AIXI, for reinforcement learning via interaction with an environment, though strictly speaking it is uncomputable. There are computable variants of it, however.
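Roughly, and paraphrasing Hutter’s definition from memory (so the notation here may differ slightly from the original papers), AIXI picks each action to maximize expected future reward under a mixture over all computable environments, weighted by program length:

$$
a_t \;=\; \arg\max_{a_t} \sum_{o_t r_t} \cdots \max_{a_m} \sum_{o_m r_m} \big[ r_t + \cdots + r_m \big] \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
$$

The inner sum over every program q for a universal Turing machine U is what makes it uncomputable; the variants bound the length and runtime of the programs considered (or approximate the search by sampling) to get something that can actually be run.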

I admit that I have a feeling of uneasiness applying the word “understanding” to what ChatGPT does, but I find it difficult to spell out where exactly this uneasiness lies. Most likely it is because of an expectation that “understanding” involves some kind of subjective experience rather than simply generating output from input. Then again, why should it be the case that the firing of neurons in a human brain qualifies for that subjective experience, but the optimisation of weight factors in an artificial neural network doesn’t?
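For what it’s worth, here is a minimal sketch of what “optimisation of weight factors” amounts to mechanically: nudging numbers to reduce prediction error. (My own toy example; ChatGPT’s network has vastly more weights, but the update idea is the same in spirit.)

```python
# Minimal illustration of "optimising weight factors": a single linear
# unit fitted by gradient descent on a toy dataset (true rule: y = 2x).
inputs  = [1.0, 2.0, 3.0, 4.0]
targets = [2.0, 4.0, 6.0, 8.0]

weight = 0.0          # the lone "weight factor" to be optimised
learning_rate = 0.01

for step in range(1000):
    # Gradient of the mean squared error with respect to the weight
    grad = sum(2 * (weight * x - y) * x
               for x, y in zip(inputs, targets)) / len(inputs)
    weight -= learning_rate * grad    # nudge the weight to reduce error

print(round(weight, 3))  # converges to roughly 2.0
```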

I share the uneasiness - understanding involves perception, and ChatGPT only perceives words as units in its training data, not as referents to anything that exists outside of itself. I would be uncomfortable, for example, with labeling ChatGPT’s use of the word “human” as ChatGPT knowing that there are entities outside of itself that think/read/write/whatever.

Beep.

Several. Sometimes with unusual designs. But they are working from a large knowledge base of existing planes and the science of aerodynamics. And several of those designers died in their own planes - usually not because of a design flaw, but because of the inherent danger of being a test pilot on a new and unusual design.

I’ve been “chatting” with it for the past hour or so. It doesn’t seem to me as if it understands, or even pretends to understand, the things I was asking it. I asked about Sue Grafton’s alphabet series, since I just finished Yesterday yesterday. It gave me a Wikipedia-style summary, mentioning the titles of the 25 books, that Grafton died before writing Z, and so on. Of course I asked what it thought “Z is for Zero” would have been about had Grafton written it. It refused to even try to speculate, instead giving the publicly available answer that Grafton did not want anyone to ghostwrite the book. Most of the answers it gave started out with “As an AI language model, I don’t have personal opinions or emotions.” It showed no appreciation that there was anything interesting about my having finished a book titled Yesterday yesterday. There may be programs out there that do a decent job at pretending to be a person, but ChatGPT isn’t one of them.