The next page in the book of AI evolution is here, powered by GPT-3.5, and I am very, nay, extremely impressed.

I was referring back to emulating a single human neuron with a relatively wimpy 1000 artificial neuron network. However, we might need that one for

It’s somewhat strange that the idea that consciousness is not just computation is the one most commonly alleged to have religious overtones, when the strange connection between physical computational hardware and abstract mathematical entities like numbers is itself most closely comparable to the relation between a body and a soul.

But quite apart from that, it’s not such a strange idea to have consciousness depend on non-computable faculties. I have proposed a theory where a mental state, faced with the question of whether to modify itself, is met with a computationally undecidable proposition. Similarly, the (to my knowledge) only framework provably capable of universal problem solving also turns out to be uncomputable. Also, on one of the leading scientific approaches to consciousness, Integrated Information Theory, current computers likewise won’t be conscious, no matter what program they instantiate.

The mind might not work in either of those ways, but it’s at least not obvious that it doesn’t. So why should it be so unthinkable for the mind to not be computable?
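By “computationally undecidable” I mean the familiar self-referential kind from computability theory. Here’s a minimal Python sketch of the halting-problem diagonalization, as an illustration of that general idea only, not of my specific theory:

```python
# Standard sketch of the halting-problem diagonalization: any program that
# claims to decide questions about programs' future behavior can be fed a
# self-referential case it cannot answer consistently.

def halts(program, argument):
    """Hypothetical oracle: returns True iff program(argument) halts."""
    raise NotImplementedError("No total, correct version of this can exist.")

def contrary(program):
    # Ask the oracle about the program applied to itself...
    if halts(program, program):
        while True:          # ...and loop forever if the oracle says it halts,
            pass
    return "halted"          # ...or halt immediately if the oracle says it loops.

# Feeding contrary to itself forces the contradiction: if halts(contrary, contrary)
# were True, contrary(contrary) would loop; if False, it would halt.
```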

There’s a bit of a leap between taking it on faith that current computing architecture is incapable of being the infrastructure for conscious artificial intelligence and your statement here, isn’t there?

I’m not sure who you’re claiming is taking what on faith. @Crane referred to the argument I’ve been making with—to some, I’m sure, obnoxious—regularity in these threads; I just wanted to elaborate.

When you sign up for ChatGPT Plus you get a choice of which model to use when you start a new conversation. Two different flavors of GPT-3.5, and GPT-4. When you start up a session with 4, it tells you there’s a usage cap, currently 25 outputs every 3 hours, and that might be revised downward next week depending on demand.

Chat sessions cannot be switched midstream to 4, but now there’s a helpful banner at the top saying which model number it is. If you hit the message cap during a session with 4, you’re offered a chance to switch it to 3.5, but once you accept that, there’s no going back to 4. You can tell which outputs were produced by 3.5 or 4 by the color of the OpenAI avatar next to each output.

Right off the bat, I noticed 4 is much more expressive than 3.5. I copy-pasted a few different prompts from 3.5 sessions into a new session, and 4 seems definitely smarter. The improvement isn’t as dramatic, though, as the jump I noticed between the pre-lobotomy, GPT-3-powered AI Dungeon and ChatGPT 3.5.

In the thread we had a few years ago about AI Dungeon, I posted about the lobotomy Latitude gave their custom GPT-3 model after the moral panic about people producing content that gave them and OpenAI a bad look in the press. I quit paying for AID after that point; it clearly was no longer as smart. Their free version is now downright stupid, with a severely restricted output length limit, when before the panic it was definitely usable. I’m left wondering whether anybody not already familiar with AID is somehow impressed enough with the current free version to pony up money for the bigger, more capable model.

Back to ChatGPT. While playing with 4, I noticed a weirdness with the ChatGPT UI that might have been true all along with 3.5; something others might find worth trying to test out. Sometimes when I enter a prompt, ChatGPT immediately returns an error message, and after that I can’t continue the current conversation without completely refreshing the page. Once refreshed, it takes a bit to find my way back to that same point in the saved conversation and try again.

I got so used to immediately refreshing the page when I got this error that I may have been masking the fact that sometimes the error was only in the web frontend: behind the scenes, ChatGPT was happily still producing output and storing it in the chat history tree. Refreshing cut that output off partway through, leaving it incomplete. I now think it’s good practice to wait long enough for a possible completion to finish behind the scenes before doing the still-necessary page reload to continue. That can be a relatively long, uncertain wait on the free version, depending on how slow it’s being that day.

I’ve read that the GPT-4 model is capable of producing outputs as long as 25,000 words, or maybe they mean tokens, but so far I haven’t been able to get anywhere near that length out of ChatGPT-4. It seems capped at less than 1,000 words or so, and it seems to be a hard cap: when I ask specifically for a long output, I sometimes see the output cut off mid-word.
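Side note on the words-vs-tokens distinction: OpenAI publishes the tiktoken tokenizer, so you can count tokens directly. A rough sketch, assuming tiktoken is installed and that ChatGPT-4 uses the same encoding tiktoken reports for “gpt-4”:

```python
# Rough comparison of word count vs. token count for a GPT-4-family model.
# Requires: pip install tiktoken
import tiktoken

text = "The next page in the book of AI evolution is here."
enc = tiktoken.encoding_for_model("gpt-4")   # selects the cl100k_base encoding
tokens = enc.encode(text)

print(len(text.split()), "words")    # naive whitespace word count
print(len(tokens), "tokens")         # English prose usually runs ~1.3 tokens per word
```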

ChatGPT-4 is definitely able to remember context a lot longer and deeper into a conversation than 3.5. It’s better at remembering defined complex situations and interpreting rules to produce correct results based on its current context.

So far, I think I’m getting my money’s worth. The only disappointments so far are the caps on the number of messages and on output length.

I think it’s a good idea now to mention which model version produced any outputs we post in threads.

What is strange there? The school holding that brain meat has magical properties that can’t be replicated is the one holding religious ideas.

There is supposed to be a 32,000 token version.

(Although later they might reveal that they were saying “Tolkien” all along.)

I’d set the conversion rate at something like 10,000 tokens to 1 Tolkien.

(For the uninitiated.)

The difference between analog computers and digital computers is not religious.

Oops. Thought you meant J.R.R. (which I guess indirectly you were…). Abort, abort, too confusing.

True, but what about the converse – where a computer is actually flying a real airplane? There are many different forms and modes of autopilot and autoland systems, auto-throttle systems, and computers that arbitrate between pilot control inputs and the actual aircraft control surfaces. Aren’t those computers just “throwing out numbers”, too? Yet it has vital real-world flight implications.

As already said, and as I’ve previously said myself many times, there is not – and logically cannot be – any distinction.

Excellent points. There is no intelligence in an electronic adding machine, but there arguably is in a grandmaster-level chess engine, in the Watson Jeopardy champion, or in ChatGPT, even though down within the processor chips the logic gates are doing fundamentally the same things. The difference is one of scale and organization: an appropriately organized collection of components performing logical functions, assembled on a sufficiently large scale, exhibits qualitatively new emergent functions not evident in its individual components.
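To make the scale-and-organization point concrete, here’s a toy Python sketch (purely illustrative, the names and structure are my own): every gate is the same NAND primitive, yet wired into a ripple-carry chain they add binary numbers, which no single gate can do.

```python
# Toy illustration: a full adder built entirely from one primitive (NAND).
# Each gate is identical; the new capability (addition) comes from organization.

def nand(a, b):
    return 1 - (a & b)

def xor(a, b):
    n = nand(a, b)
    return nand(nand(a, n), nand(b, n))

def full_adder(a, b, carry_in):
    s1 = xor(a, b)
    total = xor(s1, carry_in)
    carry_out = nand(nand(a, b), nand(s1, carry_in))
    return total, carry_out

def add_bits(x_bits, y_bits):
    """Add two little-endian bit lists with a ripple-carry chain of full adders."""
    carry, out = 0, []
    for a, b in zip(x_bits, y_bits):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    out.append(carry)
    return out

# 3 (bits [1,1,0]) + 5 (bits [1,0,1]) = 8 -> [0, 0, 0, 1] little-endian
print(add_bits([1, 1, 0], [1, 0, 1]))
```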

Not gonna get into this again, but he’s argued that issue by attempting to refute principles of the computational theory of mind that have become widely accepted in cognitive science. At the very least, the statement “brains are not computational” is not something that can be stated as axiomatic, even granting that not everything that goes on in our minds has a computational analog.

And yet you have presented zero evidence that this difference is relevant here, expecting us to take that claim on faith.

Your argument is convincing and I’ve come around to agreeing with you - at least part way. The observed behavior of a computer is sufficient. If it passes IQ tests, then it exhibits intelligence commensurate with its score. It is intelligent, because it passes the intelligence metric defined by expert humans.

However, it has demonstrated that the human-defined metric can be attained by machine intelligence. It does not require human thought.

Further, machine intelligence may far exceed human intelligence. Since they are two different entities, one does not place a limit on the other. The intelligence metric samples behavior, not method.

Are you saying there is no distinction between analog computers and digital computers?

But, good Babale, I am not the one making the claim that the connection is religious. The only reference to that is in your posts.

Human and machine intelligence are not “different entities” if they are indistinguishable by any test we can devise. If we throw endless Turing tests and Winograd schemas at them and simply cannot tell the difference, we eventually have to conclude that there is no difference.

I didn’t venture any opinion on this; that seems to be something you got into with @Babale. In terms of brain function, I think there’s a pretty fair consensus that higher-level cognitive functions have striking similarities to digital computation (although some researchers – many of whom are involved in neither AI nor cognitive science – still disagree). But it’s also acknowledged that computational theory alone cannot account for all of the brain’s function, and that there are many other things, like biochemical processes, going on. This has nothing to do with analog computing, though; it’s just processes that don’t fit the computational paradigm.

I was referring to the brain as an electrochemical analog computer, as distinct from a stored-program digital computer. The digital computer is computational, that is, a serial, instruction-by-instruction calculator. The brain is a parallel analog system that evaluates continuous waves.

Your posts’ contention that human and machine intelligence are indistinguishable doesn’t fall within the intelligence metric or the observable operation of both systems. They both exceed that boundary.

They can achieve intelligence by different methods.

It’s almost certainly the case that physics itself is computable (and the soggy chemistry that the human body runs on definitely is), so worst-case you can just simulate a human brain directly.

That’s not the most efficient way of going about things, but as far as I’m concerned it’s proof enough that there’s no obstacle to a computer intelligence.
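As a toy illustration of the “just simulate it” point, here’s a textbook-style leaky integrate-and-fire neuron integrated numerically in Python. A real brain simulation would need vastly richer models at vastly larger scale, and the constants below are generic illustrative values, not measurements; the point is only that the dynamics reduce to arithmetic a digital machine can grind through.

```python
# Toy sketch: numerically integrating a leaky integrate-and-fire neuron.
# Illustrative parameter values only.

tau_m    = 20.0    # membrane time constant (ms)
v_rest   = -65.0   # resting potential (mV)
v_thresh = -50.0   # spike threshold (mV)
v_reset  = -65.0   # reset potential after a spike (mV)
r_m      = 10.0    # membrane resistance (MOhm)
i_ext    = 1.8     # constant input current (nA)
dt       = 0.1     # integration step (ms)

v = v_rest
spike_times = []
for step in range(int(200 / dt)):          # simulate 200 ms
    dv = (-(v - v_rest) + r_m * i_ext) / tau_m
    v += dv * dt                           # forward-Euler update
    if v >= v_thresh:                      # threshold crossing = a "spike"
        spike_times.append(step * dt)
        v = v_reset

print(f"{len(spike_times)} spikes in 200 ms")
```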

All analog computers have noise. And a digital computer can simulate an analog computer to an arbitrary degree of fidelity. Neither one can reach zero error (i.e., storing the value of pi exactly), so however good you make your analog system, the digital system can be made better.
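To put a number on “arbitrary degree of fidelity”: digital arithmetic lets you dial precision up as far as you care to pay for in time and memory. A quick sketch using Python’s mpmath library (assuming it’s installed):

```python
# Digital precision is a knob: crank the number of decimal digits as high as you like.
# Requires: pip install mpmath
from mpmath import mp

for digits in (15, 50, 200):
    mp.dps = digits              # working precision, in decimal places
    print(digits, mp.pi)         # pi computed to that precision

# An analog voltage or frequency, by contrast, bottoms out at the hardware's noise floor.
```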