The next page in the book of AI evolution is here, powered by GPT-3.5, and I am very, nay, extremely impressed

I found this Twitter thread really interesting in terms of what ChatGPT does and doesn’t understand:

Almost everything in the above image is nonsense. The paper does not exist, and therefore was not written by North and Thomas and was not published in the Journal of Economic History in 1969 or any other year. As it does not exist, it does not present a theory of how economic institutions evolve over time and how they affect economic performance. The non-existent paper has not been cited one time, let alone over 30,000. It is not considered by anyone to be anything other than non-existent. The entirely fictitious paper has contributed nothing to our understanding of literally anything.

So, why has ChatGPT given this response? Because it is trying to produce text which looks like the kind of answer typically given to this kind of question, using terms that occur frequently in its training data.

(The full thread will give more detail)

So, if ChatGPT understands stuff in the sense that it relates symbols to entities in the world:

To what entity in the world does it relate the text string ‘“A Theory of Economic History” by Douglas North and Robert Thomas, published in the Journal of Economic History in 1969’?
To what entities in the world does it relate the text string “cited over 30,000 times by Google Scholar”?
To what entity in the world does it relate the text string “Google Scholar”?
To what entity in the world does it relate the text string “economics paper”?
To what entity in the world does it relate the text string “the field of economic history”?
To what entity in the world does it relate the text string “1969”?

It doesn’t know or care what any of these things are. It just produces words that go well together. Sometimes - amazingly, impressively often - these words do in fact correspond with the real world. But that is not because of any understanding on ChatGPT’s part. It doesn’t try to relate either the question or its answer to anything other than themselves - a series of symbols that it puts together in ways that minimise a loss function.
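To make the “minimise a loss function” point concrete, here is a toy sketch in Python. It is nothing like ChatGPT’s actual implementation (the real thing is a large transformer trained on vastly more text; the corpus and bigram counts below are made up purely for illustration), but it shows the shape of the objective: predict the next word from the words before it, and be scored only on how well you predict, never on whether the result is true.

```python
# Toy illustration of next-token training: a bigram model scored by
# cross-entropy. Nothing here ever consults the world; the only
# signal is which words follow which in the training text.
import math
from collections import defaultdict

corpus = "the paper was published in the journal of economic history".split()

# Count how often each word follows each other word.
follows = defaultdict(lambda: defaultdict(int))
for cur, nxt in zip(corpus, corpus[1:]):
    follows[cur][nxt] += 1

def next_word_probs(word):
    total = sum(follows[word].values())
    return {w: c / total for w, c in follows[word].items()}

# The loss being minimised: average negative log-probability of the
# actual next word. Low loss means "words that go well together",
# and nothing more.
pairs = list(zip(corpus, corpus[1:]))
loss = -sum(math.log(next_word_probs(c)[n]) for c, n in pairs) / len(pairs)
print(f"average next-word loss: {loss:.3f}")
```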

Put pithily here by Neil Gaiman, in response to ChatGPT producing what looks like a detailed textual analysis of the Sandman series but in which it refers to Sandman text which simply doesn’t exist.

He didn’t. He used your exact words to show that you are pushing an unfalsifiable theory using no evidence. Just like you keep saying everyone else is. Have fun, then.

Somewhere near the beginning of this thread someone compared ChatGPT to Cliff Clavin from Cheers in its ability to completely fabricate BS when it doesn’t “know” an answer. This behavior has been observed from the beginning and is why I tell people to be very careful about putting questions of fact or Googlable stuff to it: that’s not what it is good at (yet). Sometimes it will hit the answer; often it will just make up complete BS.

Do you have a suggestion for how one would go about testing for a difference between an advanced AI model with vision and audio capabilities as well as language processing - like PaLM-E - to determine if it possesses consciousness?

What physical processes do these non-structural features arise from? How can we test whether they exist or not?

If you aren’t able or willing to do that work, and you just keep saying “well, a philosophical framework I made up (one that some philosophers agree with, and I’ll quote them, and others disagree with, and I’ll just decide they are obviously wrong morons) shows that it is impossible for consciousness to arise in an AI”, without providing any evidence or methods to test whether your claims are true, then your claims are not evidence-based scientific ones.

I know what you mean, but I think that way of phrasing it suggests a two-phase process that simply isn’t there. It’s not the case that ChatGPT decides whether it does or does not know an answer and then knowingly proceeds to bullshit in the cases where it decides it doesn’t. ChatGPT does the same thing every time. Sometimes that results in sequences of words which are correct - i.e. relate to the real world accurately - and sometimes it does not. But both types of results are produced by the same process - a process which has no regard to the real world. ChatGPT is always bullshitting in the strict philosophical sense of making statements with no regard to their truth value, but rather in service of some other goal.

Yeah, that’s kind of what I was trying to convey with “know” in quotes there, but was too lazy to explain. It is not terribly good as a knowledge engine. It’s good as an advanced text-generation engine (and I have used it in my business several times already for helping formulate email replies, reviewing things I wrote, and such). It seems to do well on compare-and-contrast tasks. But straight-up fact finding, not so much. Which is why folks are starting to merge it with things like outside search engines. I find it fascinating and endlessly exciting (the honeymoon glow hasn’t worn off for me yet), but, yes, there are serious limitations to its abilities as of now (hey, it’s still relatively early in the game), and it’s best to understand which tasks it’s good at and which it is not (as you wrote in your response).

He did, though:

I never said that, I merely quoted it from the indicated source.

And of course, the point as a whole is facetious. I have perfect evidence that it’s possible for humans to do that, because I am a human, and I’m able to.

I’m having trouble parsing that. Testing for a difference between an advanced AI and what? If you’re asking whether it’s possible to produce a behavioral test for consciousness, then no.

That’s the wrong way around. Physics is ultimately given in terms of relations—mass is a relation to a standard kg, length to a standard m, and so on. The non-structural features are what these supervene on. One of my favorite quotes to that effect is due to Arthur Eddington:

Whenever we state the properties of a body in terms of physical quantities we are imparting knowledge as to the response of various metrical indicators to its presence, and nothing more. After all, knowledge of this kind is fairly comprehensive. A knowledge of the response of all kinds of objects—weighing-machines and other indicators—would determine completely its relation to its environment, leaving only its inner un-get-atable nature undetermined.

It’s that inner un-get-atable nature that is meant. As to how we know it exists: because so far, nobody has seen how to make any sense of the notion of a relation without there being anything related in that way.

Otherwise, if you’re interested in discussing my theory, there’s a thread for that.

I assumed your objection was to the other quote, not to him quoting you quoting someone else. My bad, but that’s hardly a misattribution. It’s just how quoting in Discourse works.

I don’t trust your subjective experience nearly as much as you do, I guess.

An advanced AI and a conscious human, I meant.

If there’s no test for consciousness, then this really IS an unscientific idea, just as much so as souls or gods or the multiverse. I’m not really interested in a religious discussion.

An “inner un-get-atable nature” is about the most pseudoscientific concept I can imagine. I’d ask you for evidence that this nature actually exists (not an argument from incredulity as you have provided, actual evidence) but of course you will tell me that this is impossible.

A “body” is an emergent property that forms out of the behavior of particles. Particles, according to our best models (“best” in the sense that they provide the most accurate predictions about the world so far) are emergent properties formed by the interactions of underlying quantum fields.

I see no place in this Standard Model for a “body” (what’s a body? Your car? An atom of aluminum? A quark?) to have an “un-get-atable” nature. Certainly, the Standard Model has gaps and doesn’t explain everything - it lacks a theory of gravity and is at some scales incompatible with General Relativity.

But you’re proposing we throw out both models just so that we can posit that “bodies”, whatever the hell that means, have a special magical “un-get-atable” nature.

It’s pseudoscience, through and through. Aside from soothing your incredulity, your model doesn’t do a better job of explaining observations than Quantum Field Theory or have any other benefit over the Standard Model, so forgive me if I reject it.

It’s perfectly well possible to change the quote attribution. This way, it looks like I’m trying to pass off somebody else’s words as my own.

You only need to trust your own.

It is an unscientific notion, absolutely—it’s a philosophical question. Not everything meaningful is empirical—math isn’t, moral values aren’t, aesthetic judgments aren’t, but they’re all still perfectly meaningful parts of our lives. If you aren’t interested in such a debate, then you’re free not to enter into it.

I guess you’re being facetious here, but obviously, Eddington means ‘body’ in the sense of ‘physical object’ (as e.g. in Einstein’s paper that laid the foundation of relativity, ‘On the electrodynamics of moving bodies’).

I’m not being facetious. You’re arguing that a “body” has an “inner un-get-atable nature”.

I am challenging you to explain what on Earth this means. A ball, a table, you and I, the skull of a Tyrannosaurus rex - are these “bodies”?

If you and I are “bodies”, what about the microbes which make up our microflora? A thorn trapped under the skin? An eyeball? Are these separate bodies or part of our bodies? Does your “body” include the dead skin cells which cover you? Does it include the contents of your digestive tract?

What about your cells? Aren’t they their own “bodies”? What about the cells of a colonial organism like a Portuguese Man o’ War? Is that one body, or a bunch of them?

All of these—any physical thing—are bodies in this sense. I really don’t get what your problem is here. Einstein wasn’t talking about living beings, or whatever you seem to think ‘body’ connotes. It’s just a word for stuff.

The point is that we know stuff only through how it relates to us, and thus, not what it is beyond these relations—a sort of Kantian ‘thing in itself’, if you will.

But none of these things “exist” as platonic ideals. They emerge from the interaction of the components that make them up.

A chair doesn’t have an “un-get-atable nature”, it has properties that emerge as a result of the more fundamental components that make the chair up. The pieces of the chair (legs, seat, armrest, back) have properties that emerge because they’re made of wood and have certain shapes, and together these properties form a collective that we have decided matches our idea of a “chair”; and someone in another culture may draw the line somewhere else, or insist that a stool is in fact distinct from a chair.

I don’t understand where you believe this “un-get-atable nature” comes into play, what it even means to have one, or how, if it is so un-get-atable, it is supposed to separate man from machine.

It’s called “AI hallucination” and is a well-known current behaviour. But it’s not an intrinsic one, and it can be mitigated by various methods, for example by devising a confidence-rating scheme. I’m tired of debating this philosophical concept of “understanding”, so I’ll just say that the numerous examples of ChatGPT being able to generalize a math puzzle, creating an equation that solves all such problems in the general case, are all the evidence I need of genuine concept formation. In fact, generalizing such problems would be beyond the capabilities of some humans, which leads me to conclude that ChatGPT would do fairly well on IQ tests, much as it has done well on SATs and many college-level and professional qualification tests. Unless you mean to imply that humans who pass professional certification tests don’t actually “understand” anything either.
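For what it’s worth, here is a minimal sketch (in Python, with made-up numbers) of one possible confidence-rating scheme: score each answer by the mean log-probability the model itself assigned to the tokens it generated, and flag low-scoring answers for verification. In a real system the per-token log-probabilities would come from the model’s output rather than being hard-coded, and a low score doesn’t prove an answer is wrong; it’s only a heuristic filter.

```python
# Sketch of a confidence rating from per-token log-probabilities.
# All numbers below are invented for illustration.

def confidence(token_logprobs, threshold=-1.5):
    """Mean log-probability of the generated tokens, plus a verdict."""
    score = sum(token_logprobs) / len(token_logprobs)
    verdict = "ok" if score >= threshold else "low confidence - verify"
    return score, verdict

confident_answer = [-0.1, -0.3, -0.2, -0.4]   # model was sure of each token
shaky_answer = [-2.1, -3.0, -1.8, -2.6]       # model was guessing

for name, logprobs in [("confident", confident_answer), ("shaky", shaky_answer)]:
    score, verdict = confidence(logprobs)
    print(f"{name}: mean logprob {score:.2f} -> {verdict}")
```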

It’s just the intrinsic properties that support the relations we observe in the world. There’s nothing mysterious about it, really. What we see of things is the way they relate to us. But we don’t see things apart from that relation. I mean, try to imagine a world without you. Give it a go, really lean into whatever you come up with.

Whatever you’ve imagined: it was wrong. Because whatever we imagine exists from a certain point of view; but if you’re not there, then neither is your point of view. But it’s intrinsic to our experience that it is from some given point of view—but that only shows us things as they relate to us within that point of view, not the things themselves.

Consider how you only ever see surfaces. Even when breaking something apart, there’s only new surface at the breaks. But could the world be all surface? No: a surface is a boundary, and a boundary limits something. So there is something un-get-atable beyond the surfaces, because there is no surface without an interior it bounds.

There aren’t any intrinsic properties to you, or a chair, or a balloon. You may think that there are, but if you zoom in, you find the properties of the more fundamental materials that make up you or a chair or a balloon - the properties of meat, and latex, and wood. The properties of you, or a balloon, or a chair emerge from the properties of meat, or latex and helium, or wood.

Why do you elevate this fictional “thing itself” to a metaphysically important status? There IS NO chair apart from the interactions of components that form a chair. “Chair” is a social construct that we have assigned to objects whose observable properties meet certain criteria.

Things that are below the surface aren’t “un-get-atable”, not at all. We orbit the sun based on the gravity of its entire mass, not just the parts we can see. We can use things like X-rays or ultrasound to image or measure what happens below the surface.

Your “un-get-atable” nature isn’t like that; it can’t be interacted with or measured in any way, it doesn’t bend space-time or reflect X-rays. That’s what tells me that it doesn’t actually exist.

It’s all that’s ever interacted with or measured. It’s what makes interactions or measurements possible. It stands to the results of these interactions as ‘size’ stands to ‘is taller than’. If you know of a way something could be taller than something else without both things having a size, feel free to tell me.

‘Size’ is quite a vague term. You could be asking about something’s mass, or volume, or surface area, or height, or width, or depth…

You mention “taller than”, so let’s use that as an example of size. Height. If I ask you how tall something is, you may tell me using a measurement - the meter, or the inch, or the yard, or a furlong. Regardless of the measurement you use, it’s a relational one.

Objects don’t have an inherent “height”. When we talk about height, we are defining a measurement, and then comparing it to another measurement - for example, the meter, which is the distance that light travels in 1/299792458 seconds.

Instead of asking “how tall are you”, I can ask “how long does it take for light to travel the length of your body?”. This makes it much more obvious that “height” isn’t some mystical metaphysical property inherent to you. Height is an emergent property of the particles that make you up - they take up a certain amount of space, and it would take light a certain amount of time to travel through that space.
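To make that concrete, here is the arithmetic as a one-off Python check (the 1.80 m figure is just an example I picked):

```python
# "How tall are you?" restated as a light-travel time.
c = 299_792_458      # metres per second; this is what defines the metre
height_m = 1.80      # example height, chosen arbitrarily
print(f"{height_m / c * 1e9:.2f} ns")  # -> 6.00 ns
```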

Your height, like everything else, is an emergent property arising from the interactions of more fundamental forces making up your body.

You, a human living in Earth culture, have this preconceived notion that “height” is of metaphysical importance. But it isn’t; it’s something humans defined.

Which depends on the notions of ‘distance’ and ‘time’. Which are both relational.

Anyhow, we have strayed quite far from the topic of this thread. The simple point remains: ChatGPT can’t derive references from mere relations among words. This isn’t something I can show you empirically; it’s just mathematics, or perhaps, to you, religion.

Do you have any links for examples of GPT success in math? Especially calculus? I Googled it and came up blank. I believe it was supposed to have produced a proof for a previously unsolved problem. I couldn’t find anything. Is there one for the generalized math puzzle above?