ChatGPT gave seven different reasons on its own why it can be wrong in such cases, and you have correctly pointed out the one reason that least applies in this case. I almost noted that one in my earlier post to preempt any distraction from it. Simply ignore that one; there are six more reasons it gave. And I noted which one was at the heart of the current test case. (Having said all that, though, context self-anchoring is also a phenomenon, where ChatGPT climbs up its own techno-babble butt in a self-reinforcing way, despite prompt details.)
It gave a wrong one, and it was a paper tiger in defending it. If an entity can’t withstand a puff of pushback on its scientific explanation, how meaningful is that explanation? I argue that a requirement and minimal bar to cross in scientific discourse is that an explanation holds up in the face of even minor “what about?” questioning. ChatGPT cannot do that because it isn’t making any sort of scientifically resilient arguments. It’s making scientifically resilient-sounding arguments.
The ChatGPT follow-up gives multiple reasons why, even with visually apparent flatness, flow can happen in the oven environment. It is unclear how your experiment contradicts those points, or how you can be sure there is no tilt, concavity, or undulation at a level that would render ChatGPT’s arguments irrelevant, even accounting for warping during the heating and cooking process or during handling. You are saying (1) that ChatGPT gave an initial explanation and (2) that your experiment is consistent with it, but then you are stopping short of (3) acknowledging that ChatGPT also points out ways the experiment is inadequate, or dealing with those stated counterpoints.
In this AI thread I’m rather less interested in the exact physics than I am in making the points that ChatGPT itself is happily arguing that its clever, technically niche explanation is wrong; that ChatGPT itself gives a slew of reasons (not every last one of which applies here, sure) why this sort of false explanation is to be expected in technical questions; and that perhaps we really cannot trust it for this stuff in the first place. And it is precisely in such contexts that non-experts will feel that they’ve gotten access to some deep insight when as often as not it’s just reasonable sounding bollocks.
Apologies for extending the hijack. I’ll stop with the above, but maybe I’ll return with more thread-suitable stupid AI shit. I do feel like I’m shoveling it off my lawn daily (nay, hourly).
@Pasta, I thought your post was perfectly on topic of AI doing stupid shit. We need to understand AI’s limitations and how it’s actually working in order to contextualize the stupid shit it does.
AI can be very useful in the right contexts; unfortunately, 95% of the current use is wrong and makes life worse. If people understood it better, maybe we could get that down to 90%.
One more, and then I’ll get off this tangent, too.
The point of my original mention of this whole fish business was that I presented GPT with a problem, and it suggested the likely cause and a number of solutions.
I implemented one of the suggested solutions, and it worked. Problem solved!
Common to the responses we both received is how hot oil behaves on a hot sheet of foil – it is not intuitively obvious that relatively small amounts would flow so readily across it. And that was the key to solving the problem.
Most of the controversy is about the mechanism that draws it under the foil once it’s over the edge. There may be some disagreement about the role of capillary action, which may not in fact be a major factor, but thin-film flow certainly is, and the analysis produced a useful practical answer with an explanation that fit the facts, even if the capillary-action detail was wrong – which ultimately doesn’t matter.
I had no idea that 95% of my use of GPT is wrong and is making my life worse, apparently because I’m too stupid to understand it. My life must be a total mess by now!
You’re just being an obnoxious smartass. It means “fits the facts” in the sense of scientific theory, which is able to produce useful predictions, but which may later need refinement. It advances our understanding and ability to make predictions – and indeed live better lives – even if not complete or completely correct initially.
It’s the very nature of science – exemplified by refinements of Newtonian physics, in early germ theory (which saved millions of lives despite only a partial understanding), in early atomic models, in the evolution of early astronomy to modern cosmology … it’s how science and discovery fucking works!
Science is very much NOT finding something that fits the facts even if wrong. Sometimes that happens during the course of doing science, but it is not regarded as a successful outcome.
You’ve already insulted at least one poster in this thread, completely without provocation. Why do you take this so personally? You’d think you’d had a love child with ChatGPT, given your aggressive stanning of it. The logical contortions you’re working yourself into to defend it really illustrate the point many are making in this thread. People, even really intelligent people, often fail to see the ways it shapes human cognition. You think that just because something sounds scientific and gives you the result you want, it must be correct. That is not scientific!
Another example from clinical psychology. There’s this thing that is all the rage right now called EFT tapping. It is dumb as hell but it has taken in even the best of therapists, like the one I’m currently seeing. You have to repeat some kind of affirmative phrase to yourself while tapping various pressure points on your body. The “proof” that it’s working is that when you’re done you feel calmer.
There is no evidence to support EFT. Anyone who knows anything about cognitive therapy can tell you that you probably feel a little better afterward because you’ve stopped ruminating. You could substitute any other distracting behavior with EFT and probably get the same results. There’s probably also a little mild cognitive restructuring in there too.
And doing this tapping shit is my homework for the week.
Damn it. Uncritical thinking is a pain in my ass right now.
We already live in a deeply unscientific culture. Any time we cede our cognitive capacity to prove a point, to mindlessly solve a problem we don’t understand, any time we outsource our ability to think, we are contributing to that “if it sounds right to me, it’s probably right” culture which is currently tearing apart the fabric of existence as we know it. The stakes couldn’t be higher. Let’s not rest on our laurels.
That totally misses the point. Science is very much concerned with proposing explanations for phenomena, and testing them to see if the explanations are supported by empirical evidence. If they are, the theory is at least tentatively accepted and may have very useful impacts. Louis Pasteur was wrong about a lot of things related to germs and vaccines, but still saved millions of lives.
And then, if the theory is contradicted by some other evidence, refining those explanations, or occasionally revising them altogether. Science is replete with theories that are either incomplete or partially incorrect, and my little experiment here was a tiny microcosm of that. Here I got a “theory” that empirically checked out, and that was useful because much of it was right even if part of it might be wrong (e.g., thin-film flow but not necessarily capillary action). You’re really being unreasonably judgmental here in your zeal to condemn AI.
Of course it can sometimes get things completely wrong, but that’s a different issue because in my experience it doesn’t happen often. Over the past several months I’ve asked it questions like “what questions should I ask my medical specialist at my upcoming appointment” (the doctor was so impressed she asked if I was in the medical profession myself) and many other things, and the suggestions worked. That’s my point.
FWIW, I make a point of depersonalizing all my interactions on the infrequent occasions when I visit ChatGPT. I never use the second person, for example – never addressing the program as “you” – and I never phrase a question to ChatGPT in a way that would ascribe agency or volition to the program. Think of a cheesy 1950s SF movie where a robot always stiltedly refers to itself in the third person.
Wait. When I tried this, it sent me to fucking Grok, which then told me the service was unavailable. Thanks, government!
Meanwhile, I feel like, since I read about LLMs, or more generally AI, pretty much every day, I was on top of things. Apparently not, as I had not even heard of Clawdbot, which is not at all the same thing as Claude.
I hope doctors using AI in surgery are careful to orient the camera in the correct direction. “The patient’s feet are where his head should be. He’ll never be able to walk. Better amputate his legs.”
I don’t know that I believe the world economy is going to collapse overnight due to AI, but it’s definitely going to have some unforeseen consequences. Let’s hope they are bearable. As a grant writer I’m probably first on the chopping block, but as a grants manager there’s hope for me yet. One thing AI can’t do is walk into someone’s office and say “I need this right fucking now!” Grant “writing” is a lot more than writing. It’s organizing and coordinating and talking people out of dumb decisions. I recently asked in an AI thread whether AI could be used to assist me in some basic work tasks, and the answer was no, not really.