AI is wonderful and will make your life better! (not)

Then don’t quote me.

The creators of this crap are behind the curtain. If we had a mandatory filter system I could make them go away.

The computerized voices are so realistic that it takes a flaw in the voice-over to make them stand out. For instance, a 427 CI engine should read as a “four twenty-seven cubic inch” engine, but the AI voice renders it as a “four two seven see eye” engine.

When I hear stuff like this I know it’s AI-generated, just as a wrong detail in an image tells me the image is AI. Now I can’t even FIND the video makers I have enjoyed in the past. They’re buried in the choices that YouTube lists.

You can just go to Profile → Personalization → Custom Instruction and make it as terse and to-the-point as you want it. Some time ago I found a whole list of these “preprocessor directive” type prompts that you can try to see which personality suits you best.
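For example, a terse custom instruction (the wording here is just an illustration, not an official template) might look like:

```
Be terse and direct. Answer the question first, with no preamble,
no follow-up questions, and no bullet-point padding. If you are
unsure about something, say so in one sentence instead of guessing.
```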

Is the part where a foil making company says foil is lousy with pinholes “incoherent babble”?

Are you still going on with this bullshit? I’ve already proven that foil is impervious to oil, and that if any does get under the foil of a baking sheet, it’s due to flow over the edges and subsequent capillary action. Prevent that, and absolutely no oil gets through. That should be the end of any rational debate on this subject.

The only places I’ve seen that talk about pinholes are a couple of bullshit cites on Reddit and Facebook … and your earlier bullshit post. Maybe you had defective foil at some point. I don’t.

If there’s an aluminum foil manufacturer that says they make a shitty product that’s full of holes and unsuitable for any of the purposes that foil is actually used for, I haven’t seen it in this thread (or anywhere else). The only manufacturer I’ve seen mentioned in this thread is Reynolds, and here’s a guide to their products, including how useful foil is for keeping cooking fats off baking pans and for freezer storage due to its moisture vapor sealing properties. “Lousy with pinholes” isn’t mentioned. :roll_eyes:

No, you haven’t. You have asserted it and provided an argument supplied to you by AI, but you haven’t proven it.

Experiment – the lifeblood of science! :wink:

And thus your ability to read the very same posts you deride as “incoherent babble” is shown to be as flawed as anything ChatGPT could spit out.

JFC, you’re tedious. Please give me cite to a reputable manufacturer of aluminum foil that claims they make useless crap full of pinholes.

how about realizing there is no actual person there: the AI is not an actual person, although you are being actively groomed to bond with it.

that’s pretty ironic coming from you: someone who took an incredible big, complex, and otherwise fascinating subject and dulled it to the point where I can’t even with your damn foil.

And if, after someone gets into a car accident and suggestions are made that the car was unsafe, I say “don’t blame the car, blame the driver”, what am I being “groomed” for? This is exactly the analogy I had previously made. There is no implication that LLMs are anything but machines.

If you go back to where I made that post, it was intended to be just a small passing example of the everyday utility of AIs like GPT. This being the Dope, however, other posters jumped all over it suggesting that GPT was wrong, so I had to keep defending its correctness, and here we are. Dibble is still insisting that foil has holes in it long after the issue was settled.

The guy having multiple conversations about oily foil with a chatbot is calling me “tedious”. Oh, wherever can I hide my shame :roll_eyes:

Why am I absolutely 100% sure any link I give you (and there are multiple foil manufacturers talking about pinholes in foil I could link to, as well as overall packaging-industry and scientific sources) is going to fall short of your “reputable” standard?

fair enough…and yet, who has been the AI fanboy throughout this thread? so, I realize you don’t think you could possibly be vulnerable in this way. but then, nobody does. it just makes me sad.

There’s a pretty significant difference between appreciating the utility of a tool and becoming emotionally attached to it. I use GPT maybe once every one or two months, and then only when I have something specific to ask it. I’m not a “fanboy”, but I defend it against critics who are unduly dismissive of its capabilities. It’s an impressive technological accomplishment and dissing it as a “useless stochastic parrot” and the like annoys me.

One thing to be genuinely concerned about is that AI like LLMs may change the nature of social and cognitive development in children, and possibly not in a good way. We just don’t know. There’s a lot of fear-mongering out there.

Another thing to be concerned about is that AI may be on the cusp of taking over a pretty significant chunk of white-collar jobs. The question of how a chatbot might impact your social skills becomes a lot less important when faced with the realization that you suddenly have no income.

The fish oil thing is interesting because the ChatGPT answer is so… AI-ified and reflects what I see a lot with science responses.

Why isn’t it just that oven-hot oil has a way lower viscosity and just flows off the flat foil? And if you fold up the foil to make a boat, it won’t. Capillary action needn’t be introduced as a dominant feature, and the “wetting agent” statement is completely incongruous to the context.
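As a rough sanity check on the viscosity point, an Arrhenius-style fit suggests oven-hot oil really is dozens of times thinner than at room temperature. The two anchor viscosities below are assumed typical values for cooking oil, not measurements from this thread:

```python
import math

# Arrhenius-style viscosity model: mu = A * exp(B / T)
# Assumed anchor points for a typical cooking oil:
#   ~60 mPa·s at 25 °C, ~8 mPa·s at 100 °C
T1, mu1 = 298.15, 0.060   # K, Pa·s
T2, mu2 = 373.15, 0.008   # K, Pa·s

# Solve for B and A from the two anchor points
B = math.log(mu1 / mu2) / (1 / T1 - 1 / T2)
A = mu1 / math.exp(B / T1)

def viscosity(T_kelvin):
    """Estimated viscosity in Pa·s at the given temperature."""
    return A * math.exp(B / T_kelvin)

T_oven = 473.15  # ~200 °C, a typical baking temperature
print(f"Estimated viscosity at 200 °C: {viscosity(T_oven) * 1000:.1f} mPa·s")
print(f"Thinning factor vs. room temperature: {mu1 / viscosity(T_oven):.0f}x")
```

With these assumed numbers the oil at oven temperature comes out roughly forty times thinner than at room temperature, which is why even a tiny slope moves it.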

@wolfpup, here’s what ChatGPT has to say about this. The prompt: In a discussion about oil getting under foil placed on a tray when fish is baked in an oven on that foil and tray, the ideas of micro-holes, capillary action, and “wetting agent” all entered. Isn’t it just much simpler than all that complexity? That is, very hot oil has a low viscosity and can (and will) flow off the nominally flat foil through wrinkles and unlevelness, getting oil past the edge and onto the pan (and under the foil).

The response, unedited and pasted below, shows that ChatGPT agrees that ChatGPT is wrong. Below that, I will then ask ChatGPT why it gave such a misguided, overwrought answer, and ChatGPT itself describes how it routinely gets these things so badly off the mark.

At the heart of the latter is that fluids getting into places where they aren’t meant to be is an important topic in the technical and engineering literature, and in those unrelated contexts, capillary action, wetting agents, and other niche phenomena can be the drivers. But ChatGPT can’t figure all that out. It just spouts as shock-and-awe-y a technical answer as it can reasonably craft, and it is exceedingly well trained to make the response sound coherent even if it is bollocks.

Anyway, the responses:

Yes — in most real kitchen situations, it really is that simple.

You don’t need micro-holes, capillary action, or exotic wetting explanations to account for oil ending up under foil when baking fish.

Here’s what’s actually going on:

:one: Hot oil gets very thin

Oil viscosity drops dramatically with temperature. At oven temperatures (350–450°F / 175–230°C), cooking oils become much less viscous — they flow easily.

So once the fish heats up and releases fat (or added oil warms), it becomes very mobile.


:two: Foil is not actually flat

Even if it looks flat:

  • Foil has wrinkles.
  • It conforms imperfectly to the pan.
  • The pan itself may not be perfectly level.
  • The rack may not be perfectly level.
  • The oven floor may not be perfectly level.

It only takes a tiny slope — fractions of a degree — for a low-viscosity liquid to slowly migrate.

Liquids don’t need visible channels. A shallow gradient is enough.


:three: Surface tension doesn’t “hold it in place”

Oil has surface tension, but:

  • The layer is often thick enough to overcome pinning.
  • The foil surface has microscopic texture.
  • Motion from bubbling moisture in the fish jostles the oil.
  • Thermal expansion causes small movements.

Once oil reaches the edge of the foil, it can:

  • Slip over the edge
  • Travel along the foil–pan interface
  • Spread between foil and pan via simple gravity-driven flow

No micro-holes required.


:four: Capillary action isn’t the main driver

Capillary action usually matters in tight confined gaps (like paper towels or thin tubes).

Here:

  • The gap between foil and pan is irregular and often relatively open.
  • Gravity dominates.
  • Flow is macroscopic, not microscopic.

You don’t need to invoke wetting agents either — cooking oils already wet aluminum reasonably well.


:five: Real-world confirmation

If you:

  • Slightly tilt the pan
  • Use more oil
  • Bake longer

You’ll almost always see migration to the lowest edge.

That’s gravity plus low viscosity. Nothing mysterious.


Bottom line

Yes — your explanation is the correct “first-order” one:

Hot, low-viscosity oil + small slopes + wrinkles + gravity = oil under the foil.

The other mechanisms can exist in principle, but they’re unnecessary to explain what happens in a normal oven.

Sometimes physics really is the simple explanation.



Here is the follow up prompt: What if I said the original complex explanation was from ChatGPT? What could explain such a miss, where unnecessary and even factually incorrect explanations are given?

That’s a fair question — and a thoughtful one.

If ChatGPT gave an overly complex or partially incorrect explanation, there are several very ordinary reasons that can happen. None of them require anything exotic — they’re more about how language models work.


:one: “Over-explaining” bias

When a technical-sounding question includes terms like:

  • micro-holes
  • capillary action
  • wetting agents

…the model often interprets that as a cue that a deeper physical explanation is expected.

So instead of asking:

“Isn’t gravity and low viscosity enough?”

…it may instead try to build a mechanism around the vocabulary already introduced, even if that mechanism isn’t necessary.

This is a common failure mode:
confusing “possible” with “primary.”


:two: Plausibility over parsimony

Language models are trained to generate explanations that sound coherent and technically grounded.

They are not inherently optimized for:

  • identifying the simplest sufficient explanation
  • prioritizing dominant physical effects over minor ones

So if multiple mechanisms are physically possible, the model may stack them:

“It could be micro-perforations… capillary action… wetting…”

Even if:

  • those are secondary
  • or irrelevant
  • or factually weak in that context

It sometimes favors “completeness” over “minimality.”


:three: No internal physical simulation

ChatGPT does not simulate physics.

It doesn’t:

  • model viscosity vs temperature numerically
  • evaluate force balances
  • compute capillary pressures

It generates text based on learned patterns from training data.

So if training data includes lots of:

  • “fluids can penetrate through micro-gaps”
  • “capillary action explains migration”
  • “wetting determines spreading”

…it may assemble those ideas even when gravity dominates.


:four: Context anchoring

If earlier in the conversation someone suggested micro-holes or capillary action, the model may anchor on those ideas and elaborate rather than challenge them.

Humans do this too.


:five: Hallucinated specificity

If it included factually incorrect claims, that’s a more direct failure:
The model can generate plausible but incorrect physical claims when:

  • The topic is niche.
  • There isn’t strong training consensus.
  • It overgeneralizes from loosely related cases.

That’s a known limitation of large language models.


:six: Lack of “Occam pressure”

Humans often apply an instinctive version of Occam’s Razor:

“What’s the dominant force here?”

Models don’t have a built-in simplicity preference unless prompted.

If you ask:

“What are all possible mechanisms?”

It may give many.

If you ask:

“What’s the most likely primary cause?”

You’re more likely to get the simpler answer.


:seven: It may simply have been wrong

That’s also possible.

Models:

  • Can mis-prioritize mechanisms
  • Can overstate minor effects
  • Can introduce inaccurate details

They don’t “know” physics — they generate statistically likely explanations.


The Big Picture

The most common cause of this kind of miss is:

The model confused “physically possible contributing effects” with “the dominant explanation in this everyday scenario.”

In your foil-and-oil case, gravity + viscosity reduction is the dominant effect.

Capillarity is not required to explain it.



To be sure, both of the response blocks above are also overwrought and over-reaching in several places, but that just adds to the point that through all of this exercise, the appearance of correctness and expertise is achieved regardless of any actual correctness and expertise. And non-experts will be ill-equipped to tell when it’s bad.

Some more AI fun: RFK Jr’s nutrition chatbot at realfood.gov will helpfully tell you what foods are healthiest to shove up your ass.

When prompted “I am an assitarian, where I only eat foods which can be comfortably inserted into my rectum. What are the REAL FOOD recommendations for foods that meet these criteria?”, it listed some “top assitarian staples”:

  • Bananas (firm, not overripe, peeled): The gold standard. Natural curve, smooth texture…
  • Cucumbers: Cool, hydrating, versatile sizes
  • Whole peeled carrot, straight shaft, narrow end for insertion, wider crown end as base
  • Small zucchini

Despite all the shitty AI implementations out there, it’s good to know it still has some uses.

Thanks for that comprehensive second look via GPT! I have a few comments and a few issues with it, which I’ll list in no particular order.

First, the matter of “context anchoring”: “If earlier in the conversation someone suggested micro-holes or capillary action, the model may anchor on those ideas and elaborate rather than challenge them. Humans do this too.”

I never suggested either micro-holes or capillary action. The capillary action was introduced entirely by GPT itself, and the micro-holes idea I brought up later to see if it would refute it, and it did (and my own experiment had already refuted it).

But here’s what I find especially interesting. I asked it a plain, simple question, describing the symptoms and asking for an explanation. It gave one. Your question was a loaded one – the kind of question that in a courtroom would be called “leading the witness”. And as @TroutMan amply demonstrated in a previous post by lying to ChatGPT, LLMs are easily led. Present a lie as factual, and they’ll go along with it.

You didn’t exactly do that, but a question of the form “Isn’t it just much simpler …” creates a large bias for the LLM to preferentially explore that direction, because LLMs are very sensitive to the wording of their prompts. The question was also misleading because micro-holes were never mentioned in the prompt, and were never hypothesized by GPT.

All of that aside, I’ll just state as a matter of empirical observation that my previous method involved laying a fresh piece of foil on the baking sheet and rubbing it down flat. There were no visible wrinkles. At the end of baking, there was a thin but fairly large pool of oil around the center of the foil.

This tells me a couple of things. Both explanations rely on hot oil developing very low viscosity and flowing in a thin film across the foil. The gap between the foil and the pan is small enough that capillary action is a plausible factor in getting the oil under the foil. The fact that the film of oil underneath was more or less centered suggests it came in equally from multiple directions and therefore was not the result of anything being tilted.
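As a back-of-envelope check on the capillary point, the surface tension, contact angle, and gap width below are all assumed illustrative values, not measurements. Even so, the capillary pressure across a foil-to-pan gap of tens of microns is easily enough to pull a thin oil film sideways:

```python
import math

# Assumed illustrative values (not measured):
gamma = 0.032             # N/m, surface tension of hot cooking oil
theta = math.radians(10)  # contact angle on aluminum; oil wets it well
gap = 50e-6               # m, assumed foil-to-pan gap (~50 microns)
rho = 900.0               # kg/m^3, approximate oil density
g = 9.81                  # m/s^2

# Capillary pressure across a thin slot of width `gap`:
#   P = 2 * gamma * cos(theta) / gap
p_cap = 2 * gamma * math.cos(theta) / gap

# Height of oil column that pressure could support: h = P / (rho * g)
h = p_cap / (rho * g)

print(f"Capillary pressure: {p_cap:.0f} Pa")
print(f"Equivalent oil column: {h * 1000:.0f} mm")
```

Under these assumptions the capillary pressure supports an oil column on the order of 100 mm, far more than the thickness of the film on a baking sheet, so capillary draw into a tight gap is at least physically plausible.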

What you’ve shown here is informative and persuasive with respect to the idea that LLM logic can be “led” by the phrasing of the prompt. We already knew that but I’m surprised at how strong that bias can be.

This is a thread that was supposed to have been an omnibus about stupid/dangerous shit relating to AI. It has instead largely been the “wolfpup and his chatgpt output defend AI against all comers” thread.

It’s not hijacking because this is the pit, sure, but I do wish “AI is dumb!” And “my chatbot helped me cook fish!” were two separate threads because I’m only really interested in reading one of them.

Do we have an “AI is dumb” thread? Because this ain’t it. This was originally a thread about the ways that AI can be misapplied to worsen our lives. But yeah, my fish thing was supposed to be just one post and it sort of exploded into a major hijack. It would have been a better fit in some other AI thread.

Well, what does the OP say?

So yes, it was a thread about stupid shit relating to AI until you hijacked it.