Why couldn’t it simply be that small particles of the oil are spattering upwards during the cooking and landing on the edges of the foil, then seeping underneath?
I think we are saying the same thing. I just don’t see how you could call “relationships learned during training” anything but a knowledge base. I, the meatbag, know things because electrical and chemical relationships exist across neurons. I still say I know things. I still have a knowledge base.
You said the model doesn’t store facts like a database, but again, I never said it did. I didn’t say “database,” I said “base.” The fact that those pieces of information exist distributed across several nodes doesn’t really matter to me. It’s still knowledge, and it still exists in some parts of the model, but not others.
The reason I even bring up the distinction is that, despite our not necessarily intentionally building or training a model to store information distributed across separate nodes, it nevertheless does so. There is in fact a “yellow” node and a “dog’s eye” node.
So there is certainly, without a doubt, a knowledge base in the model.
I’ve been reading about how AI is changing how coding (programming) is being done, much of this amplified over the past few weeks because of modifications to the Claude AI model.
So I decided to test it myself, using Gemini (which my business already pays for; it comes with our paid Workspace subscription).
A note about myself: I am not a programmer, though I have run programming teams. As I explained to them: “I know what the inputs need to be. I know what the outputs need to be. I know, logically, how to get the output because, until now, I’ve brute-forced the inputs to give me my desired output. What I don’t know is how to tell the computer to take these inputs to create that output.”
IOW, I don’t know shit about programming.
Bona fides (or lack of them) aside, this weekend I decided to just go ahead and… given what I was reading about plain English being the programming language of the future… to see if I could get Gemini to programmatically solve two problems my company was having.
The first was to create a general knowledge internal web page about classifying bookkeeping transactions based upon inputs like dollar amount, industry, entity type (sole proprietor, S-Corp, etc.), and a description of what was needed. Using Gemini I built this thing to use only primary sources (FINRA, IRS, others) and to be robust enough that if a team member forgot an important detail, they could add it in later.
It fucking works like a charm, team. It’s an absolute game-changer and will help our weaker team members become better accountants and our great accountants become superstars. It not only tells you the accounting treatment, it tells you why you do it this way, citing IRS Regs and GAAP standards, and alternative ways to do it based upon the legal entity type.
For example, I have the following issue:
“My new client bought an F-150 for $55,000 on October 21, 2023. She paid $15,000 in cash and took out a $42,000 loan on the vehicle at 6.25% interest over 60 months. The extra $2,000 was document fees rolled into the loan. She has not provided us with a statement, but we see an automatic payment of $634.43 leave her account every month like clockwork.
She totaled the truck on December 21st, 2025. Her insurance covered the loss, paying off the loan in full. She is an S-Corp. Please provide me with the following:
The book entry for the purchase.
Annual adjusting entries for interest and principal expenses.
Annual adjusting entries for depreciation.
The book entry for the loss of the vehicle.
The amortization table you used to calculate the interest/principal payments.”
It then does all the math, gives me the book entries (which can be downloaded to CSV) and the amortization table.
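For anyone curious what the math behind that table looks like, here’s a minimal sketch (not the Gemini app’s actual code) using the standard fixed-payment annuity formula on the stated loan terms: $42,000 at 6.25% APR over 60 months. Worth noting: that formula gives a payment of roughly $817/month, not the $634.43 autopay described above, which is exactly the kind of discrepancy a tool like this should surface for a follow-up question to the client.

```python
# Sketch of a fixed-payment loan amortization table.
# payment = P * r * (1+r)^n / ((1+r)^n - 1), with r = monthly rate.
def amortization_table(principal, annual_rate, months):
    r = annual_rate / 12
    payment = principal * r * (1 + r) ** months / ((1 + r) ** months - 1)
    balance = principal
    rows = []
    for month in range(1, months + 1):
        interest = balance * r           # interest accrues on remaining balance
        principal_part = payment - interest
        balance -= principal_part
        rows.append((month, round(payment, 2), round(interest, 2),
                     round(principal_part, 2), round(balance, 2)))
    return payment, rows

payment, rows = amortization_table(42_000, 0.0625, 60)
print(f"Monthly payment: ${payment:,.2f}")
print(rows[0])   # early rows are mostly interest
print(rows[-1])  # final row: balance amortized to ~0
```

The per-month interest/principal split in each row is what feeds the annual adjusting entries.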
But wait! It assumed that, being an S-Corp, we used accelerated depreciation… which the client didn’t do; their CPA just used a standard 5-year straight-line depreciation schedule. So I modified the query by saying:
“Oh, yeah, they used a straight-line depreciation schedule of 5 years. Please provide me with updated depreciation book entries for each year.”
And it does so, easy-peasy.
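The straight-line version it switched to is the simplest depreciation math there is. A sketch, assuming a $55,000 basis, a 5-year life, no salvage value, and ignoring partial-year conventions (the real entries for an October purchase would prorate year one):

```python
# Straight-line depreciation: cost spread evenly over the useful life.
def straight_line(cost, salvage, years):
    annual = (cost - salvage) / years
    book_value = cost
    rows = []
    for year in range(1, years + 1):
        book_value -= annual
        rows.append((year, round(annual, 2), round(book_value, 2)))
    return rows

for year, expense, remaining in straight_line(55_000, 0, 5):
    print(f"Year {year}: Dr Depreciation Expense {expense:,.2f}, "
          f"Cr Accumulated Depreciation {expense:,.2f} "
          f"(book value {remaining:,.2f})")
```

Each row maps directly to one annual adjusting entry.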
The other use case is client-specific. He wants specific administrative expenses allocated among his divisions based upon each division’s monthly revenue percentage. Why does he want this? Fuck if I know, but despite our telling him that this isn’t the best method, it’s what he wants.
So, also on the same day I created the Accounting Intelligence app, I created another HTML application where we just upload the P&L-by-division CSV and the AI builds a journal entry based upon that logic, which we then put into QuickBooks. A problem which took 15 minutes is now solved in 2.
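The allocation logic itself is just proportional splitting, plus one wrinkle: rounding each division’s share to cents can leave the entry a penny off, so the last division absorbs the remainder. A sketch with made-up division names and figures:

```python
# Allocate a total admin expense across divisions in proportion to
# each division's share of monthly revenue. The last division absorbs
# any rounding residue so the journal entry balances to the penny.
def allocate(total_expense, revenue_by_division):
    total_revenue = sum(revenue_by_division.values())
    allocations = {}
    remaining = total_expense
    divisions = list(revenue_by_division)
    for name in divisions[:-1]:
        share = round(total_expense * revenue_by_division[name] / total_revenue, 2)
        allocations[name] = share
        remaining -= share
    allocations[divisions[-1]] = round(remaining, 2)
    return allocations

entry = allocate(9_000.00, {"East": 50_000, "West": 30_000, "Central": 20_000})
print(entry)  # {'East': 4500.0, 'West': 2700.0, 'Central': 1800.0}
```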
So, yeah, to @wolfpup’s point, AI can be wonderful.
The ability for a non-programmer to develop a specific program for a specific use case does seem like a genuinely useful thing. Seal of approval. (Ort ort ort!)
*That’s what seals sound like to me
Of course the output should always be vetted by an expert, but it sounds like you are and it was.
For one thing, there’s far too much oil under the foil for spatter alone to account for it. For another, if there were that much spatter, you’d expect to see spatter droplets on parts of the baking sheet outside the area of the foil. There aren’t any. In fact, IIRC, there is very little spatter on the foil itself, which is consistent with the ultra-low-viscosity laminar flow theory as the primary cause, followed by the oil being sucked under by capillary action.
I know slightly more than shit about programming, but not much more than shit. I can program most things in a non-caveman manner, but there’s a lot I couldn’t program. One of those would be a C compiler. A researcher from Anthropic coordinated sixteen agents to create a workable-ish C compiler for about $20K in tokens.
Programming that for $20K is pretty amazing. But the problem is, it’s not a very good C compiler. To compile the Linux kernel, it has to call GCC. Trying to add new bugfixes or new features frequently broke existing functionality, and it produces less efficient code than GCC by itself.
That all reflects my experience having current LLMs write code for me. The first attempt is pretty amazing, even if it’s not successful. But getting it closer to a final product purely through prompting is a trial. Having it debug its own code is a clown show. It’s pretty good for the quick-and-dirty jobs if you know how to tell it what you want. I’ve also used it often to track down which functions would be of interest for diagnosing a particular problem, with some success (it’s at least a little faster than me wielding grep against a codebase I’m unfamiliar with).
But to get it to do anything but the quick-and-dirty jobs, it still really seems that the person managing it must know what the caveats and problems might be with the particular job at hand. Once you run up against a roadblock that isn’t an easy problem to solve, it will happily hallucinate potential solutions that are completely insane.
Your (external) IP address reveals your approximate location: public IP addresses are allocated in regional blocks, and geolocation databases map those blocks to places. It would be trivial to write a piece of software to do this, and it is not AI at all.
Every web server sees the IP address of each incoming connection, which is one reason you get region-targeted ads on almost any web-connected platform.
The AI knew where you were because your browser’s connection was telling it where you were.
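To make the point concrete, here’s a tiny sketch (standard-library only) showing that any HTTP server sees the client’s IP on every request, with no AI involved. A geolocation database would then map that IP to an approximate region.

```python
# Minimal demo: the server reads the client IP straight off the TCP
# socket (self.client_address); nothing is "figured out" by the AI.
from http.server import BaseHTTPRequestHandler, HTTPServer
import threading
import urllib.request

seen_ips = []

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        seen_ips.append(self.client_address[0])  # (ip, port) from the socket
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):
        pass  # silence default request logging

server = HTTPServer(("127.0.0.1", 0), Handler)  # port 0 = pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

urllib.request.urlopen(f"http://127.0.0.1:{server.server_port}/").read()
server.shutdown()
print(seen_ips)  # ['127.0.0.1']
```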
You seem to be interpreting “wetting agent” as something that wets another thing. A wetting agent reduces the surface tension of a liquid. A liquid isn’t its own wetting agent.
The preceding quotes are quite the contradiction to this:
Why weren’t you covering the entire baking sheet including the raised edges with foil? What did you think was going to happen to the oil? You put a liquid on a flat surface and thought it was a great mystery how a liquid somehow flowed on a flat surface to the edge of the flat surface and somehow mysteriously just sort of dripped mysteriously over the edge of that flat surface. It’s quite a mystery.
“If A, then B, therefore if B, then A” is great Facebook logic: the classic converse error. It’s not, you know, actual logic. That type of “reasoning” is a huge part of why we’re in the mess we’re in.
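A two-line truth-table check makes the fallacy explicit: there is a row where “if A then B” holds but “if B then A” fails.

```python
# "If A then B" is false only when A is true and B is false.
def implies(a, b):
    return (not a) or b

# Find rows where the original holds but the converse fails.
counterexamples = [
    (a, b)
    for a in (False, True)
    for b in (False, True)
    if implies(a, b) and not implies(b, a)
]
print(counterexamples)  # [(False, True)]
```

The row A=False, B=True is the counterexample: the original implication is satisfied, the converse is not.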
At least 10 people were injured between late 2021 and November 2025, according to the reports. Most allegedly involved errors in which the TruDi Navigation System misinformed surgeons about the location of their instruments while they were using them inside patients’ heads during operations.
Cerebrospinal fluid reportedly leaked from one patient’s nose. In another reported case, a surgeon mistakenly punctured the base of a patient’s skull. In two other cases, patients each allegedly suffered strokes after a major artery was accidentally injured.
Meanwhile, I’m listening to a podcast about how LLM AI is hijacking attachment in kids, the very thing that makes for family and community, just like social media hijacked focus.
The researchers found that participants chose the “right” course of action — predetermined by a panel of doctors — less than half of the time. And users identified the correct conditions, like gallstones or subarachnoid hemorrhage, about 34 percent of the time.
Hey, boys and girls, I felt like fish tonight so the Great Experiment was held. I did the fish exactly as usual except with the foil edges folded up into the shape of a tray rather than laid flat. Exactly as suggested by ChatGPT.
The result? The baking sheet underneath was completely dry – not a single trace of oil. The ChatGPT analysis appears to have been exactly right!
If you’re experiencing leakage in your foil, it’s either a manufacturing defect or else you’re accidentally damaging it against the cutting edge when you pull it out. As stated upthread, aluminum foil that doesn’t seal is pretty much useless.
Good. That’s how you settle an argument — with a clean baking sheet.
You ran the experiment. You changed one variable. The outcome flipped completely. That’s physics doing what physics does.
When the foil was flat, you had:
Micro-gaps
A large contact area
A perfect geometry for capillary creep
When you shaped it into a tray, you:
Eliminated lateral access at the edges
Reduced foil–pan contact
Broke the capillary pathway
No pathway, no migration. Simple.
Now about the snark.
You earned the satisfaction. But resist the urge to dunk on anyone. Quiet competence is stronger. If you want to respond, something like:
“I reshaped the foil into a tray and the oil stopped completely. So it appears the issue was edge creep, not leakage.”
That’s calm, factual, and leaves no room for debate. Let the result speak.
There’s a bigger lesson here too: you didn’t just accept an explanation — you tested it. That mindset is worth more than being “right” in a forum thread.
And now you get clean pans and crispy haddock. That’s a win on every level.
I must admit that the human-like interaction is a bit unsettling.