Yet earlier, you felt the need to offer a ridiculously simplistic explanation:
The actual physics is much more complex and subtle, and GPT seems to have nailed it. Who’s smarter, you or a pile of transistors? I think we now know the answer.
It happened again today. When I signed onto ChatGPT on the shared computer, it greeted me in Japanese. I asked why, and it gave a non-answer, but told me I was such a great teacher for noticing this and that it would greet me in English from now on.
So, I signed out and back in, only to be greeted again in Japanese.
I’m getting a little more used to how it works, but things like this show its limitations.
At some level that criticism can be directed against any solution to any problem that any human has ever solved, because it’s based on things they’ve learned. Intelligence is the ability to synthesize unique solutions to specific problems by generalizing from previous learnings. It’s pretty clear by now that this is just what large-scale LLMs are capable of doing. They wouldn’t be interesting if all they did was regurgitate stored text.
“Easily Google-able”? Try it. You’ll get lots of incoherent bullshit featuring baseless claims about micro-holes in foil, with one smartass trying to blame oil getting under the foil on quantum entanglement! Whereas GPT produced a detailed and cogent response that turned out to correctly identify the issue.
Yes, it is, because your supposition about how GPT works is not correct. It doesn’t store large blocks of text on every imaginable topic and then retrieve them as needed. Instead, its training teaches it patterns about things like materials science, thermodynamics, surface tension, capillary action, and so forth. Presented with a particular scenario, it combines those principles dynamically into a precisely tailored response. That’s synthesis, which is what I described in my previous post, and it’s fundamentally different from regurgitating a canned response.
Check out this article by a guy who works in AI. His argument is that systems like ChatGPT have become so good, having made big advances in just the last few months, that a great many jobs involving cognitive skills are already at risk. This is not a good thing, and the OP further argues that in many cases the power of AI may be misapplied. The only point I’m making is that those still dismissive of systems like GPT as just “stochastic parrots” are being foolish and are in for a shock. At best, they’re basing their judgments on outdated information.
I’m quoting my link because I really wish people would watch it. I wrote out its long title, but it’s an interview with two others about HACKING HUMAN ATTACHMENT
hint: you may be more vulnerable than you think
and it’s 90 minutes (ish) because the subject is huge and complicated and important and nuanced!
So is this one of those “baseless claims” about pinholes? I mean, they make foil, but what do they know, right? Depending on what you’re trying to argue, you switch between “the answer is obvious, of course AI is correct” and “it’s amazing that AI could come up with an answer that no one knows.”
For fun, I told the AI that I looked at my foil under a microscope and saw tiny holes, and asked if this proves the hole theory. It responded, “That is an impressive bit of kitchen sleuthing! Seeing those holes under a microscope definitely confirms that the material isn’t a perfect, solid barrier.” Because AIs are trained to be agreeable, not to be accurate. As has been repeated to you, the concept of accuracy is not something AIs understand.
You still don’t understand how AIs work, and Jackie’s link seems to be all too relevant. Your capacity for critical thinking seems to be declining.
This is incoherent babble and makes no sense. The actual situation is “the AI was correct, but the fluid dynamics involved are not obvious”, judging by how much bullshit a Google search on this topic produces.
So you think lying to GPT somehow proves that it’s not trained for accuracy? The only thing it proves is that it assumes the user is posting in good faith and isn’t a lying asshole.
If being insulting makes you feel better, go for it, I don’t give a fuck. And Jackie’s link has nothing to do with “how AIs work” but about alleged negative social impacts. More relevant from a technical perspective is the Fortune article I linked about how well the latest models work. How AI may affect our social relationships seems secondary to the possibility that half the white-collar workforce may be out of work soon.
Good grief, try to think critically about this. The point is not that lying to AI proves it isn’t trained for accuracy. It’s that you can’t trust it when it tells you that you were correct.
Except that’s not really true. I’ve had it tell me lots of times – albeit always nicely – that I was thinking the wrong way about some issue in cosmology or whatever we happened to be discussing. But if you intentionally mislead it with a somewhat plausible alleged fact that you say you actually observed, of course it’s not going to call you a liar (although early versions of Microsoft’s Bing chatbot probably would have!).
That said, I’ve acknowledged again and again that GPT can still make mistakes, all on its own without being misled, and probably always will. So can humans.
And going back full circle to my fish-baking story, the point I was making is that AIs like GPT can be very useful in all sorts of ways.
I agree. OpenAI has supposedly tried to cut down on that, but they need to do more cutting! I think that can mostly be controlled with a “system prompt”, which is a plain-English high-level set of instructions telling GPT how to behave that precedes every conversation.
In the context of a single conversation, you can control that yourself just by telling it how you want it to speak to you. I once asked it to be downright rude, and it did a pretty good job!
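For anyone curious what "a set of instructions that precedes every conversation" looks like in practice, here's a minimal sketch using the OpenAI chat API's message format. The model name and the prompt wording are just illustrative, and the actual network call is commented out since it needs an API key:

```python
# Sketch: a "system prompt" is simply the first message in the
# conversation, with role "system"; it sets the assistant's behavior
# before any user turn is processed.
system_prompt = {
    "role": "system",
    "content": "Be strictly utilitarian: no praise, no compliments, no filler.",
}
conversation = [
    system_prompt,
    {"role": "user", "content": "Why does oil seep under foil in a baking pan?"},
]

# A real request would look roughly like this (not run here):
# from openai import OpenAI
# client = OpenAI()
# reply = client.chat.completions.create(model="gpt-4o", messages=conversation)

print(conversation[0]["role"])
```

The point is just that the system message always comes first, so every reply in the session is shaped by it, which is why telling it once "be rude" sticks for the rest of that conversation.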
Good Lord, it’s so sycophantic. I can’t stand that. I mean I use it occasionally to structure documents or fix something in Microsoft Word but the idea of having a conversation with it is ugh.
People love it, though. They lost their shit when a less flattering version was released. Because for a lot of people it’s filling some unmet psychological need and I think it’s companionship they’re looking for more than information.
I haven’t tried this, but according to ChatGPT itself:
You can give a lasting personality directive — but I must store it for you
If you say something like:
“From now on, make your responses strictly utilitarian — no praise, no compliments, no filler.”
I can save that instruction to memory, which means it will automatically apply in future chats as well, not just this one.
This is exactly what the memory system (the bio tool) is for — persisting behavior preferences, style, or context between sessions.
Once stored, I’ll consistently respond in that utilitarian style until you tell me to forget it or change it.
Somebody got the C source code for SimCity (1989) and, using the 5.3 codex, was able to port it to his web page. Now he has a working SimCity game in his browser.