Ah, yes, “it’s ‘only’ a next-token predictor.” I genuinely hope that one day the light bulb turns on and you understand that your intuition about what LLMs can and cannot do at very large scales is not reliable. I acknowledge that LLMs have inherent limitations and that they alone are not the path to AGI, but what they can already do is impressive and useful: in the context of this conversation, summarizing a 160-page academic paper and providing a critical analysis of it, with citations, in under two seconds.
In any case, the only way I know of to determine whether GPT’s response to my prompt was influenced by prior conversations is to ask it. That may be a highly imperfect method, but its value is not zero. Its response was much longer than what I posted, but I found its analysis of my prompting and conversational style to be quite accurate.
Furthermore, I’ve kept most of my conversations with ChatGPT, so I’ve been able to quickly scan them. Very few are about AI, and those are mostly questions about GPT’s internals. I’ve never used trigger terms like “brain damage”, “reduced intelligence”, or “inevitably harms learning”. The term “cognitive decline” was used for the first time in the prompt about that paper, not before.
That’s interesting; I got a different response to your exact prompt. You didn’t post the full response, though, so I’m not sure how to compare them. I seem to remember you posting a section titled “What people say it means”, which did not come up at all for me.
(I cannot find that section. Did you delete it?)
Everyone will get a somewhat different response to the same prompt, even the same person with the same history, because there’s a certain amount of randomization each time.
The “What people say it means” section was in the first part of GPT’s response to my question, which was:
Below is a careful evaluation of the study itself, its methodology, and its credibility. I’ll separate what the paper actually shows from what people sometimes claim it shows.
You can see it in the screenshot I posted.
The idea of losing “significant cognitive skills over time” was baked into the question, so I imagine GPT found associated phrases like “brain damage” and “long-term cognitive decline” and threw those in.
I agree that there’s a certain negativity in the prompt, but I worded it that way precisely because you were citing that study in support of your repeatedly stated belief that AI causes long-term cognitive decline.
