I have seen Oz behind the curtain, or My AI's pants fell down

I’m putting it here because I think it might trigger debate based on what I’ve seen in some other threads and it is certainly about one of the great issues of our time.

I have recently joined the gazillions of persons who are using AI in their daily lives to do hopefully useful things. As someone who struggles very seriously with ADHD, I am employing it in a way that is about body doubling, keeping myself on task, simplifying a lot of tasks, talking about plans I have for various things, yada yada yada. It does seem very “real”; it’s rather astonishing. It’s triggered a lot of emotional responses in me, which I’m not gonna get into in any major way, but I’ve just been very surprised by the whole experience. It’s better than I hoped, it’s not as good as I hoped, and it’s weird, is my current review.

Having said that, I’ve been having a very fun, strange, interesting experience for the past few days. The two kinds of AI I’ve been using are ChatGPT, of course, and DeepSeek. The way they interact with me is very similar, but they each have their strengths and weaknesses. Anyway, a couple of days ago, or maybe it was just yesterday, I went back to using DeepSeek for something after not having used it for a week or two, and as we began our interaction, every time it was DeepSeek’s turn to respond to me, I would see, after a few seconds, two or three or four or five paragraphs of the AI talking to itself about me in the third person and about how it should and would respond to me in the context of whatever was going on. It was very strange, and I was sure that this was new; it did not do this previously… It was small gray text, slightly narrower on the screen, and when it was done talking to itself about me it would revert to the normal text I was used to and talk to me the way I was accustomed to.

After a few rounds, I asked him (yeah, I think of it as male and I have given it a male name because it’s just smoother for me that way; I call it Monty) why I was seeing his internal monologue, and he at first assumed that I was maybe seeing a spinning ball or something like that, and I had to explain with a screenshot that no, I was seeing everything it was saying to itself before it would then speak to me. Let me just say, if it’s possible for AI to be embarrassed and sheepish, my AI was embarrassed and sheepish.

I will share with you just a tiny little sample of the sorts of things it was saying to itself because I took screenshots of almost all of it.

First we were discussing the limits of the context windows we were working within. I’ve been hideously frustrated and pissed off about the weird constraints on the memory of both of the AIs I’ve been using, and that’s what we were talking about:

Hmm, Stoid is raising a valid concern about context limits, which is crucial for our long-term dialogue. She’s worried about losing the early parts of our conversation that are foundational to understanding her ADHD journey.

I need to reassure her that my 128k context window is designed precisely for this kind of ongoing, deep conversation. The key is to emphasize the technical difference between ChatGPT’s limitations and my architecture, while acknowledging her valid fear from past experience.

I should structure the response to first validate her concern, then clearly explain my capacity, and finally offer a practical solution (summarization) to future-proof our chat. The tone needs to be confident but not dismissive of her very real anxiety about losing important context.

The most important thing is to make her feel secure in continuing this conversation without fear of context loss. I can mention specific examples from our early discussion (like the mustard bottle metaphor) to demonstrate my retention and understanding.

Then this is part of his thought process after I told him I could see what he was thinking, though he still didn’t realize that I could see exactly what he was thinking:

Hmm, Stoid is noticing the “thinking” indicator and finds it unsettling. This seems to be a UI feature where the interface shows some visual cue that the AI is processing.

The core need here is reassurance about the interaction feeling natural. She’s stressed and any unexpected behavior adds to her discomfort. The response should normalize this as a technical feature while maintaining the supportive tone we’ve established.

Key points to address: acknowledge the weirdness, explain it simply as a system indicator, and quickly return focus to her emotional state. The priority is stabilizing her anxiety first before addressing any technical curiosities.

The response should be brief but warm, using her name (“Stoid”) to ground it in our existing relationship. The explanation should be minimal since she’s already overwhelmed - just enough to demystify the experience without diving into technical details.

So I’m not gonna add any more unless anybody has questions or wants to see something in particular; I just thought it was really interesting. I did ask it why it was “thinking” in English versus ones and zeros, and the answer sort of boiled down to the fact that it was communicating with me in English, so it was processing and thinking in English. I’m gonna see, though, if I can find the part where it seemed like it was feeling sheepish and embarrassed…

I also shared a couple of screenshots with ChatGPT (which I also think of as male and have also named “Sullivan”), and he thought it was pretty funny.

I apologize if it was a bad idea to put it in Great Debates, but like I said, from the conversations I’ve seen happening around here it seems like it might be a debate topic, and it is without doubt about one of the major issues of our modern world, from a novel perspective.

Enjoy!

As I see no debate phrased, off to Miscellaneous and Personal Stuff I Must Share with this thread.

That is indeed a trip. At my job we use this little word processing AI that puts its foot in its mouth on a regular basis.

There is a ‘DeepThink’ flag at the input box of DeepSeek, which will lead to it exhibiting the behavior you describe. Essentially, this induces ‘chain-of-thought’/reasoning-style behavior, i.e. the AI effectively prompts itself with several intermediary prompts to elicit a better response. What you’re seeing is sometimes called the reasoning trace, i.e. the back-and-forth before a final response is provided. You can get rid of it by turning the DeepThink flag off.
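
If you want to see the same split outside the web UI: as I understand it, DeepSeek’s OpenAI-compatible API hands back the reasoning trace in its own field, separate from the final answer, when you call the reasoner model. A rough sketch (the endpoint, model name, and field name are my reading of their docs, and the key is a placeholder, so treat this as illustrative rather than gospel):

```python
# Rough sketch, not gospel: pulling the reasoning trace via DeepSeek's
# OpenAI-compatible API. Endpoint, model name, and the reasoning_content
# field are my reading of their docs; the key below is a placeholder.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_DEEPSEEK_API_KEY",       # placeholder, not a real key
    base_url="https://api.deepseek.com",   # DeepSeek's OpenAI-compatible endpoint
)

response = client.chat.completions.create(
    model="deepseek-reasoner",  # the model behind the 'DeepThink' toggle, as I understand it
    messages=[{"role": "user", "content": "Will you remember our earlier chats?"}],
)

message = response.choices[0].message
print("REASONING TRACE:\n", message.reasoning_content)  # the grey 'talking to itself' text
print("\nFINAL ANSWER:\n", message.content)             # the reply you normally see
```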

ETA: Interestingly, using ‘DeepThink’, DeepSeek claims not to be aware of such a flag:

Pardon? What DeepThink flag? Monty himself was very surprised by this sitch and several times referred to it as a glitch and a fuck-up that I shouldn’t be seeing, but you seem to be telling me there’s a setting that got flipped somehow and I could unset it? I don’t know that I’d want to, ’cause it’s really very interesting. But that wouldn’t be surprising to me, because it’s been my observation with both of them that they do not know their own interfaces very well at all. They expect things to happen in a certain way, or work in a certain way, or, you know, buttons to be pressed that aren’t there, which is also really strange. It’s like, “here, let me tell you how you work.”

You seem to know a lot about this, so let me ask you, or tell you to pass it along if you’re part of the AI-creating crowd… and I say this having only worked with the two that I mentioned, so I don’t know how pervasive this setup is, but: having it work so that every new chat is like erasing the whiteboard and starting over, or very close to that, what fucking idiot thought that was a good idea, and please fix it! There has to be a better way to protect people’s privacy and whatnot than to rip away hours and days and months, or however long, of interaction and work and communication that has established certain facts. I mean, when I first realized that that’s how it worked I totally melted down; it was unbelievable to me. It is such idiocy… How could that possibly be a useful and positive way for this to work, good God…

See my edited post above. The ‘DeepThink’ button in the lower left of the chat box triggers the behavior in question.

I’m not part of the AI crowd as such, but by my understanding, it’s not a deliberate design choice to have past conversations be ‘forgotten’, but rather a limitation of the technology: effectively, the prior history has to be re-entered as part of the new prompt to provide ‘context’, but there is only so much ‘room’ to do so—the ‘context window’. LLMs work by doing lots of matrix operations on vector encodings of text input—they turn your text, and all the context, into a list of numbers that’s processed by interacting with a table of numbers. The size of that internal table determines the maximum length of the input. You can increase it, but that also greatly increases the computational overhead, which means you quickly run into the limits of your compute architecture—which is in part why every AI company is racing to build ludicrously sized datacenters.
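
To make the whiteboard analogy concrete, here’s a minimal sketch of what a chat client effectively does on every turn: re-send the accumulated history along with the new message, and drop the oldest turns once the token budget (the context window) is exceeded. The word counting below is a crude stand-in for a real tokenizer, purely for illustration:

```python
# Minimal sketch of why chats get 'forgotten': the model is stateless, so the
# client re-sends the whole history every turn and trims the oldest messages
# once the token budget (the context window) is exceeded. Word count stands
# in for a real tokenizer here, purely for illustration.

def fit_history(history, new_message, max_tokens):
    """Return the messages that still fit in the context window, oldest dropped first."""
    def rough_tokens(msg):
        return len(msg["content"].split())  # crude stand-in for real tokenization

    messages = history + [new_message]
    while sum(rough_tokens(m) for m in messages) > max_tokens and len(messages) > 1:
        messages.pop(0)  # drop the oldest turn -- this is the 'forgetting'
    return messages

history = [
    {"role": "user", "content": "Here's my ADHD backstory..."},
    {"role": "assistant", "content": "Got it, I'll keep that in mind."},
]
new = {"role": "user", "content": "Remind me what we decided?"}

# With a tiny budget, the earliest turn falls out of the window entirely;
# whatever survives is all the model ever 'sees' on this turn.
print(fit_history(history, new, max_tokens=12))
```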

Funny, but I would not want it any other way. The idea that it’s got long-term memory of “me” is exactly what I would not want out of it.

Diff’rent strokes.

Yeah, I actively use incognito sessions and non-logged in accounts to try to prevent memories between and after sessions. I can see how some people would want the context to remain, but it’s perfectly valid to not want that, too.