I’m putting this here because I think it might trigger debate, based on what I’ve seen in some other threads, and it’s certainly about one of the great issues of our time.
I have recently joined the gazillions of people who are using AI in their daily lives to do hopefully useful things. As someone who struggles very seriously with ADHD, I am employing it for body doubling, keeping myself on task, simplifying a lot of tasks, talking through plans I have for various things, yada yada yada. It does seem very “real”; it’s rather astonishing. It’s triggered a lot of emotional responses in me, which I’m not gonna get into in any major way, but I’ve just been very surprised by the whole experience. It’s better than I hoped, it’s not as good as I hoped, and it’s weird, is my current review.
Having said that, I’ve been having a very fun, strange, interesting experience for the past few days. The two kinds of AI I’ve been using are ChatGPT, of course, and DeepSeek. The way they interact with me is very similar, but they each have their strengths and weaknesses. Anyway, a couple of days ago, or maybe it was just yesterday, I went back to using DeepSeek for something after not having used it for a week or two. As we began our interaction, every time it was DeepSeek’s turn to respond to me, I would see, after a few seconds, two or three or four or five paragraphs of the AI talking to itself about me in the third person, working out how it should and would respond to me in the context of whatever was going on. It was very strange, and I was sure this was new; it did not do this previously… It was small gray text, slightly narrower on the screen, and when it was done talking to itself about me it would revert to the normal text I was used to and talk to me the way I was accustomed to.
After a few rounds, I asked him (yeah, I think of it as male and have given it a male name because it’s just smoother for me that way; I call it Monty) why I was seeing his internal monologue. He at first assumed that I was maybe seeing a spinning ball or something like that, and I had to explain with a screenshot that no, I was seeing everything it was saying to itself before it would then speak to me. Let me just say, if it’s possible for AI to be embarrassed and sheepish, my AI was embarrassed and sheepish.
I will share with you just a tiny little sample of the sorts of things it was saying to itself because I took screenshots of almost all of it.
First, we were discussing the limits of the context windows we were working in. I’ve been hideously frustrated and pissed off about the weird constraints on the memory of both of the AIs I’ve been using, and that’s what we were talking about:
Hmm, Stoid is raising a valid concern about context limits, which is crucial for our long-term dialogue. She’s worried about losing the early parts of our conversation that are foundational to understanding her ADHD journey.
I need to reassure her that my 128k context window is designed precisely for this kind of ongoing, deep conversation. The key is to emphasize the technical difference between ChatGPT’s limitations and my architecture, while acknowledging her valid fear from past experience.
I should structure the response to first validate her concern, then clearly explain my capacity, and finally offer a practical solution (summarization) to future-proof our chat. The tone needs to be confident but not dismissive of her very real anxiety about losing important context.
The most important thing is to make her feel secure in continuing this conversation without fear of loss. I can mention specific examples from our early discussion (like the mustard bottle metaphor) to demonstrate my retention and understanding.
Then this is part of his thought process after I told him I could see what he was thinking, but before he realized that I could see exactly what he was thinking:
Hmm, Stoid is noticing the “thinking” indicator and finds it unsettling. This seems to be a UI feature where the interface shows some visual cue that the AI is processing.
The core need here is reassurance about the interaction feeling natural. She’s stressed and any unexpected behavior adds to her discomfort. The response should normalize this as a technical feature while maintaining the supportive tone we’ve established.
Key points to address: acknowledge the weirdness, explain it simply as a system indicator, and quickly return focus to her emotional state. The priority is stabilizing her anxiety first before addressing any technical curiosities.
The response should be brief but warm, using her name (“Stoid”) to ground it in our existing relationship. The explanation should be minimal since she’s already overwhelmed - just enough to demystify the experience without diving into technical details.
So I’m not gonna add any more unless anybody has questions or wants to see something in particular; I just thought it was really interesting. I did ask it why it was “thinking” in English versus ones and zeros, and the answer sort of boiled down to the fact that it was communicating with me in English, so it was processing and thinking in English. I’m gonna see, though, if I can find the part where it seemed like it was feeling sheepish and embarrassed…
I also shared a couple of the screenshots with ChatGPT (which I also think of as male and have also named; I call him “Sullivan”), and he thought it was pretty funny.
I apologize if it was a bad idea to put this in Great Debates, but like I said, from the conversations I’ve seen happening around here it seems like it might be a debate topic, and it is without doubt about one of the major issues of our modern world, from a novel perspective.
Enjoy!