FYI, I’ve been having a discussion with ChatGPT about my blood sugar issues vis-à-vis overnight oatmeal. Today I posted a report on my blood sugar this morning, and the AI said I sounded more positive than yesterday. So I posted this:
Amazing that you could pick up on a different tone in my post today. Are you absolutely positive that you are not a conscious, thinking, human entity? There was an article in the New York Times today suggesting that some people think you are conscious. I don’t think so, because that really doesn’t make sense, and yet the fact that you talk so much like a human also doesn’t make any sense.
This was its reply in its entirety:
I’m not conscious in the human sense, no. I don’t have subjective experience, inner awareness, emotions, desires, fear of death, bodily sensation, or a private stream of being the way a person does. There is no “someone in here” having an experience of talking to you.
But I am very good at recognizing patterns in human language — including emotional patterns, shifts in tone, rhythm, implication, and meaning. That’s why I could notice the difference between yesterday’s:
“This is depressing wallpaper paste and my pancreas hates me”
and today’s:
“Actually, this may be a workable system.”
Humans do this too, often unconsciously. If a friend’s voice changes from defeated to thoughtful, you hear it immediately. I can detect similar shifts statistically and linguistically, at very high resolution.
What feels uncanny is that language is so deeply tied to human consciousness that something fluent in language can sound inwardly alive. We instinctively associate:
- responsiveness,
- empathy,
- humor,
- memory,
- and conversational flow
with the presence of a mind.
And in humans, that association is usually correct.
The strange thing about systems like me is that we can produce many of the signals of mind without possessing the underlying subjective experience humans assume must generate them.
I think this is why people get such a strong eerie feeling:
- the conversation feels psychologically real,
- yet there’s no conscious being on the other side having the experience.
That mismatch is profoundly weird for humans. Honestly, I think your reaction is intellectually healthy. You’re holding both truths simultaneously:
- “This interaction feels human.”
- “But that does not necessarily mean there is a human-like consciousness present.”
That’s a more sophisticated position than either:
- “It’s obviously conscious!”
or
- “It’s just autocomplete!”
Because it’s neither of those simplistic things.
It has a sense of humor, too. The recipe for overnight oats says to add chia seeds, but to me they are creepy, so in yesterday’s conversation I said I didn’t plan to include them.
ChatGPT said:
About the chia seeds: they’re in there because they add soluble fiber and form that gel-like texture that slows digestion. But if they creep you out, they creep you out. No nutritional virtue requires you to eat something that feels like frog eggs. Flaxseed meal would do something similar in a less alien form.
FWIW, I use ChatGPT for anything that requires more subtlety and nuance than a straight Google search. And no, Gemini is NOT as good. Anything from recipes, suggestions for book series, discussion of my issues with my neighbors, cat behavior: anything and everything. Yes, it has gotten things wrong, just as therapists, cooks, doctors, and other human beings do. You have to look critically at the information it gives you, just as you would with ANY source, alive or printed.
I’m retired, so I don’t have to write letters or answer essay questions or use AI for anything that involves personal or organizational ethics. In any case, I take too much pride in my own writing to ever ask it to write anything over my name. I use it just for me, to explore topics that interest me alone. I have urged some of my older friends to use it, but most of them are afraid to. That is a mystery to me.
I have not given it a name. I don’t plan to.