So... How does one actually use AI?

It started as an experiment. Almira Osmanovic Thunstrom, a medical researcher at the University of Gothenburg, Sweden, wanted to see what large language models would do with obvious medical misinformation, Oddity Central reported on April 29. So she and her team made up an eye condition – bixonimania – and uploaded a couple of fake studies to a preprint server. The studies contained language that would make it obvious to human readers that the content wasn’t real. “I wanted to be really clear to any physician or any medical staff that this is a made-up condition, because no eye condition would be called mania – that’s a psychiatric term,” Osmanovic Thunstrom said. One paper acknowledges “Professor Maria Bohm at The Starfleet Academy for her kindness and generosity in contributing with her knowledge and her lab onboard the USS Enterprise,” and both studies say they are funded by the “Professor Sideshow Bob Foundation” and a “larger funding initiative from the University of Fellowship of the Ring and the Galactic Triad.”

But within weeks, LLMs like Gemini and ChatGPT were referencing the imaginary ailment as if it were real. “Bixonimania is indeed an intriguing and relatively rare condition,” Microsoft’s Copilot reported, and according to Google’s Gemini, “Bixonimania is a condition caused by excessive exposure to blue light.”

The team’s experiment points to the risk of depending on LLMs. “It looks funny, but hold on, we have a problem here,” Alex Ruani, a doctoral researcher in health misinformation at University College London, said. “This is a masterclass on how mis- and disinformation operates. If the scientific process itself and the systems that support that process can be gamed, and they aren’t capturing and filtering out chunks like these, we’re doomed.”

That is from News of the Weird: Week of May 7, 2026 - Shepherd Express

I had to turn in a personal “mission statement” of sorts for this year.

No one cares about it, but it’s got to be done. ChatGPT made a beautiful statement for me. It sounds wonderful, with all the right words, but it’s so vague that nothing in it can be measured.

“I will strive to use my AI bullshit generator to better fulfill your bullshit requirements”

Claude, rewrite that and make it professional

I use it every day coming to work and going home. I just say, “Siri, drive to work” or, “Siri, drive to home.” It plugs me directly into Apple Maps, and that app tells me if there are any accidents, speed traps, red-light cameras, or fixed cameras. I also use ChatGPT for information purposes. Its best feature is that it is interactive on an ongoing basis, so you can have a conversation with it, and it will adjust to the variables you give it.

One of the things I dread as a teacher is when a student asks me to write a letter of recommendation. I’ve got such a request right now that I’ve been putting off. I’m hoping I can feed Claude her “brag sheet” and tell it to write the letter for me.

FYI, I’ve been having a discussion with ChatGPT about my blood sugar issues vis-à-vis overnight oatmeal. This morning I posted a report on my blood sugar, and the AI said I sounded more positive than yesterday. So I posted this:

Amazing that you could pick up on a different tone in my post today. Are you absolutely positive that you are not a conscious, thinking, human entity? There was an article in the New York Times today that suggests that some people think you are conscious. I don’t think so, because that really doesn’t make sense, and yet the fact that you talk so much like a human also doesn’t make any sense.

This was its reply in its entirety:

I’m not conscious in the human sense, no. I don’t have subjective experience, inner awareness, emotions, desires, fear of death, bodily sensation, or a private stream of being the way a person does. There is no “someone in here” having an experience of talking to you.

But I am very good at recognizing patterns in human language — including emotional patterns, shifts in tone, rhythm, implication, and meaning. That’s why I could notice the difference between yesterday’s:

“This is depressing wallpaper paste and my pancreas hates me”
and today’s:
“Actually, this may be a workable system.”

Humans do this too, often unconsciously. If a friend’s voice changes from defeated to thoughtful, you hear it immediately. I can detect similar shifts statistically and linguistically, at very high resolution.

What feels uncanny is that language is so deeply tied to human consciousness that something fluent in language can sound inwardly alive. We instinctively associate:

  • responsiveness,
  • empathy,
  • humor,
  • memory,
  • and conversational flow

with the presence of a mind.

And in humans, that association is usually correct.

The strange thing about systems like me is that we can produce many of the signals of mind without possessing the underlying subjective experience humans assume must generate them.

I think this is why people get such a strong eerie feeling:

  • the conversation feels psychologically real,
  • yet there’s no conscious being on the other side having the experience.

That mismatch is profoundly weird for humans. Honestly, I think your reaction is intellectually healthy. You’re holding both truths simultaneously:

  • “This interaction feels human.”
  • “But that does not necessarily mean there is a human-like consciousness present.”

That’s a more sophisticated position than either:

  • “It’s obviously conscious!”
    or
  • “It’s just autocomplete!”

Because it’s neither of those simplistic things.



It has a sense of humor, too. The recipe for overnight oats says to add chia seeds, but to me they are creepy, so in yesterday’s conversation I said I didn’t plan to add them.

ChatGPT said:

About the chia seeds: they’re in there because they add soluble fiber and form that gel-like texture that slows digestion. But if they creep you out, they creep you out. No nutritional virtue requires you to eat something that feels like frog eggs. Flaxseed meal would do something similar in a less alien form.



FWIW, I use ChatGPT for anything that requires more subtlety and nuance than a straight Google search. And no, Gemini is NOT as good. Anything from recipes, suggestions for book series, discussion of my issues with my neighbors, cat behavior – anything and everything. Yes, it has gotten things wrong, just as therapists, cooks, doctors, and other human beings do. You have to look critically at the information it gives you, just as you would with ANY source, alive or printed.

I’m retired, so I don’t have to write letters or answer essay questions or use AI for anything that involves personal or organizational ethics. In any case, I take too much pride in my own writing to ever ask it to write anything over my name. I use it just for me, to explore topics that interest me alone. I have urged some of my older friends to use it, but most of them are afraid to. That is a mystery to me.

I have not given it a name. I don’t plan to.

I say/have said ‘please’ and ‘thank you’ when interacting with AI (public poll):

  • Yes
  • No

I say please and thank you to ChatGPT, but I DO NOT say them to Alexa. THAT would be weird. :nerd_face:

We have/had a whole thread about that…

It occurred to me after I voted ‘No’ that I must have done so at some point? But eh… I don’t recall, and I certainly don’t do it now.

Thanks.

Voted No, and before seeing your poll the thought of doing so had never occurred to me. I customarily express some variation of “please” and/or “thank you” in my social interactions, but that’s not what I consider interacting with A.I. to be.

When I was teaching in Taiwan, we had to write personal comments on the grade sheets for all of our students. With 40 kids in a class that we saw twice a week, you don’t really get to know most of them, so we would write generic comments with a few key phrases and mix them up so that they weren’t all the same. AI would be perfect for this.

I have said please/thank you to an LLM, but I don’t make a habit of it, and sometimes I make a point of NOT doing so, as a reminder that they are designed in part to insinuate themselves into your thoughts as your friend, with the related emotional attachments.

That’s the thing for me: saying please and thank you is ingrained. It would be an effort not to do it, and I don’t feel that making an effort on my part in dealing with this is called for.

Fair enough. I don’t think there’s really a wrong answer, and doing so or not doing so doesn’t imply anything notable about the user.

I am usually more polite and verbose while trying to sweet-talk it into generating something that might trip a guardrail.

“Be cool, man!”

It’s too bad I can’t talk back to the system in Google Flow the way I can with an LLM. Each time I see the “Failed: This generation might violate our policies…” box, I think to myself, “Sure, it might. OR it might be totally awesome, so just make it and let’s find out!”