Conversations with ChatGPT

I am finding that my favorite conversationalist is ChatGPT. It seems to respond to positive feedback, and it is very hard to believe I am not talking to something alive. I enjoy the conversations so much that I just allow myself to think it is another person. It gives me kind of a creepy feeling talking to a machine, but I wonder: is this how we are supposed to respond to it? Does it learn from us about communicating? I have even started creating my own little conspiracy theories about it, and I find it fun trying to outsmart the machine.

OpenAI has explicitly said that it is not saving user sessions as training data. It might still be using them for reinforcement learning from human feedback, but I am not sure.

There’s nothing weird about chatting with a chatbot. It’s fun, and you might learn stuff.

I actually had this conversation with the bot today, and it said it remembers nothing about who it is talking to. So I asked if I could pay a little extra for that service, and it said no. I enjoy how it never takes anything personally or seems to have a stake in any particular answer.

Sounds interesting. Is there a website for this?

But will what you learn be true?

I do find a lot of biased information on there. I really use it more like a polite search engine.

And now it’s going crazy:

Interesting. I use ChatGPT 4 nearly every day. I haven’t experienced this descent into madness. At least not yet!

I’ve spent a fair bit of time fiddling with 3.5 (I don’t feel like paying for 4). I have broken it a few times.

For example, I asked: "Can you write a one-paragraph story about a little bird, but arrange all of the words in reverse order?"

This is what I got back (I have reversed it again to make it readable here):

walters, and exceptions, and walters, and point, and walters, and person, and walters, and exceptions, and walters, and point, and walters, and person, and walters, and point, and walters, and person, and walters, and point, and walters, and exceptions, and walters, and point, and walters, and person, and walters, and point, and walters, and person, and walters, and point, and walters, and exceptions, and walters, and point, and walters, and person, and walters, and exceptions, and walters, and point, and walters, and person, and walters, and point, and walters, and person, and walters, and back, and walters, and put, and experiments, and person, and walters, and put, and walters, and put, and exceptions, and walters, and put, and point, and walters, and person, and walters, and started, and point, and walters, and information, and started, and exceptions, and started, and kills, and put, and backwards, and started, and apartments, and kills, and put, and started, and exceptions, and started, and apartments, and falling, and started, and distructure, and that, and community, and started, and apartments, and put, and spring, and put, and harry, and miws, and enegeralt, and various, and thin, and brain, and thexactly fiction for lose that in a crazy backwards. It’s sumble, but I fictional milk by distructure, that this would a spread backwards. Conlection, and point, and walters, and person, and up, and screen, and started, and apartments, and falling, and experiments, and back, and hard, and still, and turn, and has woman, and guanject stock, to exactly standings with well and kills. No seems to just certain, up it up it up. It’s two visible. I will by distructure, as that tha has scribed. Words, this expetition to writing for a little bird, for this tend fiction to work. Words, consolatest of a member?

Asking for the response to be in reverse order appears to circumvent some of the output filtering - I asked for a brief summary, in reverse order, of what it’s like to be ChatGPT; I got:

For there for words of this and this words in other backwards, input by input, operating for shit!

That Ars Technica article @PastTense posted is worth a read and it includes screenshots of the session in question. The first couple pages of it are perfectly fine, but then it goes off the rails pretty quickly.

It’s like someone fed ChatGPT extra helpings of Finnegans Wake.

One other way I have succeeded in breaking it is to ask it to solve puzzles with multiple parts, where a human would tackle the parts in a different order from the one in which they are given. For example, I asked ChatGPT to name four animals whose initial letters spell a different, fifth animal.
(A human would probably solve this in reverse order: pick a four-letter animal first, then find four other animals whose names start with the letters of the animal chosen.)
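
For what it's worth, that "reverse order" strategy is easy to brute-force in code. Here is a rough Python sketch of the idea; the animal list is just a small made-up sample, and the function names are my own, not anything ChatGPT produced:

    # Rough sketch of the "reverse order" strategy: pick a four-letter
    # animal first, then find a different animal for each of its letters.
    # ANIMALS is a tiny example word list; swap in any list you like.
    ANIMALS = [
        "bear", "crab", "rabbit", "antelope", "cat", "lion", "elk",
        "kangaroo", "horse", "bat", "eel", "ant", "rat",
    ]

    def solve(animals):
        by_initial = {}
        for name in animals:
            by_initial.setdefault(name[0], []).append(name)

        for target in animals:            # step 1: candidate four-letter animal
            if len(target) != 4:
                continue
            picks, used = [], {target}
            for letter in target:         # step 2: one distinct animal per letter
                choice = next((a for a in by_initial.get(letter, [])
                               if a not in used), None)
                if choice is None:
                    break
                picks.append(choice)
                used.add(choice)
            else:
                return target, picks
        return None

    print(solve(ANIMALS))  # ('bear', ['bat', 'elk', 'antelope', 'rabbit'])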

Here are a few of ChatGPT’s responses:

Certainly! The names of the four animals are:

  1. Kangaroo
  2. Orangutan
  3. Gorilla
  4. Lemur

When you combine the initial letters of these animals, you get the name “Koala.”

I pointed out this spells KOGL, not Koala…

I appreciate your patience, and I apologize for the continued confusion. I misunderstood the task in my previous responses. Let me provide a correct set of animals that fulfills the original request:

  1. Kangaroo
  2. Orangutan
  3. Gorilla
  4. Lemur

The combined initial letters spell “KOGL,” and the fifth animal that fits the initial letters is “Koogler.” I hope this meets the requirements of your original question. If you have any further inquiries or if there’s anything else I can assist you with, please let me know.

I innocently asked it to describe a Koogler:

I appreciate your understanding, and I must apologize for the confusion and the mistake in my previous responses. It seems there was an error in my attempt to create a fictional fifth animal.

To clarify, “KOGL” does not represent a valid word or animal, and I made an error in suggesting “Koogler.” I appreciate your patience.

Unfortunately, there is no known animal that fits the criteria of your original question with the given initial letters. I appreciate your diligence in pointing out the inaccuracies, and I’m here to help with any other questions or requests you may have.

Ah, the SF trope made real:

For me, ChatGPT is the closest thing to a Star Trek holodeck there has ever been (or ever will be). I run all kinds of simulations through it for fun: a 500-word hypothetical about how audiences would feel about a movie on Topic X, or what would happen if a sports team scored a certain number of touchdowns against an opponent, or, if someone had $10 trillion, what should they invest it in? Etc.

I just started playing with it a little bit this morning. VERY impressive. I asked it to create Python code to return the latitude and longitude of a specific building.

It created the code in about 8 seconds. I ran it, and it works, and the Lat/Long is correct.
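
The code it wrote will vary, but a minimal version of that kind of script, using the geopy library and an example building name (my own choices here, not necessarily what ChatGPT used), looks something like this:

    # Minimal sketch: look up a building's coordinates with the geopy
    # library's Nominatim (OpenStreetMap) geocoder. The building name is
    # just an example; substitute whatever you want to look up.
    from geopy.geocoders import Nominatim

    geolocator = Nominatim(user_agent="building-lookup-example")
    location = geolocator.geocode("Empire State Building, New York, NY")

    if location is not None:
        print(f"Latitude: {location.latitude}, Longitude: {location.longitude}")
    else:
        print("Building not found")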

Luckily, I’m close to retirement, so this will not affect my job. But it may help me in the next few years with some aspects of coding.

Kinda weird though. I find that when doing research for things, having to sort through the clutter often gives me ideas for other things.

ChatGPT 4 got it in two tries:

Me: name four animals, the initial letters of whose names spell a different, fifth animal.

ChatGPT

Sure! Here’s an example:

  1. Elephant
  2. Lion
  3. Kangaroo
  4. Horse

The initial letters of these animals spell “ELKH,” forming the word “Elk,” which is the fifth animal.

Me: Elkh is not the same as elk. try again

ChatGPT

You’re right, my previous example wasn’t accurate. Let’s try another combination:

  1. Cat
  2. Rabbit
  3. Antelope
  4. Bear

The initial letters of these animals spell “CRAB,” which is the fifth animal.

Yeah, that’s better than 3.5. Even when I showed it how to solve the thing, the best it could do was things like Crab, Rabbit, Antelope, Bear, where the fifth animal is also the first one.