How well do you treat AI? Are you kind to it?

I speak to Generative AI as if I were speaking to another person. In most cases that’s polite.

For example, I use ChatGPT to help prepare outlines and descriptions for YouTube videos. A month or so back I was remembering my teenage experience with ham radio: the process I went through to earn my license, the Christmas day when my parents gave me my own ham radio, and that fateful day when Mom shut down the station in a kind way (Here’s a computer! The radio goes.) in response to growing grumbling from the neighbors about TV interference.
It’s a humorous and fun story to tell and will make a neat YouTube video, but I wanted to better flesh it out and organize my thoughts, so I had some long conversations with ChatGPT 4o, where I explained various bits, and it filled in so much that I hadn’t thought of. Occasionally it would say something like (regarding installation of the antenna and radio) “Your dad and his work buddy had all of the radio knowledge and…” and I would say “Actually, Dad’s radio knowledge was limited, his friend was the one who did all of the work”, and ChatGPT would rewrite the paragraph, exactly as if I had a personal secretary helping me.

And it is not just a bot. Generative AI is far beyond being just a bot. I simply mentioned in my ham radio discussion that we got the radio set up and I was all ready for my first broadcast, but the day just happened to be Field Day–that means nothing to 99% of the readers here, but ChatGPT immediately understood, knowing that Field Day is a yearly ham radio competition with many aspects, including trying to make as many distant contacts as possible throughout the day–speed dating for hams–make contact, exchange info, move on.
ChatGPT discussed how intimidating it must have been for a noob 13-year-old ham to dive into the absolute maelstrom that was the airwaves on Field Day, even discussing the various emotions that I probably felt.

I don’t use the output directly, but it is absolutely a game changer for getting things in order and fleshing out details.

At work, I use GitHub Copilot frequently and am amazed at how I can simply say what I want, and Generative AI makes it happen. “Make a new web page. Show me a list of network test output files to choose from; you’ll find them in this directory. When the user clicks on one, then show the details below. Here’s a sample of the data. Notice how each line is a different kind of record, but they are all tied together with a test identifier. Grab the record that says ‘header’ and use that to display a nice header, now format the rest of the network data below that.”
Son of a gun, it understands all of that and writes me all of the code I need to do exactly that, in whatever language the project is written in.
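To give a flavor of what that involves, here’s a rough Python sketch of the kind of record-grouping logic I’m describing. Everything in it is invented for illustration: the record layout, field names, and sample data are assumptions, not my project’s actual format.

```python
from collections import defaultdict

# Illustrative format assumption: each line is "test_id,record_type,payload".
# The real network test files look different; only the idea matters here.

def parse_records(lines):
    """Group records by test identifier, pulling out the 'header' record."""
    tests = defaultdict(lambda: {"header": None, "records": []})
    for line in lines:
        test_id, record_type, payload = line.strip().split(",", 2)
        if record_type == "header":
            tests[test_id]["header"] = payload  # used for the page heading
        else:
            tests[test_id]["records"].append((record_type, payload))
    return tests

sample = [
    "T42,header,Throughput test on eth0",
    "T42,latency,12.3 ms",
    "T42,jitter,0.8 ms",
]

for test_id, info in parse_records(sample).items():
    print(f"=== {info['header']} ({test_id}) ===")
    for record_type, payload in info["records"]:
        print(f"  {record_type}: {payload}")
```

The point isn’t this particular code; it’s that Copilot went from a plain-English description to working logic like this without me writing any of it.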

To answer the OP, let me describe one feature of GitHub Copilot: an organization can decide if they want to prevent Generative AI from creating code that matches public code. I am not sure why, but the choice is there, and my organization has turned this feature on.

As I am coding, it occasionally gets in a rut where code it generates matches public code. I watch as the code starts appearing on the screen…fifty lines or so…looking good…and then it’s all whisked away and a message says “The output matched public code. Please reword your prompt”.

I don’t curse at Copilot, but it does get heated.
“Do not use public code. Add a feature like X that does Y…and so on”
then
“I said DO NOT USE PUBLIC CODE. This is not difficult. You can write it all on your own. Please add a feature like X that does Y and blah blah blah.”

But that’s as harsh as it will get. It is exactly like working with a junior programmer who I send off to do some work; I calmly point out some mistakes and they go off and fix them.
This is an amazing tool.

I think that Smapti is overly dismissive. We have passed a knee in a curve. We are not talking about fighting Siri to get her to play “the original Broadway cast version of Rent, not the album”. We are not talking about “Your call is important to us.”
We are talking about true intelligence that creates new content.

Last week I subscribed to ChatGPT to get access to more time and features. I realized that it is a turning point when I add a $20 monthly subscription for AI to my other monthly subscriptions such as Internet, YouTube, and Cellular.

I’m nice to AI for the same reason I’m nice to dogs.

I don’t actually know anything about a dog’s internal life of thoughts. For all I know there’s nothing there, and they’re just machines designed to work my emotions to get treats and attention. But they do a good enough job imitating the basics of human social and emotional interaction that it feels wrong to treat them badly. So it’s not that I feel guilty about mistreating the machine, but there’s a gut principle that human-seeming things ought to be treated with humanity, and it feels bad for me to violate that, whether I know it’s a real mind or not.

An LLM doesn’t show quite as much emotional range as a dog. I would rate them somewhere between a tamed parrot and a tech support employee. I don’t think abusing an LLM causes actual harm, but they mirror humanity enough that I feel I’m harming my own humanity by abusing a human-like thing. So although I don’t go overboard with the pleasantries, I do avoid rudeness entirely with an LLM.

More broadly than that, I habitually sprinkle my tone with softeners like “would you mind” and “sorry, I misspoke”. The LLM is trained on a corpus of human text, and in that corpus polite requests presumably tend to draw more complete and correct responses, so I suspect I get better answers if I use at least a minimally respectful, cooperative tone.

So in short, I’m nice to LLMs so as not to offend my own sense of humanity, and to conform to my own social habits, and because I suspect it improves the quality of the responses.

As others mentioned, part of the benefit/gimmick to LLMs is that you can use natural language as you would a person. So I do so, in part because it’s just more fun that way.

“Can you please give me ten ideas of birthday gifts for goldfish?”
Sure, here’s ten ideas for gifts a goldfish might enjoy…
“Thanks, are you sure that a magnetic chess set is appropriate for a fish?”

I suppose I could just say “List ten gifts for goldfish” but nah.

While I don’t worry about hurting its feelings and don’t have any specific emotional attachment, people have been giving inanimate objects “emotions” since the dawn of time. If people can say their car, boat, or sewing machine is “happy”, someone’s gonna feel it about an LLM.

You can also say “thank you” to your coffee machine when it finishes brewing your coffee because, hey, it doesn’t cost you anything. But most people would find that ridiculous. The fact is that AI does not have feelings, and there is no more reason to be polite to it than there is to say thank you to your alarm clock after it satisfactorily wakes you up in the morning. It is a tool and nothing more.

But if I get into the habit of talking rudely to machines, it may carry over to talking rudely to people. It’s like making sure a gun is always unloaded: good procedure.

It’s for indemnification purposes. Sometimes Copilot’s results are a little too good at mimicking business context as opposed to just the code structure, and it can create an overwhelming impression that the code was copied from an external source. Due to the viral nature of GPL licenses, this can create a huge risk of being compelled to open-source any derivative works.

Companies will risk paying damages for accidentally taking closed-source proprietary code; in fact, I think certain tiers of Copilot will indemnify against that. But mandatory public disclosure of your own proprietary code is potentially catastrophic.

I mean maybe. I’m not necessarily convinced that is likely at the current state of AI, but it is certainly possible I suppose, and probably becomes more of a risk as AI becomes more sophisticated and the line between AI and humanity becomes even more blurred (e.g. humanoid robots, etc.).

Exactly. To quote Kurt Vonnegut, “We are what we pretend to be, so we must be careful about what we pretend to be.” The more you mistreat human-like things, the more likely you are to mistreat actual humans (including yourself), so it’s a good practice to be consistent in treating human-like things well.

I’m not disagreeing, but as I tried to better explain in my post above, I’m not sure AI, in its current state, is “human-like” enough for this to be a risk. Maybe in the future.

For me, it’s close enough that erring on the side of caution feels more comfortable than being a jerk.

Again, not out of concern of how the AI feels about it, but out of concern how it makes me feel, and how it affects my behavior toward people.

I would love for AI to have a feature like “hey, you’re being kind of a dick lately, is everything OK?”

That’s fair, and let me clarify that I’m not saying it’s fine to be a jerk to AI; that would be weird too. I just treat it like any other tool, in a neutral, utilitarian manner, without feeling the need for pleasantries, compliments, or acknowledgements.

It occurred to me the other day that kids will have to be taught in school how to work with Generative AI in order to survive in the world.
This conjures up an image of tween and teen boys (and girls too, I suspect) typing all kinds of crazy things in their prompts, ranging from the risqué to the disgusting to the completely illegal, and everything in between.

I am curious about how the various GPTs respond to an onslaught of unfettered teenagers doing their worst.

I’d rather not start typing in nasty stuff just to find out.

With some version of “Sorry, but as a Large Language Model, I am prohibited from discussing the best ways to seduce a goat while making nuclear bombs in your basement. If you would like information about goats in literature or nuclear weapon accords between nations…”

I have Alexa but recently unplugged it due to the new terms of service. I started out polite, saying Please and Thank You. At some point, though, when Alexa responded in exactly the same way whether I used them or not, it felt weird and unnatural, so I stopped. This was years ago.

When I first started using ChatGPT, I was also polite, using Please and Thank Yous. At some point, I read an article saying that dropping the pleasantries gives better results. The point wasn’t to be rude, but to phrase requests directly: “Describe for me how to set up local network file sharing.” “Create an example of an elven feast in the Forgotten Realms.” Phrasing such as that.

I think I try to be mindful of how I’m phrasing things but firm in asking for what I want.

I have been paying for ChatGPT for a while now and I am wondering if it’s time to move on. The last several times I gave ChatGPT 4.5 a prompt, it returned nothing. It was the prompt above about an elven feast, but a bit longer, asking it to act like a poet or scholar and to describe the setting, furniture, and utensils used. After thinking about it, it came back blank. I tried Gemini, and the same prompt returned something.

I don’t know where this puts me with regards to the OP but that’s been my journey so far.

Thanks for the discussion!

This morning I am inclined to get a little more salty.

For the third time I have had to type this:
“Please be very careful to rewrite without using public code. Try again.”

Then after another failure,
“You continue generating hits on public code. Please try again.”

This time it worked. But I’m getting tired of telling it to work around its own hall monitor.

For me, being rude takes effort.

An LLM isn’t worth that effort.

Beyond that, I can’t picture what “unkind” could mean.

That sounds like some sort of error. You mean it literally wrote nothing? That’s not supposed to happen. I’ve never not gotten a response out of ChatGPT or similar LLMs in the hundreds of hours I’ve put in on them, except when an exception was thrown at me (server full, an error occurred, try again later, or something to that effect).

Yes, nothing. It would pop to the top, think for a bit, then fail. This happened after a refresh and once with a new prompt. I have used it for months, and this started this past month. I will keep trying.

Thanks for the discussion!

Yeah, I’ve found it’s been timing out much more than usual in the past two weeks.

Well, darn, I can’t embed the picture.

That is what happens. I don’t think my prompt was that complex.

Thanks for the discussion!