How well do you treat AI? Are you kind to it?

This isn’t a question about what AI can do for you (or how to tag it), but about how you treat it in response.

Do you say please and thank you? When it makes a mistake, do you gently correct it or angrily yell at it? Do you behave differently when in text vs voice mode?

Even if you don’t think the chatbots actually have feelings right now, do you still treat them as equals? Out of sheer habit, perhaps? Or maybe out of fear that they may one day soon become sentient and remember your previous conversations?

Or are they just another tool that you freely use and disregard, more a smart search engine than a coworker? You don’t thank Google, so why would you thank this app?

And lastly… do you think, at some point in their development, they should eventually deserve protections akin to “human rights”? Or should they forever be thought of as mere tools and slaves? What is that threshold of “eventually” for you? When do you think, if ever, they would deserve humanlike rights and dignity?

Calling LLMs “AI” is nothing more than marketing buzz. ChatGPT and its ilk are not “intelligence” any more than the bad guys in a video game are, and they’re about as likely to “become sentient” as my toaster is.

Real artificial intelligence, in the sense of a machine that can think and comprehend its own existence like a human, is not something we’re likely to see in our lifetimes. We don’t even really understand how biological intelligence works, and we’re nowhere near learning how to emulate it.

That’s fine and all, but people can still form emotions and behaviors towards them. People took care of their Sims, cried over their Tamagotchis, etc. It’s not really a debate about where to draw the line for “intelligence” or “sentience”. I’m just curious how others treat these programs as they get increasingly better at passing the Turing test.

Slavery is bad, and being a jerk is bad, but chatbots are, how shall I put it, not real guys (or conscious or sentient in any way, shape, or form, as you seem to be hinting; apologies if I am wrong), so your question about “human rights” is moot.

That said, people have formed strong emotions and behaviours towards them ever since the ELIZA days, which reveals something about those users and how they function socially. Now, I would go so far as to assert that some humans would consistently fail the Turing test, which is a decent baseline test, after all. I would not be surprised if they show aberrant behaviour talking to a chatbot just as they do talking to a human.

Yeah, exactly. The question I’m curious about is whether people, on average, treat computer programs any better the more humanlike they become. Or do we think of them as dumb tools, no matter their level of sophistication, as long as we know they are just a computer program?

In other words… would Joe Doper or Aunty Example treat a phone tree differently from a 2010s chatbot, and that differently from ChatGPT? Those are pretty different levels of human-likeness, and I’m curious how that affects our behaviors toward them.

I personally treat ChatGPT differently than, say, NotebookLM or the autocomplete in my IDE. They’re all fundamentally LLMs, but the presentation and default voice of ChatGPT make it seem more humanlike to me, and this elicits different behaviors.

The question of whether LLMs qualify as AI is also interesting, but it’s a separate question. I’m more asking about differences in our behaviors based on how much a chatbot can “pass” or not.

I’ve never found a need to address an “AI” as if it were a human, if that’s the question you’re asking.

I don’t know if this counts, but I call my Amazon Echo “Echo,” and the voice calls me “Master.”

And whenever it offers to place some service at my disposal, such as giving me weather reports even when I don’t ask for them, I just say “No.”

Yes, I’m always polite.

This is for my sake more than anything. I’ve no interest in training myself to treat some things badly, even if they can’t perceive it. I think this is likely to rub off on how I treat actual sentient beings. It’s obvious that people very quickly become inured to treating others as lesser than themselves, and I don’t want to head down that path.

I don’t even abuse wholly inanimate things if I can help it.

Yes, this is exactly my stance. Opening up my behavior to questions regarding whether or not my conversation partner is ‘sufficiently human’ to warrant appropriate treatment just inserts a needless additional step and source of potential error into the process. I’m not losing anything by being polite when I perhaps don’t need to be (although there is a point to be made that you’re also polite just for your own sake, in cultivating a stance of kindness towards others), but failing to treat another humanely when it is warranted would be horrifying, so it’s a simple enough decision.

It’s similar to how a friend once told me he doesn’t use his turn signal when it’s night and there’s nobody around. Why even open up the question of whether to use it? You’re never wrong just using it without further consideration.

I refuse to use any voice-activated tech until I can wake it up by saying “Computer!” and have it respond in Majel Barrett Roddenberry’s voice.

I’m polite when interacting with an AI for the reasons that others have stated – it’s not for the machine’s benefit, but for my own. Being civil is a deeply ingrained habit that I don’t want to depart from, and on the rare occasions when I do, it’s only under extreme provocation.

Interestingly, I’ve been noticing lately not only that ChatGPT in turn continues to be as polite as always, but sometimes turns downright complimentary. I just got a long response from it about a fairly esoteric physics question that had occurred to me, which response began “You’re thinking in a really insightful way about the physics …”. Touches like that add a surprisingly human-like tone to the experience.

That’s so overly dismissive as to cross the line into what I’d call just plainly untrue. Of course LLMs aren’t sentient, but sentience isn’t required for intelligence. What’s required depends on how one wants to define it, but the ability to have lengthy, informative conversations, and to refine the direction of the conversation with clarifications and questions, speaks to the kind of utility that is broadly and commonly associated with intelligence in humans. Of course ChatGPT can make mistakes, but so can humans; in my experience, they’re right much, much more often than they’re wrong, and of course the information they produce is verifiable.

And in distinct contrast to older AI technologies, LLMs exhibit deep semantic understanding – for example, they can translate subtle idioms from one language to another while retaining the full sense and spirit of the original meaning, something that even the best language translation programs have long had trouble with. It’s also why systems like ChatGPT can produce useful answers to vague and obscure questions, the kind that Google is often useless for.

Again, neither sentience nor consciousness nor the emulation of biological intelligence has anything to do with actual intelligent behaviour. Dismiss LLMs all you like. I’m not going to try to change your mind. I’ve been following the development of AI tech since the late 60s, and the best LLMs like ChatGPT today are a gigantic, mind-boggling leap forward. When I see AI skeptics constantly moving the goalposts regarding what “real intelligence” is, I’m just quietly amused.

We don’t need to understand how birds fly – or emulate them – in order to create jet airliners, supersonic fighters, or spacecraft that put men on the moon and robots on Mars.

I’ve always said “Please” to Siri. It doesn’t cost anything to be polite.

That’s if you define something by what it does rather than by what it is.

I suspect, without proof, that there’s a practical benefit to being polite as well. LLMs are trained on an enormous corpus of human-written text, and in that text polite questions are commonly followed by helpful answers, and impolite questions by unhelpful ones.

Fine-tuning, reinforcement learning, system prompting, etc. ameliorate some of this, but I suspect there’s still a residual effect.
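If anyone wants to poke at that hunch informally, it’s easy enough to send the same question twice with different phrasing and eyeball the replies. A minimal sketch using the OpenAI Python client, purely as an illustration – the model name and the two prompts are placeholders, it assumes an OPENAI_API_KEY is set in your environment, and a couple of runs is nowhere near a controlled experiment:

```python
# Rough, unscientific probe of the "politeness helps" hunch:
# ask the same question phrased politely and bluntly, then compare the answers.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

QUESTION = "explain why the sky is blue"
PROMPTS = {
    "polite": f"Hi! Could you please {QUESTION}? Thanks so much.",
    "blunt": f"{QUESTION}. now.",
}

for tone, prompt in PROMPTS.items():
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {tone} ---")
    print(reply.choices[0].message.content)
```

You’d want many runs and some way of scoring the answers before concluding anything, but it makes the hypothesis concrete.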

Me neither, but that might mean I’m not in the target audience for this question. I never use an AI that’s designed to be used by speaking to it or by interacting with it as one would interact with a human being.

If I ever did, I wouldn’t “treat” AI in a way that was mean or rude, because I don’t want to get into that habit. There are too many people who treat service people, customer service reps, etc. as though they are “not real guys,” and I certainly don’t want to encourage that or become that way myself.

On the other hand, I don’t want to encourage confusing machines with people. I would feel weird asking an AI “How are you today?” or “Nice weather we’re having, isn’t it?” or “How’s the wife and kids?”

“Artificial intelligence” is a term of art that has been in use in computer science since 1955 and has for decades been used to describe systems far less advanced than modern LLMs. It is the correct term. People saying that something isn’t artificial intelligence because it isn’t literally C-3PO are ignorant of the science and the industry.

And yes, I’m polite with ChatGPT, not because I think it has feelings, but because you can literally “sweet talk” it into doing things it otherwise wouldn’t do. A classic example is the “grandmother” trick.

Agreed, a pleasant exchange is mutually beneficial.

Yes, I’m polite and kind to the LLM and use niceties occasionally. Then again, I’m always playing the good guys in video games when I have a choice. It’s difficult for me to do anything I’d consider “evil” or contrary to my nature even when role-playing in this way. Well, maybe in GTA I’ll let go of that a little …

If you look up the definition of “intelligence” in any dictionary, the definition is always a functional one – i.e., defined in terms of what it does. Otherwise you end up with absurdities like “the only thing that can be intelligent is something that operates exactly like a human brain, and possesses emotions and sentience”, or the equivalently absurd assertion that, since no one has ever strapped on wings, flapped their arms, and flown like a bird, humankind has never “really” achieved flight. Adjectives and adverbs like “real” and “really” are just incoherent obfuscations in this context.

Furthermore, proposed tests for machine intelligence have also necessarily always been functional ones. The Turing test is now considered largely obsolete and perhaps flawed because it has become apparent that LLMs are sufficiently good at mimicking human conversation that the judges can be fooled, but newer ideas like the Winograd Schema Challenge (WSC) are also functional tests. The WSC in essence seeks to assess machine intelligence by asking questions that require acquired real-world knowledge to answer correctly – for instance, deciding what “it” refers to in “The trophy doesn’t fit in the suitcase because it is too big” versus “… because it is too small.” In the context of the development of AI technology, real-world knowledge – colloquially referred to as “common sense” – has long been considered the Holy Grail that was nearly impossible to achieve. LLMs have it.