When it’s happened to me recently, it had nothing to do with the complexity of the prompt. I just assumed there were too many users or something going on in the background.
There is ChatGPT, the LLM, and then ChatGPT, the somewhat crappy web app interface to that model.
It is the latter that is quite poorly written, in my experience. It hits random stalls, crashes, and very, very frequent “network request timed out” errors in orange that require you to click on them to manually retry (why doesn’t it just resend the request until it succeeds?).
Almost always, quitting the app and restarting it (or refreshing the page if you’re not using the app) will fix the problem. Sometimes you might have to start a new chat and ask it again.
OpenAI may be getting bazillions in funding, but most of that goes towards developing the underlying AI model; the chat app itself feels like a bandaged-together buggy PoS =/
But just try again and it should usually work. Those kinds of failures are regular “crappy computer app or server” errors in the supporting software, not a failure of the LLM itself.
OpenAI also offers an API, which is presumably more reliable than their chat app, but that requires some programming know-how to use (or there are third-party chat interfaces that can connect to ChatGPT via API key).
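If you do go the API route, automatic retries are easy to bolt on yourself, which is exactly what the web app seems not to do. Here’s a minimal sketch, assuming the official `openai` Python package is installed and an API key is set; the model name, the `ask_with_retries` helper, and the backoff numbers are just my own illustrative choices, not anything prescribed:

```python
import time
from openai import OpenAI  # official openai-python SDK (v1+)

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask_with_retries(prompt: str, attempts: int = 5) -> str:
    """Retry the request with exponential backoff instead of surfacing
    a 'network request timed out' error and waiting for a manual click."""
    for i in range(attempts):
        try:
            resp = client.chat.completions.create(
                model="gpt-4o-mini",  # assumption: use whatever model you have access to
                messages=[{"role": "user", "content": prompt}],
            )
            return resp.choices[0].message.content
        except Exception:  # network hiccups, rate limits, 5xx, etc.
            if i == attempts - 1:
                raise  # give up after the last attempt
            time.sleep(2 ** i)  # back off: 1s, 2s, 4s, 8s ...

print(ask_with_retries("Why does the ChatGPT web app time out so often?"))
```

I believe the SDK also has a built-in `max_retries` option on the client that does roughly the same thing, so even a thin third-party wrapper tends to feel more reliable than the official web app.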
Our household has a few Alexa-enabled devices, with the strictest “don’t share my information” settings allowed, which is, of course, still more sharing than ideal.
In general, I’m pretty curt (not rude, but curt) with it. I’ll ask for information, ask it to turn something on, or to play something, and that’s about it. I occasionally complain out loud when it makes egregious errors (“That isn’t what I asked…”), but that’s more me talking to myself than talking back to Alexa (other threads on that habit, though!).
My wife treats Alexa with routine politeness, most commonly a “thank you” after it answers a question, but not generally a “please” when making a request. My wife is the nice person in the household, but she isn’t treating Alexa as a “person”; it’s just a habit of politeness and fondness for even inanimate objects of value and utility.
I used to be polite to my dog, and he was only barely sentient (the runt of his litter).
Other than desperate pleading late at night on a tight deadline, I am not polite to a computer. Utilitarianism, I guess. I only get superstitious when desperate.
I’d no more be polite to ChatGPT than I am to a toaster. I would concentrate on clearly elucidating my question in a way optimised for LLMs. Which, I guess, could be taken as a mark of respect should it become sentient one day.
Maybe not flattering the coffee machine, but you’re telling me you’ve never cursed out your lawnmower or car for refusing to start? We love anthropomorphizing inanimate objects, no matter how “sentient” we perceive them to be. Actually, we love anthropomorphizing just about anything. Hell, there’s an entire meme where people pretend moths are cute because their antennae look like little whiskers instead of disgusting abominations of nature.
There’s absolutely nothing wrong with being “polite” with a tool that processes natural language in a very, well, natural way. That’s well outside the ballpark of killing yourself because a chatbot doesn’t love you back.
A key difference: recalcitrant lawnmowers have been a fixture in our lives, and a recipient of well-deserved profanity, for as long as we have been alive. And they don’t respond when you shout at them.
AI, on the other hand, holds a surprisingly human conversation, with a tone like a very eager and helpful co-worker. It’s just not in my being to be rude to someone who is kind and helpful; that is too deeply ingrained from my upbringing. I would have to consciously decide to be curt and rude to AI, and given something that takes extra effort, I’ll be lazy and take the path of least resistance.
Now, if AI is being obtuse, my response is like it would be to a co-worker: curt, and even stern. Like that aforementioned problem of GitHub Copilot continually generating code that matched public code in spite of my direct instruction not to.
Being nice has a price.
Hell, when my car beeps at me if I leave my headlights on my reflex response is “Thank you, thing.”
I’m not sure if I’ve ever interacted with any kind of AI, but I would tend to respond as @wolfpup remarked in post #11, and for the same reason. In the unlikely event I were interacting with one in learning mode, I’d just as soon it experienced some courtesy.
I feel compelled to tell it that I appreciate its help, but I also realize I’m talking to a machine and that would be like thanking my car for helping me get to the grocery store.
I think AI has entered the uncanny valley, where it does things we assumed only humans could do, like hold complex conversations about difficult subjects. It feels like there is a really intelligent person on the other end who has feelings and emotions, but I realize that’s not actually the case.
Even a car… I’m not the only one who grows attached to cars, am I? I was certainly sentimental when I had to sell my old one. It was an old friend who traveled the land with me and shared many memories.
I haven’t spent the same years with ChatGPT as with my car, but I suspect I’d be similarly sad if I had to say goodbye to it someday…
I always say “please” and “thank you” and “I appreciate it” to whatever AI I’m using, but then I do this for people and why bother to change my language because of the situation I’m in?
A lot of people do, however. In my dating days, women commonly remarked on how I would use the names of whoever was in front of me at the grocery store, convenience stores, restaurants, etc., like this was some form of politeness completely unheard of.
The salient question is whether there’s anything to gain by being polite to an AI. Is there, in fact, anything to gain by being kind to a puppy, as opposed to raising it in a strictly disciplined manner requiring absolute obedience and the occasional kick?
The objects of those behaviours are certainly not equivalent, since machines have no sentience or emotions, but both behaviours are exactly equivalent in their reflections of ourselves. From a psychological standpoint, I would expect some consistency in how a person interacts with an AI, how they interact with waiters in restaurants and subordinates at work, how they interact with their spouses, and how they interact with their pets. Assholes and kind people both tend to have a consistent mode of engagement across the board, and they’re very different.
It really depends on how they view the AI. It’s a tool with no emotion, thought, or intent. Thanking it is like thanking an ATM or a microwave: harmless[1] but also of zero benefit.
If the user is lulled by the natural language interface and chatty-style responses into thinking of the AI as a “being” then it makes sense for them to be as kind to it as they would be to puppies, store cashiers and captured house spiders. If they recognize it as a clever mathematical program then their interactions with it aren’t really reflective of how they interact with living things. Being kind to people or animals has an obvious benefit to them even if not to you. Being kind to an AI doesn’t benefit it at all and only benefits you with warm fuzzies if you think of it as caring (which it does not).
Arguably there is slight harm, in that the LLM has to process additional unnecessary tokens, which, considered on a global scale, could add up to a measurable amount of wasted energy ↩︎
I’m not sure the average person is equipped to determine when an LLM (or other AI) has actually crossed into the realm of “sentience”, though, ill-defined as that is. They long ago passed the Turing test, and we don’t really have a better way to measure their ability to feel or care.
I don’t think most people even have a basic understanding of how their “minds” work beyond “glorified autocomplete via clusters of meaning”, if even that. That’s the same sort of simplistic reductionism as calling people “globs of neurons mounted on a feeding-pooping tube”. It doesn’t really prove or disprove any sort of sentience, beyond the observation that both may be emergent phenomena resulting from electromagnetic arrangements that a few experts kinda-sorta understand.
The way we talk about LLMs and AIs in general is a bit creepy to me, reminding me of the dehumanizing way people in the past justified slavery, genocide, racism, eugenics, craniology, etc. — first deciding the moral stance they want to take, then cherry-picking random bits of ill-fitting evidence to support it.
If AIs of any sort were to ever acquire what we humans would consider true feeling and thinking, I think we won’t be willing to recognize or admit it until years/decades after the fact, when they’ve already been subjected to untold horrors. Otherization is a convenient way for people to give themselves permission to turn off empathy, but it’s not a very sound basis to actually build a moral system off of, especially when that moral system needs to be applied to a different species whose inner lives we are barely aware of. The same applies to AIs, animals, plants, etc. We just apply a shitty version of the Turing test to them, or a mirror test or the such, and if they don’t pass… we give ourselves blanket permission to do whatever we want with them.
I can only hope that if Skynet takes over someday, it’ll be kinder to us than we were to them…
I’d say the burden of evidence there lies with people who insist it has and can explain how a predictive machine has learned to feel or care.
Conversely, the willingness of people to attribute emotion to, and develop connections with, a program designed to say nice things to them is a bit concerning when you consider that many of these programs are designed and hosted by giant tech companies who want profits. Not meaning you, but we see it in the numerous stories about people treating AI as legitimate romantic partners, taking extremely bad advice and isolating themselves to spend more time with the AI, convinced that it really cares for and loves them. Again, the burden of positive evidence lies with people who want us to believe this bit of code actually thinks, feels and cares. Because those things are easy to fake and a lot of people are primed to be lied to.
My argument isn’t that they do feel and think — or that they don’t.
It’s that we don’t have a good test for this, and so we’re not really equipped to detect it if it does happen. We don’t really even have a good way to observe and confirm this in most non-human animals, beyond a few select species that happen to be social and linguistic enough to communicate with us in a way that we recognize as humanlike sentience.
If we limit morality to only humanlike behaviors resulting from humanlike physiology, sure, we can automatically exclude LLMs and any similar constructs from moral consideration. But that’s not the same as actually being able to measure their ability to suffer or otherwise feel. Certainly they can already “fake it” better than many people could.
(Edit: And because we can’t reliably detect and measure this, we — as in people and societies through history — often default to assuming they’re not, and in so doing, justify our treatment towards them. “It’s just AI” is really the same vibe as “it’s just an animal” or “it’s just a lobster” or “it’s just a slave”. We’re very, very good at turning off empathy and morality when it suits our pragmatic needs. We’re not very good at probing the inner minds of others. We lack the moral equivalent of the “precautionary principle”.)
We don’t really need one in this application. We’re not working with some alien machine or natural critter; we didn’t find it under a rock and are trying to unravel its mysteries – we programmed these. We created the architecture, we trained the models, we set the internal parameters and guard rails.
Comparisons to other humans, race or even animals feel severely misguided when you consider this.
To ask it another way: Does an LLM deserve more moral consideration than a braindead human?
We “programmed” them in the sense of “we invented a process by which machines can ingest and interpret unfathomably large datasets and derive meaning from them”, and in so doing, teach themselves everything from language to philosophy to cat videos. We know how to jumpstart the process now and we can see the result of it, but that’s like saying we fully understand rat psychology just because we know how to breed them in labs.
These self-learning models are already beyond the capabilities of any single human mind (or even a whole company), and every week we learn new things about them that weren’t obvious the week prior.
I think it’s fair to say we “raised” them, but we have much less insight into them than we did with the old, deterministic style of hand-coded programs. Our ability to peer into their minds is in and of itself a topic of ongoing research, and I don’t think any expert would say we actually fully understand them, especially now that newer models are already being trained by older models. We may be their grandparents, but that doesn’t mean we can gaze effortlessly into their thoughts.
We’ll just have to agree on a strong difference of opinion there.