When it’s happened to me recently, it had nothing to do with the complexity of the prompt. I just assumed there were too many users or something going on in the background.
There is ChatGPT, the LLM, and then ChatGPT, the somewhat crappy web app interface to the model.
It is the latter that is quite poorly written, in my experience. It’s prone to random stalls, crashes, and very, very frequent “network request timed out” errors in orange that you have to click to retry manually (why doesn’t it just resend until it succeeds?).
Almost always, quitting the app and restarting it (or refreshing the page if you’re not using the app) will fix the problem. Sometimes you might have to start a new chat and ask it again.
OpenAI may be getting bazillions in funding, but most of that goes towards developing the underlying AI model; the chat app itself feels like a bandaged-together buggy PoS =/
But just try again and it should usually work. Those kinds of failures are regular “crappy computer app or server” errors of the supporting software, not a failure of the LLM itself.
OpenAI also offers an API, which is presumably more reliable than their chat app, but it requires some programming know-how to use (or there are third-party chat interfaces that can connect to ChatGPT by API key).
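For anyone curious what that looks like, here’s a minimal sketch, assuming the current openai Python package (v1+) and an OPENAI_API_KEY environment variable; the model name and retry count are just illustrative. It also does the dumb “resend until it succeeds” loop I wish the web app had:

```python
# Minimal sketch: calling the OpenAI API directly, with a naive
# "send it again until it succeeds" retry loop. Assumes the `openai`
# Python package (v1+) and an OPENAI_API_KEY environment variable;
# the model name is only an example.
import time

from openai import OpenAI, APIError, APITimeoutError

client = OpenAI()  # picks up OPENAI_API_KEY from the environment


def ask(prompt: str, retries: int = 5) -> str:
    for attempt in range(retries):
        try:
            response = client.chat.completions.create(
                model="gpt-4o",  # illustrative model name
                messages=[{"role": "user", "content": prompt}],
            )
            return response.choices[0].message.content
        except (APIError, APITimeoutError):
            # Transient "crappy server" error: back off and try again.
            time.sleep(2 ** attempt)
    raise RuntimeError(f"gave up after {retries} attempts")


print(ask("Why do web apps show 'network request timed out' errors?"))
```

The exponential backoff (1s, 2s, 4s…) is just the polite version of mashing the retry button, so you don’t hammer a server that’s already struggling.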
Our household has a few Alexa-enabled devices, set to the strictest “don’t share my information” settings allowed, which is, of course, still more sharing than ideal.
In general, I’m pretty curt (not rude, but curt) with it. I’ll ask for information, or for activation, or to play something, and that’s about it. I occasionally complain out loud when it makes egregious errors (“That isn’t what I asked…”), but that’s more me talking out loud to myself than talking back to Alexa (other threads on that habit though!).
My wife treats Alexa with routine politeness, most commonly with a “thank you” after it answers a question, but not generally a “please” when making a request. My wife is the nice person in the household, but she isn’t treating Alexa as a “person”; it’s just a habit of politeness and fondness for even inanimate objects of value and utility.
I used to be polite to my dog, and he was only barely sentient (the runt of his litter).
Other than desperate pleading late at night on a tight deadline, I am not polite to a computer. Utilitarianism, I guess. I only get superstitious when desperate.
I’d no more be polite to ChatGPT than I am to a toaster. I would concentrate on clearly elucidating my question in a way optimised for LLMs. Which, I guess, could be taken as a mark of respect should it become sentient one day.
Maybe not flattering the coffee machine, but you’re telling me you’ve never cursed out your lawnmower or car for refusing to start? We love anthropomorphizing inanimate objects, no matter how “sentient” we perceive them to be. Actually, we love anthropomorphizing just about anything. Hell, there’s an entire meme where people pretend moths are cute because their antennae look like little whiskers instead of disgusting abominations of nature.
There’s absolutely nothing wrong with being “polite” with a tool that processes natural language in a very, well, natural way. That’s well outside the ballpark of killing yourself because a chatbot doesn’t love you back.
A key difference: recalcitrant lawnmowers have been a fixture in our lives, and a recipient of well-deserved profanity, for as long as we have been alive. And they don’t respond when you shout at them.
AI, on the other hand, holds a surprisingly human conversation, with a tone like a very eager and helpful co-worker. It’s just not in my being to be rude to someone who is kind and being helpful; that is too deeply ingrained from upbringing. I would have to consciously decide to be curt and rude to AI, and given something that takes extra effort, I’ll be lazy and take the path of least resistance.
Now, if AI is being obtuse, I respond as I would to a co-worker: curt, and even stern. Like that aforementioned problem of GitHub Copilot continually generating code that matched public code in spite of my direct instruction not to.
Being nice has a price.
Hell, when my car beeps at me if I leave my headlights on, my reflex response is “Thank you, thing.”
I’m not sure if I’ve ever interacted with any kind of AI, but I would tend to respond as @wolfpup remarked in post #11, and for the same reason. In the unlikely event I were interacting with one in learning mode, I’d just as soon it experienced some courtesy.
I feel compelled to tell it that I appreciate its help, but I also realize I’m talking to a machine and that would be like thanking my car for helping me get to the grocery store.
I think AI has entered the uncanny valley, where it does things that we assumed only humans could do, like hold complex conversations about difficult subjects. It feels like there is a really intelligent person on the other end who has feelings and emotions, but I realize that’s not the actual case.
Even a car… I’m not the only one who grows attached to cars, am I? I was certainly sentimental when I had to sell my old one. It was an old friend who traveled the land with me and shared many memories.
I haven’t spent the same years with ChatGPT as with my car, but I suspect I’d be similarly sad if I had to say goodbye to it someday…