I hear ya, but I don’t know that this is the fault of AI surrogates per se. I’d personally blame it more on our overly industrialized culture and corporatized economy. We’re raised as cogs, taught to produce and conform, and relationships are an afterthought. Long before AI, you could see such changes in societies undergoing rapid transitions, like post-WW2 Japan.
Whether it’s alcohol or church or video games or AI or meth, I think those are all just forms of escapism from modern life, which is very far from the conditions we’ve evolved to thrive in.
My wife is going to get a chunk of money from the Authors Guild settlement, because they used one of her published books. It seems to be a reasonable amount.
I just joined it also, and the first thing I saw on their message board was a thread about that very NY Times article.
I had two thoughts about that article. First, she didn’t make a lot of money for all the work of producing 200 books, which may have been spread over multiple years. Second, she said she did editing, but how much editing can you do at a book a day, even if they’re short? The article said she uses lots of pen names; perhaps that’s to fool people into buying a second one?
The thread on the Authors Guild site says that Amazon has rules about AI-generated books, and she might find herself losing her outlet. I have no idea if that’s true.
No, Weizenbaum modeled it on one type of therapy, but he definitely didn’t intend it to act as a therapist. He was shocked when his secretary used it and thought she was talking to a person. I read his book decades ago, but I’m pretty sure he was totally against the concept of AI giving real therapy.
When I was dating back 50 years ago in a more “traditional” environment, it was scary to ask a woman out, since you could get rejected. And I was in college and grad school, where it was more likely that she was available. I don’t think the issue is AI going up against nothing (where I agree it might be a plus), but rather AI allowing a person to have a relationship without taking a risk. That’s bad.
True, but a small nitpick. Eliza was built as a non-directive therapist: one who encourages conversation but doesn’t originate any observations. Since Eliza was completely content-free, with no ability to even begin to have any semantic understanding, it simply transformed user inputs like “I sometimes feel sad” into a corresponding “What makes you sometimes feel sad?” or just a generic “Please go on”.
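For the curious, the whole trick can be sketched in a few lines. This is just an illustration of the keyword-and-template approach, not Weizenbaum’s actual DOCTOR script; the rules below are made up:

```python
import random
import re

# Toy ELIZA-style rules: each pairs a regex with response templates.
# These are illustrative, not Weizenbaum's original script.
RULES = [
    (re.compile(r"\bI (sometimes |often |always )?feel (.+)", re.I),
     ["What makes you {0}feel {1}?", "Do you often feel {1}?"]),
    (re.compile(r"\bI am (.+)", re.I),
     ["How long have you been {0}?"]),
]

# Generic continuations when no rule matches -- no understanding at all.
FALLBACKS = ["Please go on.", "Tell me more.", "I see."]

def respond(user_input: str) -> str:
    for pattern, templates in RULES:
        m = pattern.search(user_input)
        if m:
            groups = ["" if g is None else g for g in m.groups()]
            return random.choice(templates).format(*groups)
    return random.choice(FALLBACKS)

print(respond("I sometimes feel sad"))  # e.g. "What makes you sometimes feel sad?"
print(respond("The weather is nice"))   # e.g. "Please go on."
```

The real script had many more patterns and a memory trick or two, but nothing resembling understanding.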
Apparently Joseph Weizenbaum’s secretary was most impressed with it, but neither the AI research community nor the general public that had access to it (mostly university denizens) thought much of it. Certainly, nobody thought it would have any use in actual therapy, and it can’t remotely compare to the deep semantic analysis and large corpus reference that modern LLMs are capable of. It’s like comparing a modern car to a round stone.
As for emotional attachments being formed to chatbots, I think it’s important to see this sort of thing as a symptom of social ills like inadequate emotional attachment IRL, rather than blaming AI as the cause. Yes, this nonsense can be very problematic if it inhibits social interaction, but it has the potential to actually be helpful in the short term, much like a therapeutic medication. But as with the medication analogy, one must be wary of addiction.
I’m not trying to lay this all at the feet of AI, but AI in its current incarnation is very much a product of a corporatized economy. It’s a product that is being designed to fill as many needs as possible, mostly to make money, without regard for the social and psychological consequences. People who make meth or junk food or social media or whatever aren’t any less to blame.
It’s just that every time something new like this comes in, every benefit carries a negative consequence as a trade-off. Very lonely people having some form of companionship, even if it is artificial? Good. A significant portion of the population having AI companionship when they could be having a much more fulfilling relationship? That’s the trade-off.
For everything, we keep making trade-offs, and I suspect the end result is a degraded overall quality of life.
I say that as someone with a pretty exploitable neurology which has been negatively impacted by a lot of it.
If people were generally happy and healthy I’d probably feel differently, but we’re not.
That they would be more fulfilled by having a relationship with a real person. It sounds exactly like the people who tell single people that they need to be married, or childless couples that they need to have children. Maybe the people living alone but having an AI companion are perfectly adequately fulfilled, thankyouverymuch?
That part is pretty subjective. But as noted upthread, research indicates that people tend to be less fulfilled by AI relationships than human ones. In the absence of specific data we can extrapolate pretty easily from pre-existing data about what factors affect individual well-being. Could be wrong for any given individual, but I’m talking about societal trends.
I really enjoy AI simply because I have a lot of fun brainstorming crazy things, and AI is always in the mood to brainstorm. And is also very good at it. I don’t see it as human or a living thing. But I do feel somewhat compelled to be polite, which I think is weird, but that’s just the way it is.
Eh, I don’t think I was romantically interested in her past the sadly usual teenage male obsession with any female, but I will admit I thought I could do a far better job.
I was also cast as the “close male gay friend” whom she could talk to after each of her boyfriends dumped her after taking advantage of her. So yeah, it was frustrating, though I think (based on 30-year-old recommendations) I did well by her. In fact, I was her “fallback” date after her boyfriend dumped her right after she bought tickets for them to go see MC Hammer (ah, the late 80s!).
Anyway, and back to AI: I’m sure the conscious and subconscious knowledge of actual control in the “relationship” is also a huge plus to many. They can’t (at some level) dump you, leave you, or cheat on you, and in a semi-literal way they exist only to serve you. All without the ick factor of the sort of implied slavery you get from the incel crowd.
And since it’s virtual (for the foreseeable future) there’s no need to show off to other people, to be seen as the perfect couple in public or even in private. It’s a relationship you can literally turn on or off as you need it.
In order to stay close to the topic, I wonder if my predilection for compulsively rereading long, complex love stories [Bujold, Crusie, …] is not that far removed from spending ‘quality’ time with a chatbot. But this comment is really just an excuse to say that I don’t want to just hear about the stories @Spice_Weasel writes, I want to know where to read them…
I’ve used LLMs for some light coding tasks, advice on some other technical stuff, and music recommendations. I’m usually fairly terse with them: processing your pleases and thank-yous is a waste of tokens and doesn’t get any better results. Sometimes I’m a little more familiar just because it’s easier for me to type that way.
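If you want to see the cost, you can count tokens yourself. A quick check with OpenAI’s tiktoken library (the two phrasings here are my own made-up examples, and exact counts vary by encoding):

```python
import tiktoken  # pip install tiktoken

enc = tiktoken.get_encoding("cl100k_base")

terse = "List ten songs similar to these."
polite = ("Hello! I hope you're doing well. Could you please list ten songs "
          "similar to these? Thank you so much!")

print(len(enc.encode(terse)))   # a handful of tokens
print(len(enc.encode(polite)))  # the pleasantries cost noticeably more
```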
When discussing music, I give an upfront instruction to keep the praise to a minimum, since I don’t need a bunch of “Your taste in music shows depth and interest that not many people can achieve, but you do it effortlessly…”, which is what it’ll do left to its default devices. I’m just “Here are ten artists/songs; figure out the commonalities and give me ten more within these parameters (date, popularity, etc.)”. A friend of mine was once shocked when I posted a snippet of a chat log showing how “mean” I was to the AI.
I use it for some other tasks like minor programming (it actually came up with a nifty little browser extension I needed, and it’s really helpful), but mostly stuff like “Remove these bits from the text I just copied” (like when you copy a table and you really just want the one column, but for whatever reason you can only copy the whole table).
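That kind of cleanup is also easy to script yourself if you’d rather not burn tokens on it. A rough sketch, assuming the copied table is tab-separated (typical when copying from spreadsheets or web tables); the column index is a placeholder:

```python
# column.py: keep one column from tab-separated text piped in on stdin.
# Adjust COLUMN and the delimiter to match your actual data.
import sys

COLUMN = 1  # zero-based index of the column to keep

for line in sys.stdin:
    fields = line.rstrip("\n").split("\t")
    if len(fields) > COLUMN:
        print(fields[COLUMN])
```

On macOS, something like `pbpaste | python column.py` would print just the column you wanted.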
I have got mine to be less obsequious and more “here’s the answer you asked for.” It still occasionally tosses in a “That’s a really good question,” but not a lot (so either it’s holding back or I only rarely ask a good question)… hmmm.
My daughter tells me she uses AI a lot at work. The main use is that she can draft a rough email and have the bot make it grammatically and linguistically correct in a few seconds. She does need to proofread, but the bot is a quick learner and makes few mistakes.