An AI lover? Is this real?

Some people seem to seek out or thrive on conflict, though. If the market is there, I guess it will be exploited, unfortunately?

Heh, that’d be great.

I hope that some of these interactions (with their users’ permissions) can become published for public viewing. It’d be a cool, organic, trickle-up amateur romance novel / erotica market.

Oh, that reminds me:

The New Fabio Is Claude - NY Times (edit: replaced with gift link)

Even the traditional romance novel industry is taking up AI.

(And I bet the writing quality’s gone up because of it, heh)

I’m not a fan of AI, but I’m curious to see where this all goes. Yes, people have formed relationships with machines in the past, but perhaps we’re going to be seeing it on a much larger scale. The psychological and social ramifications are yet to be seen.

People already have been shying away from sex, dating, and marriage due to the messy realities of human relationships. One of the prevailing theories for the decline of sex is the easy access to porn. This is like that, but perhaps even harder to resist because of the companion component. It’s not just sexual gratification, but emotional, too.

Who needs people?
Do we need people?

Research tends to say yes, but to the best of my knowledge that research does not include consideration of AI relationships.

I don’t have a cite handy, but the last time I looked (casually) into this as a research topic, the general finding seemed to be that it’s better than nothing: AIs couldn’t (yet) replace a fulfilling human relationship, but for the many thousands or millions of people who cannot find or sustain one (for whatever reason), having an AI relationship was at least better than having nothing (the same was apparently true for mental health therapy).

I mean, even among human companions, the “quality” can vary a lot between individuals. Right now the better AIs are already better than the worst human partners. Maybe even better than many of the average ones too. It won’t be long before they’re better than all but the most elite of human lovers. Maybe at some point the AIs will be paying us for the privilege of having a flesh-and-blood escort…

Just for reference, how many here have used LLMs for this sort of clerical or technical work?

If so, how do you regard or address them? They do present a very strong sense of intentionality sometimes. They can almost certainly pass the Turing Test most of the time, though once in a while there will be a complete disconnect or loss of context that reminds you: there isn’t really a ‘mind’ behind the responses…

(EDIT: wait, nevermind, sorry, I didn’t realize you were the OP… erased previous message to xtenkfarpl)

That was a fun read. AI threatens to exacerbate a tension that already exists: between authors who publish hastily written, cheap, formulaic novels (for an audience that really wants them) and people like me, who want to write, and read, emotionally complex and meaningful love stories. People like me don’t really have a prayer of competing with authors who self-publish ten books a year, but in that sense AI disruption is not a particularly novel development in the industry. There is a place for more thoughtful stories, but it’s a lot more niche. You really have to decide if you want to make a lot of money, or write good books.

The most concerning thing about this is the thought of my work being used to train AI without my consent. I’m not yet published but I have a pretty unique voice that I came by over the course of years of hard work, and the thought of that voice being stolen and mass-produced really pisses me off.

(AI at this point appears incapable of thoughtful and complex literature, so I think there’s yet a market for what I write.)

I use them every day, multiple times a day, for work. The “context window” limits are largely a non-issue now for everyday work (though not for truly long-form content, like a novel-length book).

For everyday things like code snippets, their context windows are already pretty large by default (especially Gemini’s). Even the smaller ones, like ChatGPT’s, can be made significantly more useful with personalization prompts (in the options, you can give it a prompt that it applies to all your queries, like a preferred coding/writing style, length and tone of reply, etc.)

And specifically for coding, there has been an explosion of development in agentic coding IDEs and CLIs (Claude Code, Codex, Cursor, Windsurf, etc.) where they can look at an entire codebase and semi-autonomously make changes, add features, fix bugs, etc. They handle all the chunking and contextualization on their own, calling iterations of sub-agents as necessary. Even if you don’t use them agentically, those same systems are also pretty good at being able to ingest your entire codebase for the purposes of being able to answer questions or write documentation or an API against it, etc.

Beyond that, more generally, we have other threads like Is AI overhyped? - #258 by Voyager or the Topics tagged ai full of similar discussions.

Good point. Let’s keep it on topic.

For example, have any of you ever felt that an AI validated or encouraged you in a way that a human friend or collaborator might not? If so, how do you feel about it?

My theory is that if sex is declining, it’s probably part of a broader trend of social isolation in society.

Growing up, the concept of having a robot sidekick was pretty common in the shows and movies we watched:
Michael Knight and KITT
Luke Skywalker and R2-D2
Fry and Bender
Buck Rogers and Twiki
That robot on Lost in Space
Johnny 5
The Iron Giant
John Connor and his Terminator in the second film

The IRL problem with any AI companion (whether sexual or not) is that a robot doesn’t have agency. I’m not sure it’s great for society when people expect a companion to cater to their every need.

Sorry, I posted that response before I realized you were the OP (I thought Spice_Weasel was, and didn’t want to hijack their thread). Of course you are welcome to steer your own topic however you like :slight_smile:

Thanks. Yes, there are other threads about the technical aspects of LLMs etc.

I was more concerned with the social and human implications…

Exactly. I think a lot of fields threatened by AI (like mine, computer software) are going through a similar existential crisis, but it’s also not a new one: the age-old battle between quality and quantity.

The signal-to-noise ratio has gone drastically down in recent years, and I don’t think anyone’s managed to solve that yet.

Personally I hope we go back to a curation model (like the old Yahoo, or even NYT lists) where people manually, slowly, painfully dig through the trash to find the few gems and recommend them that way. Most of whatever’s on top of Google, Amazon, etc. these days is all algorithmically recommended and astroturfed spamvertisements, not actual indicators of quality anymore.

Certainly there would be a market for the higher-quality stuff… if any buyers could even find them to begin with.

I mean, one of the very first chatbots, ELIZA, was designed as a virtual therapist. It was ludicrously primitive compared to ChatGPT, but even it still worked to a degree.
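For a sense of just how primitive: ELIZA’s core trick was keyword patterns plus pronoun “reflection,” with no understanding at all. The real 1966 program used a much richer script, but a toy version in that spirit fits in a few lines (patterns and canned responses below are my own, not Weizenbaum’s):

```python
import re

# Toy ELIZA-style responder: match a keyword pattern, "reflect" the
# pronouns in what the user said, and echo it back as a question.

REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my"}

RULES = [
    (re.compile(r"i feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.+)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    """Swap first- and second-person words so the echo makes sense."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(text: str) -> str:
    for pattern, template in RULES:
        m = pattern.search(text)
        if m:
            return template.format(reflect(m.group(1)))
    return "Please go on."  # default when nothing matches

# respond("I feel lonely today") -> "Why do you feel lonely today?"
```

And yet people famously confided in it anyway, which says something about how low the bar is for feeling “heard.”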

(Aside from using it for coding)

Yeah, it had a tendency to do that in every damned conversation, whether I was asking about a drug interaction or cat behavior or interpersonal advice. I hated it and had to turn it off (in the preferences, I explicitly told it to stop flattering me, to respond in a neutral tone with an objective view of things, etc.)

One version was so bad at this it had to get recalled: Update that made ChatGPT 'dangerously' sycophantic pulled

How would we ever know that, though (in a Turing Test sense)? Some of them at least already seem to exhibit more agency than some humans.

There’s the “does free will exist” part of that question, sure, but even just at a more down-to-earth “how creative can this thing be” level, many of them are already far, far more creative (or able to steal/borrow from a larger repertoire of training, at least) than most regular people could be.

Their level of sycophancy and agreeableness is just a function of their particular training and the manually-added safety layers on top of it; with a different prompt or training set, they can and do frequently disagree or refuse to engage.

Sure, it’s just a probabilistic statistical model, but in a lot of cases, that’s relatively indistinguishable from how many humans operate too. That’s not necessarily how our brains work internally, but the outcome is often indistinguishable.
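To make “probabilistic statistical model” concrete at toy scale: here’s the same idea boiled down to a bigram model, which predicts each next word purely from counts of what followed it in its training text. Real LLMs are transformers operating at vastly larger scale, but the output of both is a sample from a learned next-token distribution:

```python
import random
from collections import defaultdict

# Toy "statistical model of text": learn which word follows which,
# then generate by repeatedly sampling a likely next word.

def train_bigrams(corpus: str) -> dict[str, list[str]]:
    words = corpus.split()
    follows = defaultdict(list)
    for a, b in zip(words, words[1:]):
        follows[a].append(b)  # duplicates encode the frequencies
    return follows

def generate(follows: dict, start: str, n: int, rng: random.Random) -> str:
    out = [start]
    for _ in range(n):
        options = follows.get(out[-1])
        if not options:
            break  # dead end: this word never had a successor
        out.append(rng.choice(options))
    return " ".join(out)
```

No “mind” anywhere in there, yet with enough data the output starts looking uncannily like something a mind produced, which is the whole crux of the indistinguishability point.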

As of right now, with the way most of them are set up, they only respond to prompts, and do nothing without a prompt. I think that’s what @msmith537 means by lack of agency. A human, meanwhile, will start a conversation with you as often as you do with them, and might start making dinner on their own, or look at you and make inferences based on your facial expression, or whatever.

That’s not an inherent limitation of this technology, though, just its typical implementation. There are already a bazillion autonomous or semi-autonomous AI spambots, code review bots, coding agents & sub-agents, NPCs in video games, security vulnerability checkers, web scrapers, content regurgitators, marketers, propaganda agents, etc. They’re not sitting around waiting for a prompt in a chat window, they’re doing it 24/7 around the clock. Sure, they don’t necessarily have “agency” in the “free will” sense, but that’s not so different from the worker-bee office drone who has to do mundane tasks to pay their bills, or even the horny sex-starved loner who seeks out companionship because their genes demand it of them.

There are also already AIs used in rearing/training the next generation of themselves, e.g. OpenAI says new Codex coding model helped build itself

They are building on their own capabilities with increasing levels of autonomy and agency. Their level of autonomy is a continuum more than a single bright line.

Who are you, the Space Pope?

That sounds about right, but the question then becomes: to what extent does having a “better than nothing” relationship with a chatbot decrease the likelihood that this person will go out and find more fulfilling relationships?

Most of us already settle for the easier things in life vs. the better things in life. Nutrition comes to mind. It’s easier to order convenience food than cook a meal. So a lot of us just don’t cook meals. Conservation of energy is human nature. But it doesn’t really help us in the long term.

I can’t help but envision a society where we all have “good enough” lives at the expense of really vibrant and fulfilling ones.

Yes, but what’s driving that social isolation? To a very large extent, technology. And then COVID killed what was left of our community spaces and this is what we’re left with.