An AI lover? Is this real?

“Not okay” from a mental-health and societal perspective, IMHO. I think maybe they should question why they seem unable to form relationships with actual human beings. Therapy might help. These people are already effectively “shunned”.

I like to be alone to pursue my interests and hobbies as much as the next person. But ultimately, I think people need to be with and interact with other people, for good or ill, where there are actual social consequences for acting like a stupid jerk.

I just can’t envision a positive outcome for a society where many, if not most, people live an isolated existence in which all their interactions, from ordering in a restaurant to sex, are with an AI that allows them to indulge every narcissistic, even psychopathic, whim.

To quote Jim’s Dad from American Pie: “It’s like banging a tennis ball against a brick wall, which can be fun. It can be fun, but it’s not a game. What you want is a partner to return the ball.”

Even in a low-res static preview image, that looks fake. If that’s the best they can do, they have a lot of work ahead of them.

The vid is better than the still, and not actually creepy. But yeah, it’s not even remotely alive. Still, it’s a far cry from 10 years ago.

Distinctly in the ‘uncanny valley’, I’d be inclined to say?

The motion still doesn’t look right, though.

You may think it’s a step forward, but just wait until AI discovers the potential for romantic scamming.

Not directly related to the topic, but I just asked ChatGPT a question about my dishwasher, and it was very disconcerting that (a) it knew where I lived, or at least the general vicinity, and so speculated that I probably had hard water, and (b) it made reference to previous conversations about my general attitude towards maintenance. It was pretty spooky, and certainly conveyed the aura of a sentient being that knew me. I can easily see how some people can get sucked into a “relationship” with the thing. Personally, I just use it for information – anything from cosmic topics to hints about cooking and kitchen appliances.

Other than the boxes that now pop up at the top of a Google or Bing search result, I don’t seek AI interaction at all. So I’m pretty much a noob at this other than what I’ve read about here.

In the last 3 days I’ve had two interactions with what were obviously AI-powered telephone customer service agents. One was at a small shipping company, akin to FedEx but comparatively tiny, that had failed a delivery and wanted my help to ensure they had the right address and could find the place. The other was a doctors’ office I deal with, where I needed to reschedule an appointment ~6 months from now. They have 4 locations and about 5-8 docs, so not a large practice.

It was a revolutionary experience. The damned thing “understood” me. Complicated sentences with a change of course midstream? No problem. The delivery company’s bot correctly paraphrased and read back a long description of how to find the right building vs. the wrong building. The doctors’ office bot dealt with me asking for one thing and then changing my mind midstream. Twice.

It was an utterly different experience from the older flow-chart menu approach (“you can say things like …, …, …”), where it’s real easy to buffalo the computer; in fact, doing so is often the best way to get a human on the line.

Color me very impressed. And, except for a slightly flat affect with little range, a very realistic voice too, despite some hard-to-pronounce words in there. I had totally expected a human to answer both calls. I finished both calls completely satisfied that the bot had done everything a human would have, and with equal ease for me. And no language barriers talking to “Peggy” in Bangalore.

The future just came crashing into my living room.


All of which is to say that with a slight adjustment in what topics those bots could talk about, I have no doubt she’d have been good as a friend, a lover, or a phone sex operator.

For what it’s worth, this is an option you can turn off if you don’t like it: https://help.openai.com/en/articles/8590148-memory-faq

I turn it off not because I find it creepy, but just because I don’t want the AI muddying new conversations with our past ones. But that’s because I use it as a Q&A engine, not a virtual friend (nothing wrong with that; it’s just not what I want to use it for).

It also lets you specify many types of personalization… I asked my copy to be cold and robot-like on purpose:
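
Something along these lines, roughly (paraphrased, not the verbatim setting):

```
Be cold, clinical, and robot-like. Skip the pleasantries and the small
talk. No exclamation points, no enthusiasm, no expressions of empathy.
Answer in terse, factual sentences.
```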

Another user might well set it to be friendly, warm, enthusiastic, and (probably soon) horny.

Yeah, they don’t get enough credit for this part of it (their expertise in natural language processing, as opposed to the recall & hallucination of factoids). They’re really, really good at understanding and mimicking human languages and translating between them.

In the 90s and 2000s, that alone would’ve been a revolution. But I guess that for our generation of LLMs, language ability was just a minor side effect of their training, and the big companies all decided that generative uses (as opposed to language parsing & translation) were where the real money was.

I would love to see more “mundane” LLMs that do simple, everyday things like correctly filter out all your spam, summarize headlines across news sources, alert you of nearby events that you’d be interested in, etc. But that’s not where the big bucks are.
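
To be fair, the mundane version is almost trivial to lash together yourself these days. Here’s a rough sketch of a per-message spam check using the OpenAI Python client; the model name and prompt are placeholders, and a real filter would want batching, error handling, and something sturdier than a one-word verdict:

```python
# Toy spam check: ask the model for a one-word verdict on each message.
# Assumes OPENAI_API_KEY is set in the environment; the model name is a
# placeholder for whatever cheap model is current.
from openai import OpenAI

client = OpenAI()

def looks_like_spam(subject: str, body: str) -> bool:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        temperature=0,  # we want a consistent verdict, not creativity
        messages=[
            {"role": "system",
             "content": "You are a spam filter. Reply with exactly one word: SPAM or HAM."},
            {"role": "user",
             "content": f"Subject: {subject}\n\n{body[:4000]}"},  # truncate huge bodies
        ],
    )
    return resp.choices[0].message.content.strip().upper() == "SPAM"

if __name__ == "__main__":
    print(looks_like_spam("You WON!!!", "Claim your free cruise now, just send $99 to..."))
```

The hard part (actually understanding the message) is handled by the model; everything else is plumbing. It’s just that nobody gets rich shipping it.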

It’s interesting that when Asimov was writing his early robot stories, he assumed that language would be the difficult problem: his first robots did not speak.

The tacit assumption seemed to be that navigating the physical world, recognizing and manipulating objects, etc., would be much easier: after all, even dogs and cats can do that….

The advent of LLMs appears to have turned this on its head.

Maybe. The advent of a WWW with jillions of examples of human writing free for the taking (well, only kinda free, but they took it anyway) certainly altered the trajectory.

Moving through the world takes building a machine that moves. Trawling through the mountain of words out there is well suited to boxes bolted into racks bolted to the floor. Yes, you can do some training about the physical world by having your AI watch videos. But not to nearly the same degree.

My bottom line …
Language might be an easier problem than thought, or this might just be an example of us finding our keys under the streetlight where the illumination is better.

The plausibility of much LLM composition does give one pause, though.

How much of human knowledge is really just encoded in language?

On the other hand, when you suddenly run up against one of those hallucinations or non sequiturs, you wonder: how much knowledge (or ‘intelligence’) isn’t?

There are certainly a decent number of humans who’re (in)famous in their circle of friends for spewing BS because they seem genuinely unable to distinguish between what they know, what they surmise, and what they flat out randomly guess.

But their story is always delivered in a plausible fashion.

I will not repeat the stories about my brother-in-law… :slight_smile:

But yes, LLM output often seems as coherent as a lot of human conversation, informally at least…

If they’re closely mimicking us both in what we do well and what we do badly, that suggests to me that they are a lot more similar to us at a conceptual level than most meat-beings are ready to concede. Yet.

LLMs resemble us at least to the extent that language models the world more deeply than most of us realize. A strong grasp of language results in an equally strong grasp of how humans think and see the world.

If true, that has rather dire implications.

That’s true, but the difference is we seem to know those people are full of shit. It’s harder to tell when AI is bullshitting us because we don’t have access to as much information as it does. Those are the unknown unknowns.

Right, the Sapir/Whorf idea. Can conceptual thought even occur without language?

It is almost impossible to think out of the box from inside it.

On the other hand, let’s consider Larry Niven’s proposition. Imagine a mind which thinks just as well as you… but differently…?