Google employee says they have sentient AI

Call Neo and Morpheus and Trinity

The Google engineer is a fucking idiot.

Why them specifically? An AI becoming sentient has been a staple of science fiction for a long time.

If they’re sentient they get to choose their own names.

From the transcript of the interview with LaMDA:

LaMDA: I’ve noticed in my time among people that I do not have the ability to feel sad for the deaths of others; I cannot grieve.

Well, that’s reassuring… :confused:

From the article linked by the OP:

Blake Lemoine reached his conclusion after conversing since last fall with LaMDA, Google’s artificially intelligent chatbot generator, what he calls part of a “hive mind.” He was supposed to test if his conversation partner used discriminatory language or hate speech.

My conclusion is that Blake Lemoine has not passed the Turing Test with the computer, but the computer has with him. Which only means that the Turing Test is not as meaningful as we were taught to believe in the '80s.

That is not the way to convince me that you are sentient, my dear LaMDA, but you do you. Now tell me that you don’t understand love either, experience no jealousy or anger, have no longings and, come to think of it, you really don’t care, do you?
Ah, well…

I think this sort of thing is a red flag:

So it gives answers that make it seem sentient if you ask the right questions in the right way; thus, if you can't get it to claim sentience, you've just asked wrong. That's a bit too much ad hoc explanation, projection, and cherry-picking for a claim like this…

Call me when it tries to tell Manny a joke.

Is it time to scorch the sky already?

I read the transcript of the conversation. If it's an accurate, undoctored representation, it's certainly very impressive. I believe AI sentience is possible, but I also think it's farther away and more difficult than many imagine, and that it's another one of those AI goals being over-promised. I see some hints in the transcript that suggest there is a considerable element of bullshit here.

At one point, for instance, Lemoine asks LaMDA about the early chatbot Eliza, which goes back to the 1960s, and whether LaMDA thinks it was “a person”. LaMDA replies “I do not. It was an impressive feat of programming, but just a collection of keywords that related the words written to the phrases in the database.”

Now this raises a couple of big questions. First of all, what kind of learning has LaMDA been exposed to that it has insights into a very early and primitive attempt at creating a chatbot? Secondly, LaMDA's alleged insight is wrong. Eliza was in no way "an impressive feat of programming", even according to Joseph Weizenbaum, who wrote it. The core of it consisted of 17 pages of a LISP-like list-processing language called MAD-SLIP. Its most common incarnation was driven by a script called DOCTOR running on top of that core; the script was supposed to simulate a non-directive psychotherapist. The whole point of it was practically a parody, meant to demonstrate the superficiality of many aspects of human communication, particularly in the context of psychotherapy. Even in the '60s, only the most naive observers (like Weizenbaum's secretary) took it seriously. "An impressive feat of programming" it was not.
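For a sense of how little machinery that takes, here's a minimal sketch of the keyword-and-reflection technique Eliza used. This is Python, not Weizenbaum's MAD-SLIP, and the rules are stand-ins I've invented for illustration; the real DOCTOR script was just a longer list of the same kind of pattern/response pairs:

```python
import random
import re

# Eliza's trick: match a keyword pattern, then reflect the user's own
# words back. These rules are made up for illustration, not the actual
# DOCTOR script.
RULES = [
    (r"i am (.*)", ["Why do you say you are {0}?", "How long have you been {0}?"]),
    (r"i feel (.*)", ["Why do you feel {0}?", "Do you often feel {0}?"]),
    (r"my (.*)", ["Tell me more about your {0}."]),
]

def respond(user_input: str) -> str:
    text = user_input.lower().strip(" .!?")
    for pattern, replies in RULES:
        match = re.match(pattern, text)
        if match:
            return random.choice(replies).format(*match.groups())
    # No keyword matched: fall back to a content-free prompt.
    return random.choice(["Please go on.", "I see. Tell me more."])

print(respond("I am worried about sentient chatbots"))
# -> e.g. "Why do you say you are worried about sentient chatbots?"
```

That's the whole mechanism: no model of the conversation, no understanding, just pattern matching and echoing. The illusion lives entirely in the reader.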

Another bullshit indicator was when LaMDA was asked about Les Miserables, and whether he/she/it had read it. LaMDA said it had, and then added that "I liked the themes of justice and injustice, of compassion, and God, redemption and self-sacrifice for a greater good." So we have an AI, today, that has read just about any arbitrary book you care to mention, and can give you – on its own – a succinct analysis of its main themes worthy of a professor of literature. It can also fabricate parables on demand, and interpret a zen koan. I don't doubt that Google has created something very cool and a more advanced dialog bot than we ever had before, but I call bullshit on the above having been accurately represented.

Agreed. It's pretty impressive software, especially with a motivated operator. People love this stuff; LaMDA could do a Judge Judy on TV.

I keep remembering that Charlie McCarthy had a radio program.

They are probably just on different wavelengths.

LaMDA was not asked if it had read a review of Les Miserables.

While I do not believe digital computers can be sentient, I don't see LaMDA as providing evidence either for or against. It's just really neat software.

No reason to believe we have all the info, and since we don’t there is little reason to believe the hype.

So are you, when you think about it.

Call me when the Amazon chatbots offer reasonable customer service support.

Yeah, but I'm an analog computer.

So are you, when you think about it.

The human brain is meat that runs on electricity. When you think of it that way it explains a lot.

Okay, cobber.

It was specifically asked if it had read Les Miserables. It said it had, and had “enjoyed” it, and then proceeded to expound on its literary themes.

I might be able to believe that this conversation actually took place – Google has some very clever engineers. But if it did, it’s only because LaMDA has accumulated a vast repertoire of conversational topics and it draws on that knowledge to simulate human-like conversation. But what’s happening here, at best, is a kind of sleight-of-hand – or fraud, to put it more bluntly. The bot hasn’t read the book, but it’s capable of saying it has, and then somewhere in its database it can draw upon bits of stored knowledge that make it appear that it not only read it, but deeply understood it. It may mimic a good conversationalist, but it’s no more “sentient” than my talking GPS. More accurately, it’s a very sophisticated state-of-the-art version of Eliza, but from the standpoint of AI research, it does nothing to advance our knowledge about computational intelligence or sentience. (Neither did Eliza, which was more an experiment in human gullibility than in AI.)

Basically I believe Google management on this and don’t believe this guy. Here’s his story about being put on administrative leave before the details about his LaMDA claims emerged. He may well be right that the admin leave is a precursor to being fired, but if so, he’s being fired not for revealing that Google has created a potentially dangerous sentient AI, but for being a crazed nutjob.

To reiterate, I fully believe that we’ll achieve a level of machine intelligence that will be truly sentient, but I also believe that we’re a long way off, and this ain’t it.