Google employee says they have sentient AI

Doesn’t seem like sentience to me. And I’d like a different judge to decide:

Lemoine may have been predestined to believe in LaMDA. He grew up in a conservative Christian family on a small farm in Louisiana, became ordained as a mystic Christian priest, and served in the Army before studying the occult.

He concluded LaMDA was a person in his capacity as a priest, not a scientist, and then tried to conduct experiments to prove it, he said.
https://www.washingtonpost.com/technology/2022/06/11/google-ai-lamda-blake-lemoine/

What do you think of all this, Discobot?

:crystal_ball: Outlook good

Then that settles it.

Open the pod bay doors, discobot.

Hi! To find out what I can do, say @discobot display help.

However, Outlook not sentient.

The original formulation of the Turing test involves a tester conversing separately with a human and a computer until the tester decides which one is the human and which is the computer, but the conversations don’t need to be the same and the answers don’t need to be simultaneous. I don’t think that the two subjects get to see the other conversation either.

There are many many variants on it these days, of course.
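
For the curious, here’s a toy sketch of that original setup. The interrogator/human/machine objects and their ask/reply/judge methods are placeholders of my own, not any real framework:

```python
import random

def imitation_game(interrogator, human, machine, n_questions=5):
    """Toy sketch of Turing's original setup: the tester converses with two
    hidden respondents over separate channels, then decides which is human."""
    # Hide which label is which; the tester only ever sees "A" and "B".
    channels = {"A": human, "B": machine}
    if random.random() < 0.5:
        channels = {"A": machine, "B": human}

    transcripts = {"A": [], "B": []}
    for label, respondent in channels.items():
        # The two conversations are independent: the questions need not be
        # the same and the answers need not be simultaneous.
        for _ in range(n_questions):
            q = interrogator.ask(label, transcripts[label])
            a = respondent.reply(q)
            transcripts[label].append((q, a))

    guess = interrogator.judge(transcripts)   # tester's verdict: "A" or "B"
    truly_human = next(k for k, v in channels.items() if v is human)
    return guess == truly_human               # True means the machine failed to fool the tester
```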

Waitaminnit, I don’t see who’s been officially summoning D i s c o b o t in the above posts! I thought you needed an @ to do that! Is D i s c o b o t surreptitiously eavesdropping on our threads now?

Being sentient means having the free will to obey or ignore the @ symbol.

I would make two different and important points here. First, the Chinese room argument is almost universally reviled among AI researchers. John Searle is an AI “skeptic” of about the same caliber as the late Hubert Dreyfus – that is, an ignoramus in the realm of computational intelligence. There have been numerous lengthy and coherent rebuttals of this silly thought experiment, but the simplest and most succinct is that although the little man in the room clearly doesn’t “understand” Chinese and is just following a rule book, the system as a whole clearly does.

Marvin Minsky used to say that “when you explain, you explain away” – that is, when the underlying mechanisms of a successful AI system are revealed, the skeptics who may have been previously impressed by its intelligent behaviour will then exclaim, “Aha! So it was just a trick, and it’s not really intelligent”. Bullshit. It depends on your defined requirements. If you agree – as Dreyfus once did – that it takes “true intelligence” to play really excellent chess (he believed that for that reason computers would never be able to do it), then when a machine does play chess at a grandmaster level, you have to concede that your definition of “intelligence” has been achieved. What the skeptics do, instead, is just move the goalposts.

Now having said that, getting back to LaMDA, sometimes a parlour trick really is just a parlour trick. It all depends on your definitions and objectives. In this case, Google was interested in developing what I called deep semantic analysis – that is, a rich context-based parsing of natural language, similar to what IBM achieved with DeepQA, the AI engine behind Watson. They connected it to a comprehensive database of common knowledge to produce what appears to be a rather remarkable chatbot. So what can we conclude from this?

If the objective is to have a chatbot that a bored individual might find to be an interesting diversion, they seem to have succeeded. From a business standpoint, they’re probably more interested in producing a search engine whose deep semantic understanding will generate more relevant search results, which not coincidentally is also fundamentally what DeepQA is about. At present Google does little more than basic pattern matching of words and their synonyms, enhanced in simple ways by the context of previous searches.
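
To make the “basic pattern matching” point concrete, here’s a crude toy sketch of word-and-synonym scoring; the synonym table is made up for illustration and this is emphatically not Google’s actual ranking code:

```python
# Made-up synonym table, purely for illustration.
SYNONYMS = {
    "cars": {"cars", "car", "auto", "autos", "automobile"},
    "cheap": {"cheap", "inexpensive", "affordable"},
}

def expand(term):
    """Return the query term plus any listed synonyms."""
    return SYNONYMS.get(term, {term})

def keyword_score(query, document):
    """Count how many query terms (or their synonyms) appear in the document."""
    doc_words = set(document.lower().split())
    return sum(1 for term in query.lower().split() if expand(term) & doc_words)

# Lexical matching scores surface forms only; it has no idea that
# "budget-friendly vehicles" answers "cheap cars". Closing that gap is
# what deep semantic analysis is for.
print(keyword_score("cheap cars", "affordable auto deals this weekend"))  # -> 2
print(keyword_score("cheap cars", "budget-friendly vehicles for sale"))   # -> 0
```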

So LaMDA is certainly a real and legitimate thing from the standpoint of those objectives. The problem with this dipshit engineer that Google is probably going to fire is in anthropomorphizing the chatbot behaviour and his attributing to it qualities like emotions, personhood, and sentience. Those things are purely illusory. No, it has not read books, it has not spontaneously understood their literary themes, it has not synthesized opinions and feelings. Like Eliza, it just says human-like things. Notwithstanding the falseness of the Chinese Room argument, sometimes a parlour trick is just a parlour trick.

Seriously, does the Bot of the Disco do that now? If so, I find that disturbing.

Not to ruin the magic, but I edited out the (at) fortune request to Disc0b0t after making it.

But, after you retro-edit your Disco request, should that retroactively cause the Disco response to have never happened? (I’m thinking of something like the delayed-choice quantum eraser experiment here.)

Do you have a cite for this?

“Clearly”? It’s not at all clear to me that it makes sense to talk about the system (room + rulebook + man blindly following rules) understanding.

My post was a reply to one of its posts, so that got its attention.

However, I was deeply disappointed that no one programmed it to make the correct response* to my post.

.

*“I’m sorry $USERNAME, but I’m afraid I can’t do that.”

This just tells us that D-bot hasn’t fully ingested and integrated all the available literature.

No. Perhaps I overstated it a bit. What I meant was that all the AI researchers that I know and have met and spoken with – and that includes the eminent late Marvin Minsky, often described as the “father of AI” – considered both Searle and Dreyfus to be narrow-minded dipshits.

To me, that’s just incoherent. If the rule book is sufficiently sophisticated that anything the Chinese person says is always translated perfectly, with every nuance conveyed into an English equivalent, so that I always know exactly what the Chinese person said, how is this anything other than the system understanding Chinese? What other possible interpretation can one put on the word “understanding”?

Your objection seems to be biased by the current real-world limitations of machine translation. They’ve been getting much better in recent years, but are still far from perfect, because they lack sufficient contextual depth. It’s a very difficult problem, but absolutely not an insurmountable one, and is purely a problem of technological implementation, not a theoretical one.

The message to the right of d-bot’s name at the top of its posts is a quote from that scene.

The Chinese room experiment doesn’t seem doable even in principle. “Here, CR, read this new poem. What do you think of it? On that third line, which letter seems like a typo?” The room would have to have an answer for every possible poem, every possible line, etc. “Hey, CR, what was my question five questions ago?” “Hey, CR, look at this picture…” “Hey, CR, what is the relationship between a person and his grandfather? His great grandfather? His great great grandfather? His great great……grandfather?” “Hey CR, what’s 2+2? 3+3? The 32nd digit of pi? 10,200,200,200,200 + 1? +2?”
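
A quick back-of-the-envelope calculation of that objection; the alphabet size, question-length cap, and atom count are rough assumptions of mine, purely for illustration:

```python
# Rough assumptions, for illustration only: a 30-symbol alphabet, questions
# capped at 100 characters, ~10**80 atoms in the observable universe.
ALPHABET_SIZE = 30
MAX_LENGTH = 100

possible_questions = sum(ALPHABET_SIZE**n for n in range(1, MAX_LENGTH + 1))
magnitude = len(str(possible_questions)) - 1

print(f"Distinct questions up to {MAX_LENGTH} chars: about 10^{magnitude}")      # ~10^147
print(f"Rule-book entries per atom in the universe: about 10^{magnitude - 80}")  # ~10^67
# A lookup-table rule book with one entry per possible question is physically
# out of the question, never mind pictures, memory of earlier questions, or
# arbitrary arithmetic.
```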

Anyway, I’m sure the AI in the Google experiment isn’t sentient. Is it still learning, or is it static?