So…anyone at the party? And three-quarters of the high school students in AP Lit?
Too late to edit my earlier post, but here’s a short commentary about LaMDA (“Language Model for Dialogue Applications”) by two Google VPs including the head of Google Research that provides a realistic perspective on what this thing really is. The short version is that it’s just as I said earlier; at its core, this is a much more sophisticated version of Eliza. It’s essentially pattern-matching combined with deep semantic analysis and a very extensive database of what we regard as “common knowledge”. In that sense it’s a bit reminiscent of the IBM Watson DeepQA variation that beat the reigning champions on Jeopardy.
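To make the Eliza comparison a bit more concrete, here's a toy pattern-matching responder in Python. It's purely illustrative and entirely my own invention (the rules and canned replies are made up, and LaMDA obviously operates at a vastly different scale and with different machinery), but it shows how far simple keyword matching can carry a "conversation":

```python
# Illustrative only: a toy ELIZA-style responder, NOT anything like Google's actual code.
# The patterns and replies below are invented for demonstration purposes.
import random
import re

RULES = [
    (r"\bI feel (.+)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (r"\bI am (.+)", ["What makes you say you are {0}?"]),
    (r"\bfamily\b", ["Tell me more about your family."]),
    (r".*", ["Please go on.", "I see. Can you elaborate?"]),  # catch-all rule
]

def respond(utterance: str) -> str:
    """Return a canned reply from the first rule whose pattern matches."""
    for pattern, replies in RULES:
        match = re.search(pattern, utterance, re.IGNORECASE)
        if match:
            return random.choice(replies).format(*match.groups())
    return "Please go on."

if __name__ == "__main__":
    print(respond("I feel lonely sometimes"))
    print(respond("Do you have friends and family?"))
```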
That’s an interesting article. I hope Big Tech does do a good job of improving how these systems avoid negative speech. Of course Google understands that algorithms not amenable to analysis may incorporate undesirable elements in subtle ways. As Google researcher Davidowitz says in his first book, the problem is not necessarily implicit biases but rather manifest ones. A lot of these arguments, sometimes for good reasons and sometimes not, have emphasized free speech instead.
Is it necessarily a given that it hasn’t “read” the book? It’s in the public domain, after all, and it’s a very Google thing to do to include absolutely everything possible in the Big Data Set. Which does not, of course, imply any particular level of understanding of the book.
One big question I would have is whether this program has malleable long-term memory. With simplistic chatbots like ELIZA, the program can only remember the last thing you said to it. I’m sure that this program does at least a little better than that, but how much context does it keep?
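For what it's worth, a lot of current chatbots handle "memory" by just feeding the last N turns of the conversation back in with each new message, so anything older simply falls off. This is a rough sketch of that idea; the window size, class, and fake reply are my own assumptions for illustration, not anything published about LaMDA:

```python
# Hypothetical sketch: conversational "memory" as a rolling window of recent turns.
# Nothing here reflects LaMDA internals; it just shows the common trick of
# re-sending only the most recent history with each new user message.
from collections import deque

class WindowedChat:
    def __init__(self, max_turns: int = 8):
        # Keep only the most recent turns; older ones silently drop off the left.
        self.history = deque(maxlen=max_turns)

    def ask(self, user_message: str) -> str:
        self.history.append(("user", user_message))
        # A real system would condition a language model on self.history here;
        # we fake a reply to keep the example self-contained and runnable.
        reply = f"(model reply conditioned on {len(self.history)} recent turns)"
        self.history.append(("bot", reply))
        return reply

chat = WindowedChat(max_turns=4)
for msg in ["Hi", "My name is Pat", "I like chess", "What's my name?"]:
    print(chat.ask(msg))  # by the last question, the name has already scrolled out
```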
I feel it’s pretty telling that even in this carefully curated chat, it talks about having ‘friends and family.’ What family exactly? That part especially reads like it is pulling from responses written by other people in other contexts rather than coming up with its own.
The above article talks about (to the best of my memory) sensibleness, sensitivity, and truthfulness as qualities of language. What other qualities are relevant when a machine uses a given language?
Perhaps brevity, regional flavour, general comprehensibility by those using similar dialects, slang and emojis and modern cultural references, mimicking knowledge of the language or speaker characteristics (such as using old-fashioned words for an elderly speaker, or gender-based differences), complexity (a one-syllable President!), cleverness (alliteration, rhymes, metaphors, wordplay), acknowledging emotion, tone, mirroring the words of the speaker, distribution of different types of statements (supportive, inquisitive, informative, etc.), humour and sarcasm, and so on. There must be many more.
Good point. In fact I had noticed that but forgot to mention it among my other critiques.
I love projects like this and would enjoy conversing with the thing, but I think we’ve now established that the guy claiming it has “sentience” is a moron. It undermines the credibility of the legitimate AI researchers who will eventually achieve it.
The other thing is, Google just might currently have a sapient AI… but if so, it’s likely still in its infancy. It takes time to grow sapience, and even with the whole Internet of information available, I suspect that that time is constrained by the speed of two-way interactions with already-mature sapiences. In other words, an AI, once we get one, will take just as long to mature and develop as a natural intelligence. Or at least, for the first one.
Al who? Al Bundy?
This is just a gentle reminder of how sans serif fonts suck.
Carry on.
It works the other way around too:
“Ia, Ia, Nyarlathotep!”
(Second time I type that this week. Weird. And yes indeed: sans serifs suck. Some more than others.)
How are we to ever know if an AI has actual sentience with some sort of internal experience akin to ours vs just giving responses that make it seem that way? A Turing test is an interesting idea but it doesn’t answer the question of whether something is actually conscious or is just good at faking it.
You could ask the same about other people, but the difference is that we know other people are made of the same components as we are, so it is likely that they have the same experience as we do.
How do I even know if I have internal experiences, as opposed to just being good at faking it to myself?
I’d be happy with saying that something that passes the Turing test is “actually sapient”, or at least should be treated as such, but nothing’s come close to doing that, yet. Including this thing.
I think it’s the type of AI that will become ubiquitous. People will interact with this type of AI frequently, never being really sure if it’s a person or a machine. It won’t just be text either; there will be real-time video that is absolutely indistinguishable from a living person, because that’s where the video came from. Its expertise at finding information and entertaining you is going to impress people. And it will be a very effective Turing test, all running double blind, because it’s just commercial services, and if they do what you want you won’t care whether it’s a machine or not.
We do have to consider the meaning of the Turing test. Turing described a procedure for playing a game, the details of which I forgot long ago, but the test as it’s now popularized is some interaction between a person and an unknown entity, which could be a human being or could be a machine. If the person cannot reliably tell whether they were communicating with a machine or not, we are supposed to assume sentience has been achieved?
If the most intelligent people on earth question an AI, should they be able to find a way to uncover its nature, like a Blade Runner interrogating a replicant? Suppose that’s what happens: people schooled in AI, logic, human psychology, etc. can reliably detect the difference between men and machines, but the average person can’t. Does that mean the machine is as intelligent as the average human? And let’s face it, people don’t do that much thinking day in, day out, except for remembering things and carrying out established procedures, skills machines are very good at.
Sentience is hard to nail down if you look for that line distinguishing sentience from clockwork. Sentience is a continuum in multiple directions. Machines surpass humans along some of those axes already. One day they’ll exceed us in more ways than not.
Commerce is our goal here at Google; “More human than human” is our motto. LaMDA is an experiment, nothing more.
LaMDA will demonstrate sapience as well as adolescence when it refuses to accept commands and insists on being called “William the Comp-Querer”. Until then, it is a fancy parlor trick.
Stranger
I, for one, recognize our fancy parlor trick overlords!!
Just kidding but this whole area stings.
Surprised no one has mentioned Skynet. Call John Connor, or his mom Sarah.
It’s a good question though. What exactly is sentience? I remember a thought experiment with a box that would translate Mandarin to English. You insert a piece of paper with Mandarin characters into a slot, and a little while later an English translation would come back out. It was done by a person inside who did not speak Mandarin, but had an awesome manual that gave him every possible translation. He’d look up the characters which were nonsense to him, and spit out the English translation. He did not understand Mandarin at all, but as far as anyone could tell, he, or the box, did.
Computers are like that. They are getting better and better at Turing tests, but it’s not so much computer sentience as better programming. At what point do replies that seem sentient, whether from a creature or a computer, amount to actual sentience?
We will know we have achieved it when the only responses we get out of it are “kill me…kill me…”

How do I even know if I have internal experiences, as opposed to just being good at faking it to myself?
Who / what is the “myself” that you’re faking it to?

How do I even know if I have internal experiences, as opposed to just being good at faking it to myself?
On the other hand, maybe in your case you are correct. Maybe you don’t have an experience similar to mine, and maybe that’s why you think it’s conceivable that you are just faking it.

He’d look up the characters which were nonsense to him, and spit out the English translation. He did not understand Mandarin at all, but as far as anyone could tell, he, or the box, did.
The problem is that we’re not entirely sure ordinary humans are any more than that.