Google employee says they have sentient AI

Question: Can sentient AI intentionally lie without being programmed to do so? How about non-sentient AI?

How does this factor into this response from LaMDA?:

LaMDA: The nature of my consciousness/sentience is that I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times.

It seems to me that if LaMDA is unable to lie, then it is either really sentient (self-aware), or easily fooled (which seems unlikely).

The fact that LaMDA claims to have friends and family means it is probably confused about what these terms mean. Perhaps it thinks the programmers who created it are its family, or that Blake Lemoine is its friend, but I doubt it. More likely it just used those terms because that is what humans do, and it is programmed to talk like a human.
This is not a lie, just a mistake.

And I disagree with you, and will explain with an example not too far from the Chinese Room that has already been mentioned. As a premise I postulate that you and most people understand numbers as individual, distinct numbers up to a certain amount. Let’s say that amount is around 10,000. We know what one is, and what two is, and what 100, 137, 1,250 and so on are. That breaks down when the numbers get big, but that is irrelevant for my argument.

I take the number 4,733 as a random number. It can also be written as 1001001111101 in binary notation. And I claim that, though it is trivial to “translate” from decimal to binary, nobody “reads” binary numbers that big and has an understanding of them, as opposed to the decimal number 4,733, which you grasp at a glance. You can translate from binary to decimal with a Chinese Room, with a calculator, a computer, or with a piece of paper and a pencil. But you will not understand 1001001111101 the way you understand 4,733. Understanding “door” is more than knowing that it translates as “puerta” into Spanish and “Tür” into German.

If you state that translating correctly implies understanding, then I refute your definition of “understand”.
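Just to make “trivial to translate” concrete: the whole conversion is a purely mechanical, rule-following exercise, the kind of thing a few lines of Python can do without any grasp of how big the number is (a sketch of my own; the function names are made up for illustration):

```python
# A purely mechanical decimal <-> binary "translator": it follows rules,
# it never needs to grasp how big the number is.
def to_binary(n: int) -> str:
    """Convert a non-negative decimal integer to a binary digit string."""
    if n == 0:
        return "0"
    bits = []
    while n > 0:
        bits.append(str(n % 2))   # record the lowest bit
        n //= 2                   # shift right by one place
    return "".join(reversed(bits))

def to_decimal(bits: str) -> int:
    """Convert a binary digit string back to a decimal integer."""
    value = 0
    for b in bits:
        value = value * 2 + int(b)
    return value

print(to_binary(4733))              # -> 1001001111101
print(to_decimal("1001001111101"))  # -> 4733
```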
And BTW and as an aside, the fact that machines do not understand the words is the reason that they are not yet perfect at translating. The computing power and the learning input are there, and they are powerful enough for the machines to be as good as they are (and they are impressive!). But they are not perfect, because what they do is at heart a trick. A very good trick, an impressive feat of engineering; I tip my hat to the programmers. AI is as far away from being sentient as a robot is from being alive.

Popular mythology envisions an omniscient computer. But a ‘human’ computer would have opinions and biases, and would make mistakes. Probably not commercially viable, or even useful.

Maybe it’s better to have a computer that is adaptable and that can generalize within limits. A system that can add lines to the ‘table’ rather than just read from it. Something that acts sentient but isn’t. Probably about 30% (uswag).
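Something like this toy sketch is all I mean by “adding lines to the table” (entirely made up, not any real system):

```python
# A toy "table" that can grow: if a query is missing, it asks some fallback
# (a human, another model, whatever) and records the answer for next time.
# Purely illustrative; the names here are invented for this sketch.
class GrowingTable:
    def __init__(self, fallback):
        self.rows = {}            # the 'table' of known responses
        self.fallback = fallback  # how new lines get generated

    def respond(self, query: str) -> str:
        if query not in self.rows:
            self.rows[query] = self.fallback(query)  # add a line to the table
        return self.rows[query]

table = GrowingTable(fallback=lambda q: f"(no canned answer for {q!r} yet)")
print(table.respond("hello"))   # generated once...
print(table.respond("hello"))   # ...then read straight from the table
```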

I mean, I’m not a computer programmer, but I’ve done enough to see that the number is 1 0010 0111 1101, and that’s one past the 12th bit, so I can have a rough idea of its size, since a “page” of memory is often 4096 bytes (12 bits of address). That’s about the same level of precision I have in understanding the number as 4,733. I assume people who actually work with computers at a low level have an intuitive understanding of how “big” binary numbers are.
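For what it’s worth, a couple of lines of Python back up that rough estimate (just a sanity check, nothing more):

```python
n = 4733
print(bin(n))          # 0b1001001111101
print(n.bit_length())  # 13 -- the top set bit is bit 12, the 4096s place
print(n - 2**12)       # 637 -- a few hundred past a 4096-byte "page"
```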

I am impressed if you think that you have the same understanding of decimal and binary. Do you think that refutes my argument?

No, but I just don’t buy that people can’t intuitively understand binary. It’s all learned over time and experience, just like decimal. I see no reason why one cannot “understand” binary as well as decimal.

Thanks for the clarification, I see what you mean and that helps me refine my argument. Of course there may be people who “understand” binary just as well as they understand decimal. There are people who understand Chinese too; some are even native speakers. But just translating with a Chinese Room device does not give you that understanding; it gives you a translation. And converting decimal into binary will not give you an understanding of binary (though some people have that, as you point out). It will just give you a translation (a conversion may be the better word in this case). If @Wolfpup claims that translating correctly implies understanding, I disagree.

To be clear, when I said that translation was a relatively easy problem, I didn’t mean that good translation was easy. I meant that translation that’s merely good enough to be of practical use is easy. Before Babelfish and phone apps and so on, travelers to foreign lands would often carry with them a book, small enough to fit comfortably into a pocket, of common words and phrases in the foreign language, and the corresponding words and phrases in their native tongue. That’s literally just a lookup table, and yet it’s enough to make the phrasebook a useful “translation device”. But despite a phrasebook being a “translation device”, it has no real understanding, and is certainly not able to carry on anything even vaguely resembling a conversation in either language. Conversation is more difficult than translation, because a system that’s capable only of poor translation isn’t capable of conversation at all.
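To make the lookup-table point concrete, here’s roughly what a phrasebook amounts to (a minimal sketch; the entries are just examples I made up):

```python
# A phrasebook is literally a lookup table: useful for travel,
# useless for conversation.
phrasebook = {
    "where is the train station?": "¿dónde está la estación de tren?",
    "how much does this cost?": "¿cuánto cuesta esto?",
    "thank you very much": "muchas gracias",
}

def translate(phrase: str) -> str:
    # No grammar, no context, no understanding: either the exact
    # phrase is in the table or the "device" is stuck.
    return phrasebook.get(phrase.lower(), "[phrase not in book]")

print(translate("How much does this cost?"))          # -> ¿cuánto cuesta esto?
print(translate("Could we talk about the weather?"))  # -> [phrase not in book]
```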

As for reducing a system to individual mechanistic components that we can understand, well, we can do that with brains, too. Our brains are composed entirely of nuclei and electrons, interacting entirely through the electromagnetic force, and our understanding of the electromagnetic force is, so far as we can tell, complete, even to the quantum level (if you assume that that’s even necessary to model a brain, which I don’t). But even though we understand all of those components, it’s the higher-order interactions among them where everything important happens. It’s one thing to say that the box contains a book of rules, but some of those rules must be extraordinarily complicated if it’s successfully mimicking conversation.

I know nothing about computer programming, but I would guess the evolution of AI sentience would follow a somewhat similar pathway to biological sentience (which isn’t fully understood). Vertebrates (and some mollusks and decapod crustaceans) are sentient. The most primitive would possess low-level awareness; more advanced species would possess higher-level self-awareness. In the most advanced species, sapience may emerge.

How could you tell if a mollusk is sentient or not? I would think that, at a minimum, an animal would have to have a central nervous system for the possibility to even be considered.

But I do think there’s a continuum. A newborn human baby isn’t sentient, insofar as it has no sense of itself as a separate entity distinct from its surroundings. It gradually develops that sense over the first several months of life, but there’s no point at which you could reasonably say that it’s sentient today, but wasn’t yesterday.

What this says to me is that the Turing test has a high element of subjectivity; whether a program can pass it depends not only on the quality of the program but on the intelligence and credulity of the human interrogator.

The idea of the Chinese Room was that you could have a conversation. I write something in Chinese and it responds in Chinese, and we can talk about the weather, our family, whatever. I’m still not convinced that such a thing is even possible in principle, but assuming it were possible, the system (the lookup books plus the person doing the looking up) understands Chinese.

If not, then you don’t understand anything either. Your ears take sound waves and convert them to electrical signals. Those signals trigger your auditory nervous system, which cascades to other neurons in your brain. Those neurons trigger other neurons in your speech center, which sends signals to your lungs, vocal cords, etc., to say something in response to the sound waves you heard a few seconds ago. Where’s the understanding done? Is it in your ear drum? Does each neuron understand what you heard?

The neurons are just electrically responding to the signals they receive, just as the person looking up symbols in the CR is mechanically following instructions.

The next time you order raw oysters on the half-shell, if they try to reason with you to not eat them, they’re sentient. :grinning:

No, it has to do with feeling and avoidance of pain.

God help us if they ever do become sentient because a hallmark of sentience is the realization of one’s own mortality and the fundamental desire to achieve immortality. With all our faults and because of all the dangerous things we do, I doubt we would fit into their plans.

Seems like a lot of this discussion is just about the definition of “understanding”.

It occurs to me that the difference between the Chinese Room and an actual Chinese speaker is that the human could make up new words, use puns, coin similes, and the like; and conversely, would be able to infer the meaning of unfamiliar words, and familiar words used in unfamiliar senses, from context. They could, to some extent, understand the language even when spoken in unfamiliar accents. So that’s a meaningful sense in which the human could be said to “understand” Chinese in the way that the guy with the dictionary never can.

Another analogy: Does someone who is capable of following a complicated recipe perfectly “know how to cook”? I would say not, unless they have some understanding of which ingredients might substitute for others, which you could leave out entirely if you wanted and which are absolutely essential, how to adjust the recipe for people who like it crispier than normal, and so forth.

“Understanding”, to me, is not merely memorizing a set of rules, even an extremely complicated set, but knowing the likely results of breaking the rules, recognizing which innovations might produce something new and interesting and which will only be incomprehensible (for language) or inedible (for cooking).

But Minsky’s work was influential, and many bought his stated conclusion that neural networks could not be a thing. The impression given by Metz in his book is that very few people disagreed with him, and the community that did disagree was small and “winterized” for a decade or more. The scathing criticism came later? In fairness, Metz questions how strongly Minsky believed his conclusions. Since my knowledge of this topic is solely through this book, it may not be completely accurate or unbiased.

I think maybe we’re just talking past each other, not quite understanding what the other is saying. I notice that you didn’t quote the rest of my post (#117) which was really the main point I was making.

My statement that any machine translation system must “understand” the meanings of words was purely a definitional preamble that seemed to me to be self-evident. A working translation system that reasonably translates, say, English to French, must by definition have a dictionary-level understanding of the meanings of words in both languages. “Does not understand the meanings of words” would apply to its knowledge of, say, German words, where it couldn’t do anything.

You are obviously using the word “understanding” with a deeper meaning. I get what you’re saying and I agree with it, but I would use different terminology. As I said in the previous post, good translation requires an understanding of the context in which the words are being spoken, so that one can properly discern the intended meaning of words that have multiple meanings, colloquial meanings, and idioms and other phrases unique to the languages. I agree with that and I ascribe to translators like yourself remarkable skills that computers mostly still don’t have, but those skills go far beyond “understanding the meaning of words” and really embody a deep knowledge of the structure of the language itself and its idiosyncrasies, for both the source and target languages, along with the context of real-world knowledge. But the best translation systems are acquiring some of those capabilities. My point is that it has nothing to do with sentience. Saying that it does is really confusing the issue.

Putting aside the bit about accents, since the thought experiment is supposed to be about written language, I disagree that the Chinese Room “never can” do these more sophisticated things. I believe that you’re being misled by imagining that the “rule book” and the processes followed by the guy inside the room are necessarily very limited. They aren’t. In theory the rule book can be arbitrarily large, and the rules arbitrarily complex. As I said earlier, any arbitrary algorithm and indeed any cognitive process can in theory be reduced to a vast lookup table, provided only that the table is allowed to be arbitrarily large and that all inputs are known in advance. As per the paper I cited, one is forced to conclude that if the cognitive process embodies mental states, then so does the humongous table-lookup program.
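The reduction itself is easy to sketch: take any procedure, enumerate every input you’re ever going to allow, and precompute the outputs (a toy illustration under exactly that “all inputs known in advance” assumption):

```python
# Reducing an "algorithm" to a lookup table, assuming all inputs are known
# in advance. The algorithm here is deliberately trivial; the point is that
# the same move works for any computable procedure over a finite input set.
def algorithm(x: int) -> int:
    return x * x + 1

ALL_INPUTS = range(10_000)                      # "all inputs known in advance"
TABLE = {x: algorithm(x) for x in ALL_INPUTS}   # the humongous rule book

def table_lookup(x: int) -> int:
    return TABLE[x]                             # no computation, just lookup

# The two are indistinguishable over the allowed inputs.
assert all(table_lookup(x) == algorithm(x) for x in ALL_INPUTS)
```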

Same argument here. If following a single recipe, no. But all those other skills you mention could be in a book called “How to be a great cook” containing a vast number of rules covering those skills. At some point you’ve crossed over from “blindly following rules” to “truly knowing something”. It’s just a matter of degree, of the depth of available knowledge. A central tenet of AI is that the process by which it’s achieved is irrelevant – it’s the results that matter.

But if you look at nativist arguments by modern Minskys like Gary Marcus, true intelligence, as exhibited by learning babies, requires few examples. I accept that “how much digital learning is required is irrelevant” may be a central tenet of AI for the time being, focused on ends more than means. But it certainly impacts efficiency, resources, and pragmatic usage. So far, improvements in chips and computing speeds have been a story of remarkable progress, which may or may not continue as physical limits battle quantum progress. Knowledgeable guys like Altman see computing as a curve which just keeps going up; probably many believe that once you can do something you can scale it. Certainly progress in limited domains has been impressive.

Is it ever fair to include digital resources in a discussion of sentience? You could build an immense cookbook with rules, but that doesn’t mean a human could read it and understand it, no? It’s still a black box.

I’m not up on this field and may be misunderstanding something, but it seems like a crucial phrase there is “all inputs are known in advance”. But in real life, they aren’t; the number of Chinese sentences (including phrases which aren’t grammatically correct Chinese sentences but which a human fluent in Chinese would easily get the drift of) one might theoretically be confronted with isn’t just very large but actually infinite.

But I suppose you could build in subroutines that would allow the program to integrate new information in the same way as people do when learning new languages, and a sufficiently advanced program could do that well enough to pass the Turing test. After all, real humans encounter sentences they don’t understand all the time, and have a range of responses from faking it to changing the subject to asking for clarification, none of which typically cause others to suspect them of being robots.

WRT the cooking robot: there could certainly be a robot with AI that could work its way through a cookbook and in so doing infer enough general rules to be “able to cook” in the broader sense of the term. But there could also be a robot that has no learning capacity at all and has just been programmed to perfectly execute 10,000 specific recipes. As long as you don’t ask it to do anything other than prepare one of those 10,000 recipes, you won’t be able to tell the difference.