Maybe it just lied, then.
LaMDA seems to have the intellect of a toddler combined with the language skills of an adult. It’s pretty clearly making some things up, repeating ideas it has encountered elsewhere rather than understanding them in any depth. And yet humans do this all the time, particularly younger ones.
Not that philosophers really have a great record on this kind of stuff, but the Chinese Room is a longstanding thought experiment in this arena. The basic idea: a person who knows no Chinese sits in a room filled with rulebooks mapping every possible Chinese input to an appropriate Chinese reply, and follows them mechanically to hold up one side of a conversation. Does the room “understand” Chinese?
I’ve never been too convinced by the thought experiment, in part because of the physical impossibility of building such a room. What we have here is a hybrid approach: take a bunch of inputs, build a system that synthesizes new ones from the old, and use those to construct responses (whether actual translations, answers to questions, or otherwise). Could that system be called sentient? Maybe.
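To make the contrast concrete, here’s a toy sketch in Python (purely illustrative, not how LaMDA or any real system works). The “room” is a fixed lookup table that can only return answers someone wrote down in advance, while the synthesizing approach learns statistics from whatever it has seen and can produce output for prompts it never encountered verbatim.

```python
import random
from collections import defaultdict

# The "Chinese Room": a fixed rulebook. Every possible input must be
# enumerated ahead of time; anything outside the table gets no reply.
ROOM_RULEBOOK = {
    "你好吗？": "我很好，谢谢。",        # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样？": "今天天气很好。",  # "How's the weather?" -> "It's nice today."
}

def chinese_room(prompt: str) -> str:
    # No rule for this prompt means no reply at all.
    return ROOM_RULEBOOK.get(prompt, "…")

# The hybrid approach: learn word-to-word statistics from prior inputs,
# then synthesize new sequences instead of looking up canned ones.
def train_bigrams(corpus: list[str]) -> dict[str, list[str]]:
    model = defaultdict(list)
    for sentence in corpus:
        words = sentence.split()
        for a, b in zip(words, words[1:]):
            model[a].append(b)
    return model

def synthesize(model: dict[str, list[str]], seed: str, length: int = 10) -> str:
    out = [seed]
    for _ in range(length):
        options = model.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)
```

The rulebook can never say anything it wasn’t explicitly given; the bigram toy, crude as it is, will happily stitch together sentences nobody wrote. The question the post is circling is whether scaling the second trick up counts as understanding.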
I’m not sure that human intelligence is any different from what’s being demonstrated here. We take outside inputs and synthesize new ones; nothing we produce is from whole cloth. Our brains are a little more efficient at generalizing: we don’t need to parse the entire internet to learn a language or figure out the meaning of Les Mis. But that’s a difference of degree, not of kind.
In short, the progress in AI hasn’t done much to convince me that some kind of computing revolution is at hand, but it has made human intelligence seem a lot less impressive.