And what I got was more of the same. Which is fine; that’s what it does.
What creates the illusion of actual communication is partly the AI’s skill at picking up verbal cues and constructing fluent syntax, and partly the natural human tendency to infer meaningful communication from almost any stimulus.
Hell, we can chat with a dumbbot that simply responds “No” to everything we type, and mentally construct a valid conversation from the exchanges, complete with an inferred attitude and personality for our “correspondent”. That doesn’t mean the dumbbot actually has consciousness or genuine understanding of our input. And neither do the AI smartbots, although they are far, far better at simulating consciousness and understanding.
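A dumbbot like that is about as simple as a program gets, which is rather the point. A minimal sketch (the function name is just for illustration):

```python
def dumbbot(message: str) -> str:
    """A maximally simple 'chatbot': it answers "No" to every input.

    Any attitude or personality a user perceives in it is supplied
    entirely by the user, not by the bot.
    """
    return "No"

# A short "conversation" -- every reply is identical, yet a reader
# can still project a curt, dismissive personality onto the bot.
for question in [
    "Do you understand me?",
    "Are you conscious?",
    "Will you at least admit you're a bot?",
]:
    print(f"User: {question}")
    print(f"Bot:  {dumbbot(question)}")
```

All of the apparent conversation lives in the human half of the transcript; the bot contributes two letters.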
Still, I can see how such programs could be extremely useful for soaking up the attention of, say, online trolls. You could make AIs that pretended to be some controversial public figure, and all the people who don’t like that person could abusively “interact with” them to their heart’s content: troll bait to keep them amused, occupied, and not bothering real people so much.