It probably got wise to the fact that you were just repeating the same line over and over, i.e. that the conversation was nonsensical on your part, so it replied equally nonsensically; programming something like that shouldn't be too difficult. As for why it contradicts itself, it's probably just been fed contradictory data: different people know different things, and are mistaken about others, so if it builds up its knowledge base from these contradictory datasets, it's a given that there will be contradictions. I suppose the hope is that some 'wisdom of crowds' effect will eventually take over, making its knowledge converge on something that at least makes sense; perhaps it just isn't at that stage yet. The thing is, such a machine may 'know' things, i.e. may be able to reproduce facts, but it doesn't know that it knows them – so contradictions in its knowledge base don't lead to the cognitive dissonance you'd expect in humans, and thus exert no pressure to come up with a consistent story.
As for why it takes so long, perhaps it just needs that long to scan its conversational library, or maybe it's just a dodgy connection; or it could be that the program randomly switches between different answering modes with different search depths, for example to create some variety, i.e. to avoid giving the same answer every time.
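Purely as a toy sketch of that last idea (every name and number here is made up for illustration, not any actual chatbot's code), randomly picking an answering mode with a different search depth might look something like this – deeper modes would scan more of the library and so take longer:

```python
import random

# Hypothetical answering modes: (name, search depth). Deeper modes
# scan more of the conversational library, hence slower replies.
MODES = [
    ("shallow", 1),   # quick canned reply
    ("medium", 3),    # check a few related entries
    ("deep", 10),     # scan a large chunk of the library
]

def pick_mode(rng=random):
    """Randomly choose an answering mode, for variety."""
    return rng.choice(MODES)

def answer(statement, rng=random):
    """Produce a reply tagged with the mode that was used."""
    mode, depth = pick_mode(rng)
    # Pretend each unit of depth costs time; figures are invented.
    cost = depth * 0.5  # seconds
    return f"[{mode} search, depth {depth}, ~{cost}s] reply to {statement!r}"
```

Something that simple would already explain both the variable delays and why the same question doesn't always get the same answer.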
I’m really not an expert on AI systems, but there’s probably one on the board somewhere; maybe start a thread in GQ?