The llama is a quadruped which lives in the big rivers like the Amazon. It has two ears, a heart, a forehead, and a beak for eating honey. But it is provided with fins for swimming.
Your link points to the llama-related post, because your actual link target went away into the corn. Which was a good thing, a very good thing @What_Exit did. Yep, a good thing.
There’s a newbie drive-by idiot over in the “disgraced convicted felon” thread (Meep) but it’s the Pit so it’s fine. We’ll see if they bother to come back and if they’re smart enough to stay in the Pit.
Even though Varl_Hobe has been banned as a troll, one salient thing about that “Anti-White Male” travesty is that his posts were obviously written by AI. Anybody who has spent much time talking to AI chat will immediately recognize the style. A little too polished, considering how rough the drafts usually are around here. A little too efficient at connecting thoughts in logical order to build an argument, whereas real people’s thoughts wander in between the relevant bits. Why, as I was just saying to my grandson the other day… Never mind, back to the topic.
AI is very generous about putting quotation marks around terms repeated from the prompt. That’s perhaps the most obvious tell. The em dash is sometimes taken to be characteristic of AI writing, and the em dashes in that thread look like a case in point. Another characteristic of AI is to end by turning the question back around, asking questions that lead further into the topic.
The thread was closed before I read it, or I would have said this there. If somebody tries that trick again, we need to call them out as soon as they start.
I didn’t look at the thread because the topic seemed stupid, but I must say that’s a very perceptive observation and, from what you’ve described, likely correct that the content was copied directly from an AI chatbot.
The poster seemed very single-minded. My take was it was a human who had a particular gotcha in mind and he(?) was going to keep pounding his one nail until we all gave up and agreed he was right. Sort of a variant of “just asking questions”, but not trying to Gish gallop all over the place; just rejecting every answer as “Not good enough; try again.”
The sad truth is that he did indeed pose a genuinely difficult problem, one bordering on impossible to solve.
Another thanks to @Johanna for alerting me / us that that single-mindedness is a decent tell for AI content. I don’t interact with chatbots at all, so I don’t have that familiarity.