Request: don't put ChatGPT or other AI-generated content in non-AI threads

The wolves will be coming for you.

But the problem goes beyond the model not having context about whether this is a riddle; it is more fundamental than just needing a more expansive training set or better-tuned RAG parameters. The model gives a nonsensical response that anticipates a puzzle, presumably because the form of the question resembles actual riddles in its training set, even though the naive answer (the one a child unfamiliar with the brainteaser would give) is just that the man and goat get into the boat and paddle across the river. And it delivers that nonsense in a way that a cursory reader might take as authoritative.
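For anyone who wants to see this for themselves, here is a minimal sketch using the OpenAI Python client; the model name and prompt wording are just assumptions on my part, and any chat-capable LLM endpoint would demonstrate the same kind of pattern-matching.

```python
# Minimal sketch: send the plain "man and a goat" question to an LLM and
# print the reply. Assumes the openai package is installed and an API key
# is set in the OPENAI_API_KEY environment variable; the model name below
# is a hypothetical choice, not the specific model discussed in this thread.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "A man and a goat are on one side of a river. "
    "They have a boat. How can they get across?"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # substitute whatever chat model you have access to
    messages=[{"role": "user", "content": prompt}],
)

# The naive answer is "they both get in the boat and paddle across," but a
# model pattern-matching on the classic wolf/goat/cabbage riddle will often
# produce an elaborate multi-trip "solution" instead.
print(response.choices[0].message.content)
```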

In this case the answer is obviously nonsense to anyone (I hope, at least), but when it comes to questions about more complex phenomena it may not be at all obvious that the result is gibberish, and someone who doesn't know the correct answer may take the result as truth. Which is a real problem when people start applying these LLMs and other generative AI to critical real-world questions and problems.

Stranger