Request: don't put ChatGPT or other AI-generated content in non-AI threads

I can see the advantage of having a suitably trained and validated chatbot generate basic glurge for a press release, as an initial draft of a legal brief, or to produce technical documentation (assuming someone does a thorough job of reviewing it and checking facts and references), but introducing it into an open discussion seems pointless unless literally nobody has any useful ideas. I’m sure the technology will evolve to the point that it won’t require tailored prompts and can just engage in free discussion, but I don’t think I’ve ever wanted to have a debate with my toaster oven or cared what my refrigerator thinks about the future of the Panama Canal.

I do think that “AI” will be crucial in developing certain ideas in physics and mathematics that seem to be beyond the amount of information a human can hold in their head, and potentially useful in a number of contexts as a more flexible expert system than purely rules-based knowledge agents can ever be. But there is a long road before such systems are reliable enough to be trusted to provide useful results (and to distinguish between ‘truth’ and nonsense), and it is completely unclear at this point how we can develop a framework to validate their reliability.

Stranger