Request: don't put ChatGPT or other AI-generated content in non-AI threads

Why do you think some arbitrary website designed and run by fallible humans is necessarily going to be more reliable than an AI? The only reason I can think of is that incorrect website results are persistent and will eventually be called out as wrong, but that’s neither an indicator of intrinsic reliability nor a guarantee of success in correction of factual errors.

Humans provide information and opinions based on perceived evidence all the time, just like AIs. However, humans also provide that information based on selfish motivations, including a willingness to distort basic facts and lie about them in the promotion of monetary or political self-interest. Should humans also be banned from message boards for posting content that sounds good but isn’t right, according to some possibly arbitrary definition of “right”?

A website like the one I linked to is more easily tested on simple cases, and is purpose-built for one set of tasks - a considerably easier assignment than “answer any kind of question.” It might be wrong, but it’s verifiable.

Absolutely, humans can be wrong, biased and deceitful. I have no particular opinion about whether any individual human ought to be banned - or whether ChatGPT or the discobot ought to be banned here. I was simply providing information on a strategy for finding a good answer that I think is better than asking Bard, etc.

But I will note that while some humans are wrong, biased, or deceitful, other humans have an interest in providing good information - and I'm not convinced that any ChatGPT-like program has yet been designed that has the same interest (see this experiment: Can ChatGPT Answer Story Identification Questions? - Science Fiction & Fantasy Meta Stack Exchange).

The thread title is overbroad. Here’s my proposal for best practice: “Outside of AI threads, always cite significant use of AI chatterbots, but go through the output with a fine-toothed comb. Strongly consider characterizing the results with due skepticism; pare down the direct chatterbot quotes, quite possibly to zero.”

In short, present the AI perspective as coming from a sometimes useful but always unreliable source, one that is unreliable in atypical ways. So for Og's sake, don't use AI chatterbots as a conceptual shortcut.

Meh. Given the current state of the field, I stand by the OP request. I think we’ve seen several examples where people thought they were following the practice you described, but weren’t able to do so; and I think there are people here who have an unreasonably sunny view of the current state of AI, and who can’t apply the appropriate degree of skepticism.

Better to err on the side of not including such input in threads where it’s not the topic of the thread.

I think we're here to fight ignorance, and while LHoD's proposal would improve the quality of the average non-AI thread (IMHO), that's not the only consideration. Chatterbot technology is rapidly evolving, and figuring out better ways of utilizing it advances the board's mission. So the trial and error process (mostly the latter) should be instructive.

Sure, these chatterbots will probably destroy the world as we know it, but we have no control over that in this corner. What we can do here is work out the best way to use them, in the days prior to Skynet's activation.

Does it? How?

Generative AI is a great tool for certain tasks and it is getting better. But the best hammer still makes a lousy screwdriver.

If we have a mission other than entertaining each other, it is to reduce ignorance. Our own most of all. Knowing how to think critically, and developing our skills by playing at that together, is core.

How does that tool help that in its current form?

So make as many threads as you like explicitly devoted to exploring the use of chatbot technology.

  1. It will be different/better in 3-6 months, so guidelines will be quickly outdated.
  2. I was able to use Bing to find a list of candidate wishlist generators, something that search wasn’t as good at.

I haven’t explored the chatterbots all that much, but I think the burden of proof is on those saying they currently don’t have any constructive application on this board. I hypothesize that I located at least one, not including my wisecrack example. Sure I could hang out in the dedicated AI threads, but there’s nothing like data generated in the wild.

What I thought people would object to was my argument, “Sure, an outright ban would improve the non-AI threads on average, but there are broader ignorance fighting issues.” I suspect the concession that yes, “…there are people here who have an unreasonably sunny view of the current state of AI, and who can’t apply the appropriate degree of skepticism,” [1] would end the argument for most people. Not for me though.

The main downside for me is that this adds to the workload of our volunteer mods. I don't think the burden is large (though I could be wrong), and I think taming one of the great developments of our day is a worthy use of their time.

Agreed. Learning the many ways that LLMs steer us in the wrong direction fights ignorance.

Personal points discussed here: The Miselucidation of Whack-a-Mole - #89 by Measure_for_Measure

[1] Quote from LHoD.

Of course it could be instructive. If there’s a thread on, say, the evolution of dogs from wolves, someone might ask ChatGPT about the topic and post the answer, and then we can explore and discuss the accuracy of ChatGPT’s answer, and the wisdom of posting it.

Now the thread is about ChatGPT, and not about wolves and dogs.

To reiterate:

I agree! Now who has said that, so that we can burden them thusly?

You can start here, straight from the OP:

people will post the results of a ChatGPT query. And I wish they wouldn’t.

Followed by itemized reasons why they’re a terrible, horrible, no good, very bad thing to ever post on the board (except, of course, when specifically discussing how stupid AI is).

Seems distinct from “they currently don’t have any constructive application on this board.”

LHoD is pretty clearly advocating for strict guardrails, not for an outright ban across the board.

Indeed. @Wolfpup is hyperbolically misrepresenting what I said, and selectively quoting me (the bit he quoted was part of a sentence that began, "In threads on unrelated subjects", context that dramatically changes the meaning of the quote).

His reasons for creating such a misrepresentation aren’t relevant, but I think my responses to him will need to remain in the Pit. I just hope that others understand that I’m not saying what he’s claiming I’m saying.

As to FQ …
If you're competent to vet the chatbot's output for accuracy and correctness, you're competent to write your own response from scratch. And if you're not, you're not. An expert-written response will be better because it comes from comprehending the problem, not from stochastically whacking away at it.

The real value in FQ is hearing experts on whatever topic speak authoritatively, and speak to the nuanced points raised by the OP & subsequent commenters. Not the gross gloss that chatbots often provide, particularly without expert prompting.

To be sure, there's lots of human speculation and anecdote posted in FQ too. Not every response is an SME dissertation. But the human posters are much better about saying explicitly, or showing by their vocabulary or the way the story is told, that they are not SMEs on the topic.

Bottom line for FQ: leave the chatbots out of FQ. They have negative value. They are simply a disguised version of “let me google that for you”. Which is rightly frowned upon by the mods as ultimately disrespectful to the OP. If you care enough to write a post, do so. If you don’t, then keep reading.


For all the other forums …
We care what you think. Or what you know. Or what you learned or your experiences. Your opinions. Not the opinions of some app. This is about humans yakking for fun, entertainment, commiseration and mutual informal enlightenment and experience broadening.

The Cantina on Tatooine did not allow droids in where the bio-folks were drinking & carousing. “We don’t allow their kind in here.”

IMO we don’ need no steenkin’ chatbots cluttering up our Cantina. I didn’t come here to “talk” to a computer.

Bottom line for not-FQ: leave the chatbots out of not-FQ. They have negative value. They are simply a disguised version of “let me google that for you”. Which is rightly frowned upon by the mods as ultimately disrespectful to the OP. If you care enough to write a post, do so. If you don’t, then keep reading.


Bottom bottom line for everywhere: leave the chatbots out of SDMB. They have negative value. If you care enough to write a post, do so. If you don’t, then keep reading.


ETA: Oh yeah, one more thing … Get the hell off my lawn!!! :grin: :crazy_face:

I pretty much agree with that. I don't want a blanket ban on chatbot output, but it should only be used when it's contributing something unique. For example, if someone says it's impossible to write a poem using only words starting with X, the easiest way to test it would be to ask ChatGPT to do it. If it can, I see no reason why I shouldn't post the result as a refutation.

The grey area is that the poster may be offering this as a challenge to other posters, so using a chatbot would violate the spirit of the thread and should be disallowed.

If we get to the point where every time someone asks a question another user runs to ChatGPT to cough up an answer, we’re doing it wrong.

Agreed. To be explicitly clear, here are two different hypothetical posts:

A ChatGPT response is fine. (And no, I'm not asking, so if you decide to query ChatGPT on the subject, kindly post the response elsewhere).

A ChatGPT response would be pretty annoying.

Queries like the first are really, really rare, though: they’d consist of factual queries whose answers are immediately and unambiguously either true or irrelevant. I’m not sure in my quarter century here that I’ve seen a single question like that. Posts like the second are everyday posts, though.

Yeah, agreed.

That’s a reasonable example of a bad application of ChatGPT on this board. The wolves/dog thread shouldn’t be about how well AI did in answering the question. (It would be fine to discuss that in a catch-all spinoff thread).

Furthermore, if the mods find that in 6 months, 100% of ChatGPT usage has been bad, then we can move on from there.

I thought DSeid, the guy I was replying to, was saying that. I mean if people are always using ChatGPT as a hammer when a screwdriver was called for, that would be bad. If everyone agrees that there is scope for constructive use of ChatGPT in non-AI threads, then we’re golden.

But frankly that’s not obvious. With the understanding that I’ve done very little with this tool, here are some possible uses of LLMs in non-AI threads.

  1. LLMs have novelty value. If that was all, I would agree with the OP. OMG yes.
  2. LLMs can provide first drafts. If so, it would not be inappropriate to cite their use. Possible objection 1: yes, people use them in that way, and they are wrong. Possible objection 2: we don't care who your first grade teacher was, and you really don't have to detail your workflow. At any rate, I've never experimented with LLMs in this way, but if I did I would print out the output and mark it up in pencil. And credit its help, maybe.
  3. LLMs can substitute 100% for human thinking and creativity. Time to throw in the towel, boys: it was a good run while it lasted.
  4. LLMs can be used as a brainstorming device. That's how I've used it (a little). When Google fails, then Wikipedia fails, then the standard references such as PC World, Wirecutter, Consumer Reports, this message board, Reddit, or Quora can't get me what I'm looking for, I pursue volume, not quality. The idea of brainstorming is that you come up with 10 ideas, the great majority of them bad, in order to find that one good lead. Eliza doesn't work too well for that (are you here because you want your question answered, mfm?), but stochastic parrots can sometimes help. That's what I did when I was looking for alternatives to Amazon's wishlist. That's what I tried to do when I was looking for career prospects for uneducated former congressional reps, though Bing (reasonably) refused assistance.

There are other narrow uses for AI, ones that hopefully won’t derail threads, or can be moderated so they don’t derail threads. I suspect they are not common, at least currently. But methinks the rarer ones are worth teasing out.

I actually have considerable sympathy with the idea that we need to be careful about how we use AI on this board, and I think posters like @LSLGuy and @Measure_for_Measure have recently stated the case well. I admit to having little patience with how recent AI breakthroughs, and their important utility, are being unjustly denigrated, but that’s an entirely different conversation.

What I mainly want to address here is that I believe I quoted you fairly and in the proper context. Misquoting is a serious violation of one of this board’s cardinal rules. Is that honestly what you think I did there? The only thing that the omitted preamble adds to the context is that it should be OK to quote snippets of GPT responses when – and only when – discussing GPTs – which is a self-evident truism. But the essence of your OP is that one should not actually use them when posting in any other context. Which, again, I have some sympathy with because I can see this potentially being abused. But I absolutely did not misrepresent what you said.

I agree with him that you selectively excerpted his words in order to simplify his argument and make him sound foolish so you could have a straw man you could knock down in order to keep cheerleading for the chatbots.