Request: don't put ChatGPT or other AI-generated content in non-AI threads

No, it isn’t. It’s generating text based on all kinds of input, but is not in any sense ‘searching’ them and the output could be directly contradictory to what those inputs were.

Unless you’re talking about that Bing thing or whatever, but that’s not ChatGPT and AIUI outputs cites.

GPT-4 actually does an online search using available search engines. It is a user interface.

If I want to argue about what ChatGPT is, I can post this:
https://www.google.com/search?q=chatgpt

which is actually more useful than posting a response from ChatGPT, because at least it has links that you can follow to figure out what various sources have to say on the subject. A ChatGPT textblock lacks those links, so I don’t know what sources ChatGPT is using to string words together probabilistically.

I don’t ever need to see someone’s search results, whether it’s a link to a Google results page or a ChatGPT response. It’s no more a cite than a page of Google results is a cite.

Also, using ChatGPT to write an argument for you is… I don’t know, a bit bizarre.

People should be expressing what they actually believe here. If you post some generated text, is it actually what you believe, or did you need to be told what you believe? Is it just supposed to be a substitute for providing actual evidence? Are we supposed to try to refute it, or just ignore it? Should we do your work for you and figure out whether it’s accurate, or where the AI got it from?

Not to mention it’s painfully easy to trick ChatGPT into saying whatever you want it to say.

Are you saying that all this AI stuff is just hype?

[checks my posts]
No, it does not appear that I am saying that.

The post does.

This is needlessly combative and immaterial. If you insist I’m saying something I’m explicitly not saying, further conversation with you won’t be productive.

ChatGPT is not a search engine, but it’s also not just a ‘stochastic parrot’ stringing words together plausibly. It’s an actual AI that forms concepts and associations in its ‘brain’ just like people do.

But it doesn’t work like people do. It can’t check its results as it goes, because it’s a single pass-through computation without recurrence. So if an error creeps in, it treats it like any other token and can suddenly hallucinate a whole story around the error.

There are techniques to minimize this when prompting, but it’s still a bit of an art.

Newer GPTs that use Q-Learning, or GPTs that use agents to check their own output, are going to be much more accurate, but still not perfect.

Think of a GPT like a brilliant friend who seems to have read everything and has lots of great insights, but who will give you an ‘answer’ whether they really know it or not, and who has no capacity for knowing whether what they are telling you is bullshit. Most of the time what they tell you is true, and interesting, but they lie often enough that you can never trust what they are telling you without checking first.

One of the best applications for a GPT is writing stuff that you already know. A classroom handout, maybe, or a summary of a meeting or a set of emails, or anything else that is obviously right or wrong but still tedious to write.

Here’s an example: When writing a paper, putting all your citations in APA format is a pain. ChatGPT can go through your paper and reformat all your cites properly, saving you time for other stuff. Or converting handwritten formulas into LaTeX. You’ll know immediately if GPT screwed up, but if not, it saved you a lot of tedium.
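For the curious, the citation-reformatting trick above is just careful prompting. Here’s a minimal sketch; the `build_apa_prompt` helper and the prompt wording are my own invention, not any official API, and the actual model call (e.g. via the OpenAI SDK) would go where indicated:

```python
def build_apa_prompt(citations: list[str]) -> list[dict]:
    """Build a chat-style message list asking an LLM to reformat citations as APA."""
    numbered = "\n".join(f"{i + 1}. {c}" for i, c in enumerate(citations))
    return [
        # The system message constrains the task so the model can't "help"
        # by inventing missing authors, dates, or titles.
        {"role": "system",
         "content": ("Reformat each citation below into APA style. "
                     "Return them in the same order, one per line. "
                     "Do not invent authors, dates, or titles.")},
        {"role": "user", "content": numbered},
    ]

messages = build_apa_prompt([
    "Smith, John. The Example Book. Example Press, 2019.",
])
# Hand `messages` to your chat-completion client of choice, then
# eyeball the result -- as noted above, you'll spot mistakes quickly.
print(messages[1]["content"])
```

The point is that the human stays in the loop: the model does the tedious reformatting, and you do the quick visual check.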

You are correct so I withdraw.

It’s even worse than “zero value”; it can present counterfactual information in a way that appears authoritative to someone who is not knowledgeable enough to vet it, and can be used to influence or intentionally mislead by malicious actors.

I desperately hope that you are wrong but I fear that you are likely correct in this prognostication. It is already outrageously easy to cut & paste inaccurate information or “retweet” conspiracy nonsense to social media without even doing a cursory sanity check, and using a chatbot to generate responses or ‘find’ (read: manufacture) citations is going to make it even worse. As these systems become even better at being ‘deception agents’ it is basically going to be impossible to vet them without going back to source information, where that is even possible.

That being said, there are people here and elsewhere who are absolutely infatuated with generative AI and the ‘magic’ it does in almost instantly translating their whims to a cromulent product with minimal effort, and having a rule saying that they can’t post some blurbage from ChatGPT or images generated by Stable Diffusion is just going to cause them to post it anyway, except without attribution. When I see a poster repeatedly and spastically posting output from a chatbot just because they don’t have the insight to produce something themselves or don’t want to put effort into formulating their own ideas, I’ve taken to just putting them on Ignore because they’re just not providing anything I want to spend my time reading. And I can see the day coming where that is more the rule than the exception.

Stranger

I appreciate that.

Overall, I confess I’m surprised at the broad agreement. I was expecting a lot more pushback, given how we’re basically an ornery bunch of cusses. Apparently most of us are ornery in the same way on this issue.

Google just introduced their AI, Gemini. According to their presentation (which is, admittedly, a big advertisement produced by them so grain of salt) it is better than GPT-4. Google claims (video linked to the correct spot) it is better than any other AI currently out there and, when tested against human experts in 50 different subject areas, it was a smidge better than they were too.

Does that mean it is better than pretty much anything you will find searching the internet? Not sure.

The demonstration in the video linked just above is amazing and a little scary (scary in how adept these are getting).

Take it for what it’s worth.

Jump to @5:00 in the video below to see a demonstration:

Yep, we’ve been talking about Gemini and the video in the ChatGPT 3.5 thread. It turns out that the video was edited to make it look real time, but the abilities are real.

It’s not a huge improvement over GPT-4v, but it has the advantage of having access to all of Google’s data. We can assume it will get better. It’s good to see a peer competitor.

X also released Grok today to its US Premium+ subscribers. It’s more of a ChatGPT3.5 level LLM, but it has one huge advantage: access to the twitter database, which it apparently ingests in real time. So you can ask it questions like, “was there any news about Ukraine attacks in the past hour?” Or “How many players in last night’s basketball games scored more than 10 points?” Or “Summarize what people on X have been saying about the press conference the President gave this morning.” Access to real time data and human conversation around it could be a game-changer for a bunch of use cases.

I don’t quite agree with it on the art level. Proper use of generative AI requires creativity.

Also, I think it can be funny to use it for a one off joke. The joke itself shouldn’t be created by ChatGPT, but it can make sense. It would be no different than going out and finding an image…

And I’d rather read ChatGPT than a lot of the one-liner dismissive comments that aren’t jokes. As long as it is labeled, I don’t have a problem with it. I can always skip the post if I’m uninterested. I wouldn’t want people just feeding questions into the bot, or even feeding the OP or anything. But I would treat that more on an individual basis.

There was a joke I wanted to make that I would need external help to make. But I’ve refrained because I would feel I ethically had to specify the part that came from ChatGPT.

Bunch of ornery luddites. These things can be useful.

At work, we often want to know what companies used to own, or be owned by, what other companies. We ask ChatGPT. It’s right about 2/3 of the time. In every case, it’s worth googling the names of possibly related companies it comes up with to see if they actually are related.

As mentioned above, it’s great for formatting citations. Ditto first drafts of various boilerplate texts. Even something like a thank-you note. You probably want to edit it a bit.

It’s best if you know something about the topic, and can work with it. It can be misleading if you use it “blind”.

I agree with that. “Don’t put AI content in non-AI threads” seems over-broad. I think the rules should be more like:

  • Don’t cite an AI as a trusted source
  • Don’t hijack threads with off-topic AI nonsense, but… don’t hijack threads at all.
  • Identify anything coming from an AI as AI-generated, and note which model was used to generate it.
  • Only use AI content if it adds something extra to the conversation. Using AI to substitute for thinking up your own comments is not allowed.
  • If you post AI generated ‘factual’ content, the onus is on YOU to make sure it’s factual. Don’t post any AI generated ‘facts’ before checking them yourself.
  • Respect the wishes of the thread starter. If they say, “No AI content allowed in this thread”, you will be kicked out if you do it anyway. I could see this being required for poetry, fiction writing and photography threads, to keep people from cheating.

The problem is that this is almost exclusively the way people use it; they basically post: “I typed ___ ____ ____ into ChatGPT and here is what it returned,” and my reflexive response is that I can type shit into a prompt, too, but it doesn’t reflect either what I might think or any kind of public or expert opinion on a topic, so why should I care? It’s just a decay in the marketplace of ideas and knowledge in favor of producing ‘content’ for the sake of taking up space. I’m trying to recall any time that I’ve seen a chatbot response that offered anything novel or interesting.

Stranger

I agree for the most part, I mean especially in FQ threads; it’s bad enough when someone just regurgitates a Google search result in haste to answer a question they don’t really know anything about (and thus aren’t able or motivated to check the search result for truth or relevance); ChatGPT responses are worse still than that, simply because they are often just wrong, but written in a way that tries to gaslight the reader into believing the wrong thing.

One place I think AI generated content is probably OK outside of explicitly AI themed threads might be something like when a thread has run to the point where it has decomposed into general chat and silliness. If someone says (I dunno) “Wow, that sounds almost like something Sherlock Holmes would say, if Sherlock Holmes was an amoeba”, I wouldn’t think it completely out of place for someone else to post an AI generated picture of Amoeba Sherlock Holmes. It might not add value in everyone’s estimation, but there are any number of threads where that’s already happening, with a lot of stuff being posted that probably not everyone thinks is really adding value.
Or in other words, Thanksgiving soda has its (limited) place.

Yeah, I guess I need to phrase that better. Maybe something like, “Only post AI content if it’s relevant to the thread, identified as AI content, and brings something unique to the conversation that couldn’t be sourced any other way.”

I dunno. That doesn’t seem quite like it, either. The idea is that some AI content may be useful, and we shouldn’t disallow that, but substituting it for your own thinking by simply asking it the question at hand and giving its response is not it.