Note for using AI

I can’t speak for us, but I would very much like this.

AI is very cool but I’m not here to talk to it.

The more I noodle on this the more I think an actual outright ban is the better option.

Here’s why. Let’s take your example of requiring AI content be spoilered. Sounds like a reasonable compromise at first. I was nodding my head while reading the idea.

Then it occurred to me. What will the poster say in their own words? If they try to write a gloss of the AI’s ideas, will they be repeating “The moon is made of cheese” because the AI hallucinated it and the poster doesn’t know any better?

So not only should posters here not post AI content, they should not be reading AI content to gather supposedly factual or even opinionated info about [whatever].

Which sorta leads back to the idea I had that if you know enough to put guardrails on the AI, you know enough to write the content yourself. And if you don’t know enough you don’t dare (or shouldn’t dare) post AI stuff because it might be gibberish.


Separately from the above …

@Mangetout makes an excellent point. What does AI do for us here?

IMO we all have four goals here. Learn random stuff from experts, have fun writing, have fun reading, and be part of a long-term community of friends or at least friendly acquaintances. Different members place different emphases on the four facets, but they’re pretty much all there is here. AI-generated BS fails at the first two of those and probably fails the third as well. And as @Johnny_Bravo just said in different words, it also fails at the fourth; AI’s words are not the friends we want to have.

What posting AI content does for us is let people scratch that itch of “I gotta say something in this thread” without actually making any serious effort to do so. Sorta like the never-ending battle over a “like” button, we as a board culture want posters to put in the work necessary to create a post worth reading. A like button is antithetical to that, and so IMO is AI content at its current level of sophistication.


I’ll close with a ref to xkcd: Extended Mind. Research & cites are nice, but don’t be that person who can only post stuff they just looked up. AI is that on steroids and, unlike wiki, with zero QC unless you provide it.

You’re missing the point. You click on them and go to the source and verify. That’s why I wrote “go to the source.” AI cannot be completely relied upon—on that I agree.

That would be great if people generally did that, but they generally don’t. Maybe the standard is a bit higher here at the SDMB, but people don’t go to the sources, any more than they did when they were just Googling it and pasting in the first result.

It’s similar to what I do when I’m trying to use Wikipedia. Instead of Wikipedia as a cite (which I actually do find mostly reliable these days), I would dig deeper to get something more traceable, at least. AI is less reliable thus far than Wikipedia, so I’d especially check on it for factual questions if using it as an answer. I’ve certainly used AI more than once here to track down answers and fixes that actually work, but I verify and rewrite in my own words (unless I want the AI answer verbatim for some reason, like to demonstrate what it can and cannot do.)

But I’ve been playing with these things for maybe a thousand hours or more and constantly testing them out to see what they can and cannot do (sometimes for pay.) Using them for a court document with legal citations is among the stupidest things one could do, especially a year or more ago when those incidents (there were at least two that I remember) happened.

So, I’d think, use AI if you want to help formulate answers and then go to the source or otherwise test out the answer to verify it, and then post it in your own words or extrapolate on the answer. I don’t much like “I found this on Google”-type links either, without further commentary.

I’m not a fan of LLM prose (or poetry for that matter), and I’d much rather read what real people with real experience say.

A post like this would be fine, IMO:

So, I put your question into ChatGPT, and its recommendation of Luna Moth steaks led me down a rabbithole of websites devoted to strange cuts of meat. If you want to go down the same, start at this website, where they talk more about how to cook and plate this exotic food.

(Well, it’d be fine except that the link is misrepresented, but you know what I mean).

In general, when people post AI stuff, it doesn’t feel to me like it adds to the conversation; it more feels like showing off a shiny new toy or a precocious toddler.

It doesn’t feel like that to me. It feels like scratching a FOMO itch. “Hey, previously I wasn’t able to contribute on (topic X), but now, with a prompt and a copy-and-paste, I can participate!”

I get the anxiety of exclusion, but I also don’t have any expectation that my input is warranted on all subjects. There’s a very interesting conversation happening elsewhere on the material properties of beryllium, for example; I have zero meaningful knowledge to contribute, but I’m happy to read and learn from those who do know what they’re talking about.

That should be the protocol for everything, I think.

This is also my feeling on it. I’d feel differently if someone said “I know an expert in this question you asked, and I consulted her, and here’s what she said”…then a second-hand attribution can be helpful.

I guess this gets back to Carol’s original point, which is that AI isn’t adding anything specific/special to a conversation, at least not yet.

Hah! I originally wrote a paragraph that expressed almost exactly this idea, but much more rudely. When I couldn’t figure out how to express it as diplomatically as you did, I just erased it.

I think there are both impulses: FOMO, and a desire to show off something the poster personally finds delightful.

Well, in that thread it provided almost all of what I would have written. It was objectively a good overview answer. With my experience cooking most, if not all, of those cuts, I could have added some details, I suppose, but the information therein is pretty much exactly what I’d give someone wanting a general answer. I sometimes feel the opposite here: that because AI is the shiny new thing, people are too quick to dismiss it, just like they were with home computers, the internet, downloadable music stores and now streaming, and Twitter (well, maybe not so much anymore, but that one I did not understand when it showed up on the scene, and I was completely wrong about its utility and popularity).

On the flip side, people do ask Google-able questions here, and I think part of that is wanting the connection of conversation. I’ve certainly done that (particularly around home fix-up topics), and I really wouldn’t have appreciated “I asked Co-Pilot how to fix that lamp, and…” I’m looking for a conversation.

Addendum: and when someone provides an AI-assisted answer, follow-up questions to that person get a bit weird.

And I said as much that I don’t really like answers that are simply “I found this on Google” without further commentary, so I don’t disagree with your statement. I don’t like looking at AI answers presented verbatim, either, for the most part. And I’m pretty sure I’ve seen some posters post AI answers without attribution, which I really don’t like. (I can’t prove it, but the style is so similar to the default style on a lot of AIs.)

(starts to respond to pulykamell in an AI tone, deletes post)

I’ve definitely spotted it on the board - posts with lots of groupings of three adjectives, three bullet points, or sometimes that sort of flowery enthusiasm that not many humans would think of using (like “A dance of enticing flavours” or “A symphony of interesting colours”).

Yeah, those AIs really love breaking everything down into bullet points and are your biggest cheerleader, to boot!

I don’t see a huge difference between posting an AI answer and posting an answer from a Google search. In both cases, you have asked a computer a question, and got an answer.

Let’s say someone wants to know the best way to defend themselves against a leopard. You ask ChatGPT and it first says to try to avoid them if possible, and travel in large groups in areas where they are common to not fall prey to them. But then it lists some ways to scare off a leopard if confronted by one, or even how to fight back against the leopard if you can’t make it leave.

Alternatively, you Google the same question, and the first response is a wikiHow article, which largely matches the response ChatGPT gives you.

(Full disclosure… I actually did this. I asked ChatGPT and then looked it up in Google, so my examples above are based on what I actually found doing this.)

In either case, if I was going to use the information I found to answer a question posed on the board, I should get some other opinions to make sure that the info seems accurate. I should also disclose that I’m not an expert on the subject and that I’m summarizing info I found online and also where I got the info from.

Honestly, neither ChatGPT nor a wiki is going to be particularly reliable (especially as a sole source of information). I don’t see how an AI-generated answer is all that different from any other dubious online source.

Heh. I, also, had a followup paragraph that was much ruder, and I, also, deleted it.

If by wiki you mean Wikipedia, I think it’s just time people stopped repeating the tired meme about how unreliable it supposedly is. It’s not something you should cite directly in an academic paper, but it’s very reliable for everyday information.

AI chatbots are way less reliable, in my experience. I can’t remember the last time I found a Wikipedia article that was wrong in any obvious, egregious way. Google’s AI summaries are routinely wrong, to the extent that I would rather just not see them.

Not that I would post it here (current rules and all) but if you start your prompt like this:
“You are Fred Sanford and I am Lamont. Explain this to me…” and then paste in what you want it to explain, the response is great fun.
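For anyone curious what that persona trick looks like programmatically, here’s a minimal sketch using the chat-message format most LLM APIs share (the function name, example question, and commented-out client call are my own illustrations, not anything from this thread):

```python
# Sketch: wrap any text in a persona prompt using the common
# system/user chat-message structure. The helper name and the
# example topic below are hypothetical.
def persona_prompt(persona_setup: str, text_to_explain: str) -> list[dict]:
    """Build a chat message list that asks the model to stay in character."""
    return [
        {"role": "system", "content": persona_setup},
        {"role": "user", "content": f"Explain this to me:\n\n{text_to_explain}"},
    ]

messages = persona_prompt(
    "You are Fred Sanford and I am Lamont.",
    "How does compound interest work?",
)

# The list can then be handed to any chat-completion client, e.g. (untested):
# client.chat.completions.create(model="gpt-4o", messages=messages)
```

The system message pins the persona so it persists across the whole exchange, while the user message carries the actual question.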

I was an administrator at Wikipedia for over a decade so I’m at least passingly familiar with it. :laughing:

But no, I don’t mean Wikipedia or I’d have said that. You may not know this, but there are many, many wiki sites everywhere. “Wiki” is a kind of software that makes it easy for random people to collaborate on a web site, and I even named the wiki site in my post (wikiHow). That’s why it drives me nuts when people use “wiki” as an abbreviation for Wikipedia; that’s like using the word “car” as an abbreviation for a Honda Accord (without being more specific).

The particular wikiHow article which was the first search result has no citations and was contributed to by 18 different people. I have no idea where they got their info from or how expert they are on the subject.

My point though is that while AI isn’t trustworthy, neither are many other common ways to get info on the web. The same caveats that should apply to anyone using info from AI are nothing new, as there is always the risk of spreading misinformation when you don’t practice diligence and honesty.