In this thread, I asked for ideas about how to cook the gajillion different cuts of beef I have acquired. @Monty helped by generating a response with their “new toy,” an AI assistant.
I greatly appreciated the thorough answer, regardless of which particular AI was involved. Though I assumed from what they wrote that Monty had access to a “professional” AI assistant, meaning a service they paid for, not the free crap we can all look up if we want.
Anyway, Monty got noted, and was gracious about it. My question is: do we really want to have a blanket rule against using AI to answer questions under every single circumstance?
I don’t much care one way or the other, though it seems a little silly to completely ban use of AI, as long as people are up front about it and using it expertly (by which I mean both figuring out the best prompts - as with Google, not everyone has the same skill level at that - and utilizing a paid AI system).
No, I don’t think you understand my question. I am asking whether the only thing that exists is free-to-everyone, low-quality AI, or whether there is professional-quality AI that people pay for - AI that is presumably upgraded regularly, and that can be used to produce better results by people who know how to use it.
If the answer is “yes,” the better stuff exists, then it seems to me that not letting people use it is like saying it’s against the rules to use the library to find information. Because you can find a lot of crap at the library too, and in theory we all have access to a library, but if I’m a skilled researcher living right next door to the Library of Congress, I’d be in a much better position to assist a Doper with a history question than someone whose idea of getting the answer is to grab a historical romance novel from the local drugstore.
ETA: A couple more posts snuck in ahead of me while I was writing, so this is sorta out of phase with the state of the convo just above me…
What I meant was that if a potential poster has the skill & knowledge to proofread the AI output rather than posting it blindly, they have the skill & knowledge to write a better response themselves. So they should do so.
Those who don’t have the skill or knowledge to proofread the AI output have no business posting it. Because whether free or paid-for, here in early 2025 it still emphatically needs human QC before use.
IOW … Either way, raw AI output is not helpful.
I will admit of an intermediate case I ignored in my cited post. Call it “the excluded middle”. To wit:
A would-be poster has the skill & knowledge to proofread the AI, but does not have the give-a-damns necessary to write a dissertation for you. So they skillfully ask an AI to do so, give the response a quick scan for gross hallucinations, and post it with AI attribution.
At that point it’s (probably) win-win for everyone. You get a useful post you would otherwise not have gotten, the responding poster gets that warm feeling of helping someone without having to spend great effort doing so, and the rest of us get curated AI results to learn from.
So I’d now propose the answer for SDMB is all AI content must be attributed, and the poster is strictly responsible for proofreading and filtering out any hallucinations. No hiding behind “I didn’t say the Moon is made of cheese; the silly AI said that, and I just C&Ped it here. So totally not my fault.” BS on that; it’s under your username so you own it.
I don’t have a strong opinion on this either way, but did want to share an observation. Separate from the correctness issue (hallucinations), it seems like most AI will default to a certain tone and voice (absent a prompt to the contrary). Places that allow unedited AI copy, even if factual, can start to sound pretty, well, robotic.
Why do you say that, Dave? I have the greatest enthusiasm for the mission of spewing half-formed non-sequiturs into the vast void of human-centric cyberspace. Are you perhaps a bit stressed since the recent unpleasantness? May I suggest a stress pill and a nice nap?
If we also allow them to generate their own OPs, after a while they’ll be having conversations with each other, and instead of the board dying off it will be electronically taxidermied.
I didn’t quite say “the same answers,” but they should give you largely similar answers, as this is a pretty straightforward question that doesn’t require reasoning or anything. The AIs now tend to be pretty reasonable at this, and the error rate is much lower than it was even a year ago. As I’ve said in other threads, you can even ask ChatGPT for citations or links if you want to go to a source. That, it’s pretty good at now, even at the free level.
It’s a simple question with text output. The paid versions will produce a 30-minute cooking show with recipes incorporating all gajillion cuts of meat.
Well, my particular question in Cafe Society was just the situation that arose and prompted this thread. But I am interested in the reasoning behind a “no AI at all under any circumstances” policy - not whether someone can look up what to do with chuck steak.
I guess it comes down to whether the analogy I postulated - a poster who is a professional researcher and lives next to the Library of Congress v. someone who runs to the drugstore and buys a paperback - has any validity with respect to AI. Perhaps it has none.
LLM-generated content is unreliable bullshit and I would prefer if it were banned entirely from discussions here. However, I understand that I am fighting a rising tide with my fists.
I would be willing to tolerate a solution like this if the AI-generated content were also obfuscated under one of those “hide” sliders, using the (details)(/details) tag, so people who find that stuff acceptable can easily access it if they want but it’s otherwise ignorable.
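For reference, a minimal sketch of what that might look like, assuming the board’s hide slider uses the standard Discourse-style details tag (the summary text here is just a placeholder I made up):

[details="AI-generated answer - click to expand"]
The copied-and-pasted AI text would go in here, collapsed by default.
[/details]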
It’s not as though questions routinely go unanswered here by actual, flesh-and-blood human people with expertise or at least interest in the subject matter. I think that’s an important factor to consider here: what useful purpose does it serve to just C&P something out of an AI chatbot?
There is generally no awkward, answerless silence to be filled; there is no shortage of expertise; there is no general lack of care or concern on this board; there is no requirement for padding a thread, especially a factual one, with a load of text nobody wrote.
IMO, there’s a lie being sold - by the people who want you to buy it - that this is something you definitely need to somehow find space for in your life.
Edit: and I think we’ve all to some extent accepted that lie - I mean, look at us using all of these words to try to justify what could just be ‘no thank you’.
Hi! Thanks for “@ing” me in this. The new toy I got is a newly released Chinese AI assistant. I’m using the free version because, let’s be clear, I’m a miser, a skinflint. I’m cheap. \(^o^)/
I figured your question would be a good one to test the thing. And I’m glad you liked the answer. I did learn something myself from it. I was a little worried about the answer before the asst. gave it to me. I discovered years ago that cuts of beef in Asia are different from those in the Americas and Europe, so it was a toss-up which continent’s cuts it would go with.
I didn’t mind getting the note. It seems fair. Honestly, I usually have to spend extra time searching for stuff on the Internet compared to other people, and quite often ask someone on this site with better “google-fu” than I have to find the information. If someone were to respond to me with that animated “let me google that for you,” I really wouldn’t mind. I’d even laugh at it while it’s giving me my answer. I can see, though, how someone might mind it if they didn’t ask for such help.
Maybe AI assistants should be seen as “googling on steroids” and then fall into whatever rules the board already has or will have in the future on googling.