Request: don't put ChatGPT or other AI-generated content in non-AI threads

I feel like this is an ‘asked and answered’ situation.

Chronos also posted in that thread specifically instructing you and Sam to knock it off with the chatbots, so answered twice.

Seems arbitrary. The chatbots provided the correct answer (it was my fault I posted the wrong answer, which I admitted to).

Why can’t they help us with math? If I say 2+2=4 that is ok but if I say I asked a chatbot what 2+2 equals and it answers “4” that is not ok?

Chronos had the right of it in that thread. Anybody can go to ChatGPT and generate as much bullshit as they want any time they want. These forums aren’t a repository for copy/pasted material from a robot.

If you’re unable or unwilling to do and verify the math yourself, just let somebody who is qualified do it instead. What value are you adding to the thread by copy/pasting something that may or may not be right?

^^^ This all day.

Shouldn’t even be controversial.

  1. The SDMB is a casual message board. It is for the public to participate in. It has high standards, but it is not a physics forum or a place to publish formal research. Your way simply gatekeeps certain threads so that only those with explicit expertise can answer.

  2. I seriously doubt Chronos or any other poster here wants to be used as a calculator when anyone needs some math done.

  3. The chatbot’s “bullshit” was actually correct.

  4. It is trivial to sidestep this “rule.” Just say you did the math yourself. If you get it wrong then you made a mistake…happens. I think it would be better to let people be honest about it.

  5. If you use a calculator to do math aren’t you copy/pasting results from a robot? You did not use a pen and paper…you used a machine to do your math.

No, it didn’t. Neither the first prompt you posted nor the supposedly corrected second prompt provided the correct answer, and furthermore the ‘reasoning’ your chatbot provided indicated that it didn’t actually comprehend anything and was just producing grammatically and syntactically correct gibberish. Even the GPT-4 response that @Sam_Stone later provided, which at least got the basic logic of the problem correct, completely flubbed the basic procedure of rounding, indicating that it still didn’t ‘understand’ what it was doing and was just following a stochastic process of answering this question in a form similar to the text it was trained on.

The post you responded to was in Factual Questions. If you don’t want to, or aren’t able to, do basic fact-checking on your own response sufficient to have confidence that the answer is factually correct, why the fuck are you responding? What value do you think you are bringing by posting a blurb generated by a chatbot that you don’t even have the knowledge to fact-check? And this is the fundamental problem with using such completely unvalidated AI: it produces a confidently wrong result, you cut & paste the result in as ‘your’ answer (in essence, plagiarizing work from a source that nobody can even check), and since the nonsense ‘logic’ the chatbot used to produce the result isn’t even apparent, other people start propagating the error without recognizing that it comes from an untrustworthy agent. It’s a completely pointless exercise in anti-thought, a diminishment of critical thinking, a reflexive willingness to let a ‘bot do the ‘hard work’ of ‘understanding’ a problem that you don’t want to crack a book or crank through a simple one-line formula to work through.

Stranger

If “please don’t post shit you don’t understand” is gatekeeping, then I guess I’m pro-gatekeeping.

I seriously doubt Chronos or any other poster with advanced mathematical training wants to spend their time breaking down why ChatGPT keeps getting math problems incorrect.

Stranger addressed this in the post above this one.

Sure, if one’s compulsion to ‘participate’ in a thread is so overwhelming that they feel the need to pretend they know how to math. But, and this is weird, people usually want to talk about things that get posted. Pretending you can do the math only works right up until somebody asks you to account for poor accounting.

Yes, of course. That’s why certain types of calculators aren’t allowed in tests about certain types of processes. There’s a gulf between using a calculator to help you do something you already know and using one to spit out information that you don’t understand.

There have been multiple occasions where I’ve seen a question here about advanced math or science and have gone to ChatGPT to see what it has to say. It’s interesting, and usually a good start to get me hunting down more corroborating information on the internet for my own edification. But I don’t post it here, because I have no way of knowing how accurate or inaccurate it is, and posting its information adds exactly nothing to the thread.

One of the lovely things about this place is knowing that eventually somebody with expertise will be along, and there are enough topics I’m incorrectly confident about already without going and flapping my gums about the ones I know I’m useless on.

A quick check shows you posting in FQ about Hyperloop, textbooks, “lizard brain,” billionaire income tax, GPS satellites, cloverleaves for highways (and nuclear targets), measuring level over long distances, what to do with 70,000 tons of iridium, re-entry of spacecraft using GPS, insurance companies trying to find someone, atoms in your body, life insurance and how long you do CPR.

That was just this month.

Are you an expert in all those things?

I know enough about each of those topics to have at least a marginally informed opinion instead of entering a prompt into Bing and cut & pasting the output into a response on a message board. I also know enough to look at the output you got from Bing and immediately recognize it as nonsense even without doing the calculation.

Stranger

This is getting kinda personal, and it’s a holiday weekend. So I’m temporarily closing this thread until an FQ mod can take a look.

As @puzzlegal said, this is getting too personal. If you want to take a pot shot at someone for what or how they post, you know where the Pit is. In ATMB, you are expected to treat others with respect, regardless of whether or not you agree with them. Focus on the actual issues being discussed for moderation, not on specific posts or posters.

This thread is re-opened. If folks get snippy again, it will be permanently closed.

In FQ, whether or not a chatbot happens to get its “bullshit” correct is irrelevant. The output of a chatbot is not reliable, and therefore any chatbot response in FQ is basically the equivalent of “here is something I heard from someone and I have no idea if it’s true or not and I don’t remember where I heard it so there’s no way to verify it from my source”. In most cases, the output of a chatbot is not going to be an appropriate response in FQ.

On the other hand, to use the example upthread, if you want to know the time dilation for a particular trip to Alpha Centauri or whatever, if you clearly indicate that something is the output of a chatbot and you have no idea if it’s correct or not and you want people who actually do understand it to tell you if the chatbot was correct, and if not then what is the correct answer, that seems like a perfectly fine question for FQ. In this case, you’re not posting the answer as a factual response, you’re posting it as something you heard and want to know what the actual facts are.

So there are cases where a chatbot might steer the discussion towards a factual answer. But don’t ever post a chatbot’s response as a factual answer in FQ, because it’s not.

Also relevant to the FQ part of this discussion is this rule.

From the FQ FAQ:

If I had never mentioned where I got that math answer this message board would have chalked it up to a poster making a mistake (and, indeed, it was me who made the mistake). I was actually trying to be diligent in getting a correct answer.

Because I used a chatbot it is a sin?

Does that seem the best way this should work? I did not ask it to write my post. I posited the question and I sought the answer and I wrote the response. Would it be different if I asked my friend in the room to do the math for me?

I’ll cop to posting responses about subjects I am not expert on. Sometimes I read the OP knowing something and the question interested me enough that I did some quick research to expand and check my understanding. If that quick search finds something that I think might be of interest to others who also knew a little then I might share what I learned. With the citation(s) that I feel are reliable sources, or if not reliable share what my doubts are. I have lots of doubts. And in the thread we can discuss those doubts. Should we believe that answer? Why or why not?

A bit different than ChatGPTing it for you. I like to think that one adds some value to a fun discussion among intellectually curious people. I am of the opinion the other does not, even when the answer provided is factually correct.

Honestly, the era of generative AI makes this process more vital. How do we know what to believe anymore? Large segments of the public think historical atrocities are myths, and convincingly presented misinformation abounds. That shared critical evaluation process is more important now than perhaps ever.

IMHO.

A thread for you…

I disagree with the thread title. Stochastic parrots are here to stay, and they will only get cleverer and more expressive. But we do need to slot them into their proper place, so props to the OP for starting the thread.

Let’s compare AI generated content with Wikipedia, another source that isn’t 100% reliable. I say that when reasonable doubt collides with Wikipedia, reasonable doubt wins. In my experience, Wikipedia’s accuracy is astonishingly high given its sourcing, but it still falls short of being reliable in cases where errors have consequences. It provides a great starting point, though, since you can trace the citations with varying amounts of effort.

ChatGPT is far less reliable than Wikipedia, the AP wires, or any actual citation. It’s not a cite insofar as it’s really not helpful evidence (even if accurate 11 times out of 12), and it shouldn’t be treated as a cite.

I’ve used AI on this message board a couple of times. Here I use AI to generate a list of possibly existing products, with the hopes that some of the names might jog people’s memories:

This commercial website assures me there are lots of free tools online… ChatGPT mentions MyRegistry, Giftster, Wishlistr, Wishpot, and Wishlistify. Maybe some of them exist. Does anyone have recent experience with this? I couldn’t find reviews from Wirecutter or PC Magazine. Is there a non-Amazon wishlist site out there somewhere? - #6 by Measure_for_Measure

More recently I posed a question and discussed my unsuccessful attempts to answer it (I eventually found a decent cite though):

In both cases I reported the results without using direct quotations of the AI results, which tend to mislead. That approach might be something to consider.

Or not. Criticisms of my judgment in those threads are welcome.

Oftentimes, you can find an online calculator for interesting physics questions that other people have probably also wondered about. This one https://spacetravel.simhub.online/ answers the time dilation to Alpha Centauri question (and many other related questions), without engaging a system designed to sound good but not to be right.
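For what it’s worth, the constant-speed version of that time-dilation calculation is simple enough to check by hand, which is exactly the kind of verification being argued for upthread. Here’s a minimal Python sketch; the function name is my own, the 4.37 light-year distance to Alpha Centauri is the commonly cited figure, and it deliberately ignores acceleration and deceleration phases:

```python
import math

def trip_times(distance_ly: float, speed_c: float) -> tuple[float, float]:
    """Return (earth_years, ship_years) for a constant-speed trip.

    distance_ly: distance in light-years.
    speed_c: cruise speed as a fraction of the speed of light (0 < v < 1).
    Ignores acceleration phases: proper time = coordinate time / gamma.
    """
    earth_years = distance_ly / speed_c           # t = d / v in the Earth frame
    gamma = 1.0 / math.sqrt(1.0 - speed_c ** 2)   # Lorentz factor
    ship_years = earth_years / gamma              # time dilation on the ship
    return earth_years, ship_years

# Alpha Centauri is roughly 4.37 light-years away; try a 0.9c cruise.
earth, ship = trip_times(4.37, 0.9)
print(f"Earth frame: {earth:.2f} yr, ship frame: {ship:.2f} yr")
```

Anyone can run these three lines of arithmetic and compare against the calculator’s output, which is rather the point: the formula is checkable in a way a chatbot’s prose is not.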