Maybe something like this?
I’d amend it as follows: “If it’s a sentence or less, it can be in “” quotes rather than a quotebox, as long as it’s clearly indicated that it comes from another source.”
Your insults notwithstanding, I come here to read things written by people and have no interest in reading somebody’s interactions with chatgpt outside of threads explicitly dedicated to that purpose. I do think a proliferation of that kind of content is ultimately harmful to any online space and it’s pretty irritating to see discussions of how or whether to approach the issue met with denigrating variations of “luddites lol.”
Copy your text and highlight it but in the symbols above your typing box, instead of clicking the I symbol click the " symbol. It’s the same process but pressing a different button.
ETAOIN SHRDLU
Okay, that worked, thanks. I will try to get used to it, but I don’t think it is really a big deal.
The thing is, typing a > right before a paste job is really easy. It’s one character. It’s faster than italic code or quotation marks. It should be the standard for quoting things, not just because it’s harder for the reader to miss, but because it’s the simplest way for the poster.
But it doesn’t work for multi-paragraph inclusions. You have to preface the subsequent paras with the same “>” or just include it all in a single [quote]/[/quote] block.
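For illustration, assuming this board uses Discourse-style markup, the two multi-paragraph options would look something like this:

```text
> First quoted paragraph.
>
> Each following paragraph needs its own > prefix.

[quote]
First quoted paragraph.

Everything up to the closing tag is quoted, with no per-paragraph prefix needed.
[/quote]
```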
That sucks. There are other sites that handle that better, knowing that if you pasted in something after a > you probably meant for the whole thing to be quoted.
It doesn’t work even if you click the button in the markup editor. But if you are in the other mode, using the button first does put it all in the quote box.
Nor do any of the other methods I mentioned.
But again, it’s only one character at the beginning of each block of text, so still simpler and faster than the others.
I don’t get the animosity, as long as they attribute it correctly.
Being just irritated that they used AI IS Luddite territory. There were probably people who bitched much the same way about cars on roads with horses, televisions vs. radios, and pretty much anything disruptive.
This is far more about resistance to change than anything logical or sensible.
As long as AI is taken with a large grain of salt. More than Wiki, for sure. By no means is AI evil; it is just another tool, and one that isn’t always reliable.
It would seem to be unenforceable. Text is text.
Best we could do is a ‘best practice’ guideline, I’d think?
Not so hard.
Be even better inside a details thing:
ChatGPT says ...
Sample AI output goes here
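For reference, the markup for that (assuming Discourse-style [details] syntax) would be something like:

```text
[details="ChatGPT says ..."]
Sample AI output goes here
[/details]
```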
Nice, already suggested, but as it’s a pain to do on a phone, I don’t see us requiring it.
But all quotes, wiki, AI, news articles, etc., should be in quote tags, with an indication of what is being quoted.
On the OP question, of course sources must be attributed and direct quotes clearly marked as such, there’s no argument there.
But I would point out that this is a board mostly for engaging in fact-based discussions. This is not a poetry or literature competition where one might be accused of intellectual or creative plagiarism. And in the context of fact-based discussion, an AI like ChatGPT is simply an aggregator and interpreter of information that was all originally written by humans.
Argue if you want that a particular piece of AI-provided information is wrong, incomplete, or misleading. By all means insist that it must be appropriately attributed and properly quoted. And I support the hope that quoting AI all the time doesn’t get out of hand, and that posts containing information from an AI should generally be augmented by the poster’s own observations as well.
But this sort of blanket dismissal, refusing to look at any information just because it was sourced by an AI, is antiquated and short-sighted.
I’ve seen an “AI” that was trained by its user to become a young Earth Creationist that sorted all people groups by their genetic relationship to Noah and his three sons.
An “AI” could tell me the sky was blue and I’d feel a serious need to check.
That’s what I’m saying too! Nothing wrong with it, as long as it’s clearly attributed and we have an idea what it was asked to provide.
Ultimately it doesn’t seem much different to me than if someone quotes Steve Jobs on cancer treatment, or Trump on economic policy. It’s likely bullshit, and we should consider the source when we read it, but there’s no reason someone shouldn’t be allowed to quote it in earnest, so long as it’s properly attributed.
This is an important distinction.
This site has not historically moderated “wrong”. (With few exceptions, which are very specifically topical and categorized as “tired topics.”)
The critical point is that we the readers need to be able to consider the source, which means proper attribution. That includes AI-generated answers.
Proper quoting is important for separating the assertion from the poster: the poster is citing an external authority, and failing to quote clearly and transparently tends to obfuscate correct attribution.
So clearly quote and attribute all cited external content. There’s no need for any distinction among types of external content, as long as there’s good clarity of what’s being externally cited and where it came from.
Right. But in considering the source, which is important (I made that clear when I posted a quote from the NY Post, for instance), if your counterargument is that the cite is wrong (or just biased), coming up with a counter cite is the proper thing to do.
If my cite isn’t going to be accepted as reliable shouldn’t it be up to me to find one that is?
As with Google dumps and Wikipedia, LLMs are, perhaps, reasonable places to start looking for a cite or cites, but as of now they shouldn’t be accepted, or used, as a cite on their own.