Is quoting ChatGPT a copyright violation?

There are occasionally posts that quote ChatGPT at length. Is this a copyright violation that should be noted by our local moderators? Or maybe trimmed down to a “fair use” length?

Nope. I’m pretty sure that so far courts have ruled that generated content from AIs cannot be copywritten.

That means an AI could create the best story or picture ever, and you still couldn’t copyright it.

FWIW, although I don’t know that there should be a rule against it, I am decidedly not a fan of ChatGPT answers being brought up in contexts not specifically about AI, and I would love it if people did that less.

Yeah, I’ve seen that a couple of times here. For factual stuff it definitely should not be relied upon, as ChatGPT is questionable when it comes to facts. For screwing-around creative stuff (like “write a poem”), it’s okay, but a bit hackneyed. I think ChatGPT can be a decent springboard for exploring factual answers, but the output needs to be vetted and edited by a human.

I’m an editor in several subjects on Stack Exchange, and there has been a huge drop-off in submitted content (at least in my areas) since ChatGPT came out. At first, some users tried to answer questions by submitting them to ChatGPT, but that was quickly banned. Even so, the volume of regular questions and answers seems to have dropped.

Can anyone prove that it was written by AI and not by you?

Nitpick/pet peeve: copyrighted.

However, some editors don’t approve of using the word as a verb, preferring a circumlocution like “protected by copyright.” I’m not quite that pedantic.

I just asked a friend who runs an AI lab if an image generator could recognize its own work, and he said no. Didn’t ask about text though.

There are AIs that are trained to recognize AI-generated text, but I think they’re distinct from the AIs that actually generate it. Probably a similar substrate, but they’re trained differently (and the training is the most important part).
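
Something like this toy sketch shows the shape of it (assuming scikit-learn, with made-up example snippets; real detectors are much bigger and still unreliable): a separate classifier is fit on labeled human vs. AI text, and the generator is nowhere in the loop.

```python
# A separate detector model, distinct from any text generator: it is just a
# classifier trained on labeled examples of human-written vs. AI-written text.
# The training data below is invented purely for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

human_texts = [
    "dashed this off between meetings, sorry for the typos",
    "my grandmother never measured anything in that recipe",
]
ai_texts = [
    "As an AI language model, I can provide a comprehensive overview.",
    "In conclusion, it is important to consider multiple perspectives.",
]

texts = human_texts + ai_texts
labels = [0] * len(human_texts) + [1] * len(ai_texts)  # 0 = human, 1 = AI

detector = make_pipeline(TfidfVectorizer(), LogisticRegression())
detector.fit(texts, labels)

# Classify a new snippet; with a toy training set like this, don't trust the answer.
print(detector.predict(["It is important to note the following points."]))
```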

I’m pretty sure that the problem of “detecting” the output of an AI – whether it’s text, images, or something else – is impossible in the general case. The only way the question is even meaningful is when it’s limited to detection of a specific AI. ChatGPT 3.5, for example, has a particular style that is recognizable in its longer responses. But in the general case, there’s no such thing as a reliable “AI marker”. It’s analogous to how we can often identify a paragraph or two as the work of a particular author, but there’s no meaningful sense in which we can judge that something was – or was not – “written by a human”.

I know AIs are “trained” by exposing them to words and images. The early ones (i.e. so last year) were trained on words that were at least two years old.

Has any progress been made on real-time training so that at least answers can be kept up to date?

Only humans can hold a copyright. Naruto v. Slater.

That is the idea behind generative adversarial network learning: the generated outputs should not be distinguishable from the reference outputs.
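
In code, one training step looks roughly like this toy sketch (assuming PyTorch, with a made-up 2-D “real” distribution): the discriminator learns to separate reference samples from generated ones, while the generator is updated to fool it.

```python
# Minimal GAN training loop sketch: discriminator vs. generator on toy 2-D data.
import torch
import torch.nn as nn

latent_dim, data_dim, batch = 8, 2, 64

generator = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
discriminator = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(batch, data_dim) * 0.5 + 2.0   # "reference" samples
    fake = generator(torch.randn(batch, latent_dim))  # generated samples

    # Discriminator: push real samples toward label 1, generated ones toward label 0.
    d_loss = bce(discriminator(real), torch.ones(batch, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(batch, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator: try to make the discriminator call its output "real".
    g_loss = bce(discriminator(fake), torch.ones(batch, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```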