I think I screwed up by using ChatGPT too vigorously at my freelance work

Did they agree to let ChatGPT have access to their sermons?

…I mean, that isn’t a problem. Not in terms of “the client will be expecting a refund” or “the client is planning a lawsuit” kinda problem. There isn’t a contract in place and if the client wants to renegotiate, then it is what it is.

It’s a problem if the transcripts aren’t accurate, or if there is any indication that the OP may have (accidentally or otherwise) misrepresented what they would be delivering. I think that’s the key issue. The OP needs to front up.

They agreed to let anyone access their sermons when they put them on public YouTube. This should be obvious.

ETA: Ninja-ed by @Sam_Stone

They’re on YouTube. The entire internet has access to them. And probably has had for years.

The ship has sailed on the idea that content providers should receive any compensation for having provided training material to the AIs.

Further, the various current publicly accessible AIs do NOT learn from current input. If they did, 4Chan and NewsMax would turn them into raging hate-spewing RW nutbags in a week. Instead they take their existing model, whose training data was at least mostly human cleaned up, then generate your answer from that, without absorbing your input for further learning.

That distinction may not always be true in the future, but it is now. So the OP feeding those sermons, whether as text or as videos, into a public AI is doing exactly nothing towards disclosing the contents to anyone/anything except the OP himself as it’s reflected back to him, with edits by the AI.

…this isn’t correct. Not by a long-shot.

It isn’t just about compensation. It’s about permission. There are a number of lawsuits currently active, and it will take years before any of this is settled.

ETA: and it isn’t “content providers.” It’s copyright holders and the owners of the intellectual property.

Tell them God helped you do it and that it was a miracle.

It’s not obvious.

The time to decide you want to restrict who reads your material is before you put it on the open internet for everyone to read.

I would think that at this point everyone understands that putting a video on Youtube gives the public permission to view it unless you have explicitly restricted it. Bots read those pages constantly.

Restrictions on webcrawlers (“robots.txt”) provide ample technical precedent for “humans are free to partake, machines are not allowed”.
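As an illustrative sketch of that precedent (a hypothetical robots.txt; `GPTBot` is OpenAI’s published crawler token, and other vendors use their own names), a site can stay open to human readers and ordinary search engines while opting out of AI-training crawlers:

```
# Hypothetical robots.txt: open to everyone by default,
# but opt out of known AI-training crawlers.
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: *
Allow: /
```

Compliance with robots.txt is voluntary, of course, so this expresses permission rather than enforcing it.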

I think the best advice so far has been this:

Tell the truth, and nothing but the truth. But don’t answer more than you need to. You need to alleviate their concerns, whatever those are. Whether it’s child labor or that you are doing a bad job.

Also

You should know if there ARE any problems with the work you are producing, before they tell you, if at all possible.

Don’t lie, don’t make stuff up. You haven’t done anything wrong, unless you aren’t doing an adequate job. No reason to start doing stuff wrong now.

Chingon:

It’s not obvious.

I’d say it is.

And even if they hadn’t already put it out to the world on YouTube… what would be the point of restricting it?

I mean, that’s the dream of any pastor; to have the public reading/listening to your sermons. Can you imagine any clergy anywhere saying “You, with the man-bun! You do not have permission to hear this sermon!”

It’s not about hearing it.

I was at a professional conference recently, and one of the sessions I attended was a review of various AI tools. The speaker covered dozens of tools, not just ChatGPT.

One fellow in the audience just incessantly interrupted with questions and comments and protests about data privacy:

  • “How can I trust them?”
  • “They’re going to steal my data and my clients’ data!”
  • “What’s to stop the machine from telling my competitors what it learned from me?”
  • “I won’t let these tools anywhere near my work, and I can’t believe any of you would, either.”
  • “Nothing told to the machine will be proprietary anymore!”

The audience member’s comments were not profound or insightful. They were not helpful. They were annoying and disruptive.

Now, I’m no eager defender of tech companies. Some of them are bad actors. And even good actors suffer breaches. So, sure, privacy and IP can be compromised.

But…

Can’t these protests be applied to pretty much all cloud software? I mean, does this fellow from the conference not use cloud-based email? Search for things on Google? (OK, I suppose he might use DuckDuckGo.) Write documents in Office 365 or Google Docs? Store files in OneDrive or Dropbox or Google Drive? And so on and so on.

If someone is worried about all cloud software and online tools, I’d at least give points for consistency. But it’s bizarre when these protests are aimed at just AI tools. And especially when applied to sermons. And especially especially sermons posted to YouTube.

I had help.

…absolutely not.

For example: I sign up to use Dropbox as a service. I agree to a set of terms and conditions, in turn they commit to a privacy policy and a set of terms of service.

I haven’t given permission to ChatGPT to use my intellectual property as training data for their “artificial intelligence.” And the AI goes beyond just “training.” Over in the WGA threads there are obvious examples of plagiarism, both with writing software and image software.

So these aren’t the same thing. Dropbox isn’t using my photos or my writing to create new intellectual property that sometimes is indistinguishable from the original work.

It isn’t bizarre at all. They are aimed at AI tools because the creators of these tools never asked permission before adding the stuff we’ve created into their dataset. That isn’t the case for most everything else you’ve described.

You don’t think there’s a distinction between people reading your content and people scraping it off the internet and reselling it for a profit?

In this case, the company is doing transcripts of their own material. No one is stealing anything.

And it isn’t the use case here. ChatGPT is not being trained on those sermons. They are being loaded into context memory for translation, and that memory is destroyed at the end of the session. ChatGPT learns nothing from it and has no memory of it after the session ends.
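A minimal sketch of what that claim amounts to (using a stand-in function, not the real OpenAI client — the class and function names here are invented for illustration): under a stateless-per-request design, the hosted model sees only what the client sends in that one request, so the client must resend the whole conversation each turn, and a brand-new session starts with nothing carried over.

```python
def fake_model(messages):
    # Stand-in for the hosted model: it answers based solely on the
    # messages passed in this single request, and retains nothing.
    return f"(reply based on {len(messages)} message(s))"

class ChatSession:
    """A client-side session: the conversation history lives here,
    in the client, not on the server."""

    def __init__(self):
        self.history = []  # exists only in THIS session object

    def send(self, text):
        self.history.append({"role": "user", "content": text})
        reply = fake_model(self.history)  # full history resent every turn
        self.history.append({"role": "assistant", "content": reply})
        return reply

a = ChatSession()
a.send("Here is my sermon transcript...")

b = ChatSession()          # a brand-new session
print(len(b.history))      # 0 -- nothing from session `a` carried over
```

Whether the real hosted services actually discard or retain inputs for training is a matter of their policies, which is exactly what the thread is disputing; the sketch only shows the architecture being claimed.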

…yes it is.

This is AI marketing speak. Because I’ve already shown you this:

And this:

https://ucff939249d082acb91f70526a82.previews.dropboxusercontent.com/p/thumb/AB79h5xeRuP5XxDW4hCF8kJtcqZ7gwKlX9uQIyHL84-6jv2wNLNpHGH9QCeaFm5r_eUM7zI-YhvbjBnlNUP0F1TsW06iXTVKLdTMsVAYg5laPJBY_xMIPMcLUianwo1sn4fSRqlTSom9qq6mT-uXK9qCLOMDC8pdYAvTD32icBirMNlDR4cmHZ_outCcUO12CO4qQX9c_Sn1XmCaiPhSF_qUagZUDqkNpgimuOwL-WOwQ7b5VDsi0h0rmpMPjPCWS0InhNmmSdQsiFAqqLRe7EiWuhrjq-D8dPHoW2s6vprI5oDYheIWr76vozIdD-PmCu3H7dc0Y1tvG5JE2nR14ZYhpE9DiajqridcJ-FdBUMfDYJYE-DMHZqvuJ48qvL9JK6KwIiaGZXS2tFNlG_5EvO6/p.png

And if " that memory was actually being destroyed at the end of the session" then it would be impossible to recreate, word-for-word, complete paragraphs from Harry Potter. And yet here is an AI Writer doing exactly that. And it isn’t a fluke because it says it, right there on the website, that yes, if you enter the correct prompt, this AI software will plagiarize.

If you want to use ChatGPT to experiment with your own intellectual property, then go ahead. But when you are using somebody else’s work, then you need to be careful. The very least you should do is ask permission. Just because something exists on the internet doesn’t give you open licence to do whatever you want. This has always been the case.

All this demonstrates is that you have no understanding whatsoever of what he is saying. What he is saying is that when you open an instance of ChatGPT, nothing that you type in is saved and remembered for future sessions of ChatGPT, or by any of the many other instances of ChatGPT running concurrently on the server farm. It has absolutely nothing whatsoever to do with your non sequitur of a response.

…incorrect.

Oddly enough, if this is what Sam intended to say, then this (and your statement) are non sequiturs. Because I wasn’t talking about the sermons. I was responding to a specific point, which was why “protests are aimed at just AI tools.” It isn’t bizarre when these protests are aimed at AI tools and not at things like Dropbox. Because:

I quoted the entirety of the post because I once got a warning for quoting only a snippet instead of the whole post; the warning was rescinded, but I’ve taken care to quote larger portions ever since. My response, in context, addressed the first sentence I quoted.