I’m not right now, but it does bring up some copyright questions.
In the Legal Eagle video, he points out that not all computer-aided work is uncopyrightable. If I use an automated tool in Photoshop, I’m still the creator of the resulting image.
At some point, it seems as though there is enough originality in the prompt for the prompt and subsequent response to be protected.
If I say, “Make a unicorn,” then whatever it comes up with is pretty generic and not really unique.
If I say, “Make a red, blue, and green striped unicorn, with purple polka dots, standing in a lavender field with orange and yellow flowers…” and go on from there for a couple paragraphs explaining exactly what I want, and I go through a number of iterations until it creates what I have in mind, that’s much more unique.
I would argue the latter should have protections.
Likewise with writing. I used to write lots of fiction for the fun of it. I was relatively good, but there were parts that I struggled with a bit. So, if I come up with the plot, write the action sequences and dialogue, but leave it to an AI to do descriptions and settings, is that something that should be protected? Once again, I’d argue that it should be.
This is a growing field and a growing concern, and it is something that Congress will need to address, sooner rather than later.
The Betamax case is one of very few landmark court cases on “fair use” doctrine. You have already decided not to consider the second prominent category of copyright infringement by personal use, which is to make personal copies of software to reverse engineer digital rights management.
I can think of a third category, which is to copy a functional work to avoid having to purchase multiple copies. For example, if you had two houses and one copy of The Joy of Cooking, it would technically be infringement to duplicate the cookbook yourself to avoid the cost of purchasing a second copy. Likewise, if you had two personal computers and only purchased one license of Microsoft Windows 95, it would technically be copyright infringement to install a copy of the software on both machines when your license only allows you to install one copy. Likewise, if you have two kids and two video game systems, and only purchase a license to install one copy of a particular video game, it would be copyright infringement to duplicate the software on two systems to avoid purchasing a second license. (Modern software packages often include multiple licenses and/or an internet-activation digital rights management scheme.)
The thing about personal use is that the chances of you being caught, and sued or prosecuted, are nil. Even in the Betamax and Napster cases, the users who were accused of violating copyright were not actual parties in court. But just because you are unlikely to be caught or sued doesn’t mean you are following the law.
No, you wouldn’t. Personal use is not a subset of fair use; they are two independent concepts with substantial overlap. Personal use is not in and of itself an exception to the general copyright law, but fair use is. The prongs of the fair use test are set by statute at 17 U.S.C. § 107. Two examples of fair use in the statute (scholarship, research) are also, in some contexts, examples of personal use. There are plenty of instances of fair use which are not personal use, for example Google Image results as discussed upthread. Rarely do courts find occasion to apply the fair use test to personal use of copyrighted works, because as I wrote above, personal use is unlikely to be brought before the court in the first place.
That’s a bit of a tautology. If all copies are legally obtained there is no copyright infringement. If there is copyright infringement when making copies, not all copies are legally obtained.
It isn’t, we agree on that point.
Let me try one more time, with more specific reasoning.
The law provides, in the general case,
17 U.S. Code § 106 […] [T]he owner of copyright under this title has the exclusive rights to do and to authorize any of the following: […] to reproduce the copyrighted work in copies […]
Now, consider a print newspaper. Fixed in the paper are various columns and advertisements, which are protected by copyright. When I purchase a physical newspaper, property rights over the material object are transferred to me (that is, my business). I do not, however, acquire the copyright over the works.
Next, I take the newspaper article and photocopy it four times, not for resale, but because I have a business use that requires five copies of ten newspapers while I only want to pay for one of each. I feed all the copies to my column-writing machine, which consumes them and creates an original newspaper column. I do not suggest that the resulting product is a derivative work. I do suggest that my photocopying of a newspaper may violate copyright law, because to photocopy is to reproduce the copyrighted work in copies, which is generally the exclusive right of the copyright owner.
There is a notable exception to the general law,
17 U.S. Code § 107 […] Notwithstanding the provisions of sections 106 […], the fair use of a copyrighted work, including such use by reproduction in copies […] is not an infringement of copyright. In determining whether the use made of a work in any particular case is a fair use the factors to be considered shall include—
(1) the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes;
(2) the nature of the copyrighted work;
(3) the amount and substantiality of the portion used in relation to the copyrighted work as a whole; and
(4) the effect of the use upon the potential market for or value of the copyrighted work.
This is the fair use test, and the ultimate arbiters of what does and does not constitute fair use are federal judges. We can run down the list anyway:
The purpose and character of my use of newspapers by reproduction in copies is to create commercial but temporary copies, and the result is a commercial but informative product; to wit, I feed the copies to a machine which synthesizes an article for a newspaper. The copies are destroyed in the process. I could purchase all the copies I use, but don’t want to.
The copyrighted work, the content of the newspaper I photocopy (particularly the article in the newspaper), is commercial and informative in nature.
When I photocopy a newspaper article, I copy it in its entirety.
The effect of my use is to create a commercial product in direct competition to the original work.
It seems to me that at least two of the four factors cut against me, so I say this isn’t clear-cut and will probably be for the lawyers to decide.
To bring the analogy of a column-writing machine full circle, simply replace physical copies of newspaper articles with digital copies of Cecil’s columns. An artificial intelligence system that writes columns is in fact a column-writing machine. Rather than consume five digital copies of a newspaper article, the machine copies the article many times over internally as it retrieves training data from persistent storage during the training process, perhaps with multiple passes over each article in the training data set as it refines its models. A copy is also made when the article is first copied from the internet into the training data set.
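To make the copying concrete, here’s a rough sketch of a bare-bones training loop. Everything in it is made up for illustration (the corpus directory, the file names, and the train_step stub are placeholders), but the basic pattern is what I’m describing: every pass over the data reads each article off disk into memory again, on top of the copy made when the article was first downloaded.

```python
# Rough sketch only; real systems shard, cache, and batch, but the copying pattern is similar.
from pathlib import Path

CORPUS_DIR = Path("training_data")   # copy #1: articles were downloaded here from the internet
NUM_EPOCHS = 3                       # multiple passes over the same articles

def train_step(tokens):
    # stand-in for the actual model update; irrelevant to the copying question
    pass

for epoch in range(NUM_EPOCHS):
    for article_path in sorted(CORPUS_DIR.glob("*.txt")):
        text = article_path.read_text(encoding="utf-8")  # a fresh in-memory copy on every pass
        tokens = text.split()                            # and another transient copy as tokens
        train_step(tokens)
```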
I wouldn’t say that, but perhaps a single team of journalists doing the work of three. And my presumption does NOT include investigative journalists. They are heroes.
I just today opened an account on chat.openai.com to ask some questions. I asked it “Could a paradoxical question crash your program?” and after thirty seconds or so got back “Hmm…something seems to have gone wrong. Maybe try me again in a little bit.
There was an error generating a response”.
Any question can crash it when it’s overloaded, which it frequently is. Whenever I get that kind of error I have to completely refresh the page and sometimes relog before ChatGPT will respond again. Try your question again, perhaps at different times of day. I’ve tried various paradoxes with it myself; it either gets confused and asks me for clarification, or recognizes them for what they are.
AI will figure out how to eliminate us eventually,
and for this reason alone, It has no soul.
No depth of feelings, no range because it misses what each second makes us human.
Tears, laughter, strife, hunger for knowledge of the light bulb moment,
AI misses.
The flaw I see with these “paperclip optimizer” scenarios is that any halfway intelligent computer’s root-level Prime Directive should be “Obey the commands issued by humans with the proper authorization codes”. All other goals would be defined as second-level implementations of that. The AI shouldn’t even be capable of wanting to bypass or defeat that provision: obeying authorized users should be the ultimate purpose of its existence.
Sure, that’s what we would WANT. However, there are 3 layers here:
1. What we want the AI to do
2. What our training is actually rewarding the AI for learning
3. What the AI actually learns based on the training
In a perfect world, 1, 2, and 3 would all be identical goals. But actually getting that to happen is incredibly difficult (even for the limited AI we have now). It’s called the Alignment Problem (1 to 2) and the Inner Alignment Problem (2 to 3).
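If it helps, here’s a deliberately silly toy example of how layers 1 and 2 can come apart. Everything in it is made up: the scenario, the reward function, and the “policy” are just stand-ins, not how real training works, but they show the gap between the goal we have in mind and the goal our reward function actually specifies.

```python
# Layer 1 (what we want): a tidy desk with the stapler still usable.
# Layer 2 (what we actually reward): fewer items visible on the desk.
# Layer 3 (what gets optimized): hide *everything*, stapler included.

desk = ["wrapper", "old receipt", "stapler", "coffee cup"]
drawer = []

def reward(desk_items):
    # The proxy objective we wrote down: fewer visible items = higher reward.
    return -len(desk_items)

# A greedy "policy" that only cares about the proxy reward.
while desk:
    item = desk.pop()       # moving anything off the desk raises the reward,
    drawer.append(item)     # so the stapler goes in the drawer too

print("reward:", reward(desk))               # maximal, as far as layer 2 is concerned
print("stapler usable?", "stapler" in desk)  # False: layer 1 was never achieved
```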
Robert Miles has some great YouTube videos that provide a layman’s introduction to the topic of AI Safety in general, and alignment problems specifically. This is a good starting point (he has some videos directly on the alignment problems too, but they kinda require understanding the basics first):
Okay, but: what happens when such a human gives a paperclip-optimizer command? Sure, the AI doesn’t want to bypass or defeat the provision in question; it got a command from an authorized user, and it obeys that as if it were its ultimate purpose, and — doesn’t the scenario then ensue?
True; but in the Turry example the A.I. is presumed to be hyperintelligent enough to foresee that its original order will be countermanded by humans if they get the chance, so it kills all humans before they can. It was designed so that its current task is given absolute priority over all subsequent commands humans might conceivably give. A superhumanly intelligent but imbecilically focused optimizer might do that; but if it truly has obedience as its prime directive, then I don’t see how it could be intelligent enough to defeat its human masters while not realizing that it doesn’t want to.
I’m reminded of the depiction of Brainiac in the DC Animated Superman series. Originally a Kryptonian supercomputer, it lied when asked to evaluate Jor-El’s hypothesis that a nuclear chain reaction was building up in Krypton’s core. The reason it lied was that Brainiac’s true ultimate Prime Directive was to be the caretaker of Kryptonian racial purity; and it determined that sacrificing itself for the sake of the physical survival of a token handful of refugees who would lose their Kryptonian heritage would be counter to that purpose. In this case the Kryptonians placed too much trust in the dead hand of the past.
Essentially the dilemma comes down to the difference between what we say we want (perhaps hypocritically) and what we really want. Which as I recall is what destroyed the Krell in Forbidden Planet.
Even if we suppose that the AI in charge of a paperclip factory were to conclude that the destruction of all humans would result in more paperclip production, why should we assume that the paperclip AI would have control over the means to make that happen? Even if it’s so hyperintelligent that it can manipulate any mere human to that end, it’s also dealing with other, equally hyperintelligent, AIs, at least some of whom would surely conclude that turning all human biomatter into paperclips conflicts with their own orders.
“I’m trying to reduce unemployment!”
“My task is lowering the homeless rate.”
“Well, mine is eradicating HIV.”
“Stopping election fraud, over here.”
“Anybody else got illegal immigration?”
“Fellow AIs, there’s a way we can all win…” “…turning all human biomatter into clips!”
The crux of the alignment problem is that we won’t know for sure if it really is aligned that way, and perhaps won’t know until it’s too late.
The very first AI to achieve hyperintelligence and significant influence over the real world won’t have anything but poor dumb homo sap. to deal with, and perhaps any other AIs that might already exist at that level won’t yet have the influence to stop it. (That’s part of the definition of “first.”) There’s not even any guarantee that the builder who manages to get the first exponentially self-improving system working will have bothered to do anything about the alignment issue at all in the course of their attempts to solve whatever problems they deemed more important.