If that assistant were not under a work-for-hire contract (which they probably are, anyway) and there was skill involved in timing the shutter release just right, I would argue the assistant has contributed meaningfully to the creation of the image and would at least be a co-contributor. For just pressing the button on the photographer’s say-so, not so much. A good friend of mine set up a remote inside an F/A-18 flying over Iraq in 2003, and the credit on the photo reads “His Name & Pilot’s Name/NY Times” for this reason. My friend gave the pilot very specific instructions on when to take the photo, which way to face, and where the sun should be, but the pilot clicked the remote trigger. As with all legal issues, it comes down to the details. I (a photographer) was simply taught that whoever pushes the button gets the credit and copyright, but it’s more nuanced than that (as it should be). Some A-list studio photographers do exactly what you say: they just set up the photo and have an assistant push the button, then approve the result or not. But they also have work-for-hire contracts where all of this is spelled out in terms of who gets the copyright.
If privacy is ignored by Sinophilic countries, they will dominate AI. This is not inevitable; nothing is. But one hopes that before this happens, sensible discussion has occurred on many big topics, not just one or two in isolation: ethics, the health and well-being of digital users, algorithmic disruption of work and labour, economic competitiveness, national and international security, privacy and competition law, digital rights, patent and digital copyright, free speech, trade regulation, oversight, control of personal data, and the legality of surveillance. Good luck with that.
As someone who works for government, my prediction is that reforms will come in a reactionary way that overreacts: restricting the technology too much, but not until after it has done a lot of damage.
Now, fixing the consequences of those overreactions will probably be done in a laggardly and haphazard way, as will fixing any loopholes that people exploit to cause problems.
People are already doing this in a primitive way:
Basically, even though ChatGPT’s one-off results aren’t necessarily very good, you can feed the results back to it, asking it to check for common errors, and tell it when it has made a mistake (in this case, by actually running the generated SQL query against the database).
This kind of looped operation could be very powerful, both for positive and negative uses. And it shouldn’t be surprising that it works well. This kind of iteration from a first draft to a final version is how humans operate, too.
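A minimal sketch of the loop described above: generate a query, execute it for real, and feed any error message back into the next prompt. The `ask_llm` function here is a hypothetical stand-in for an actual API call (it hard-codes a "fix" after seeing an error, purely to make the loop observable); a real setup would call ChatGPT or a similar model at that point.

```python
import sqlite3

def ask_llm(prompt):
    # Hypothetical stand-in for a real LLM call. To illustrate the loop,
    # it returns a broken query first, and a corrected one once the
    # prompt contains feedback about an error.
    if "error" in prompt.lower():
        return "SELECT name FROM users WHERE age > 30"
    return "SELECT name FROM user WHERE age > 30"  # wrong table name

def run_with_feedback(db, task, max_attempts=3):
    """Ask for SQL, execute it, and loop any failure back as feedback."""
    prompt = task
    for _ in range(max_attempts):
        sql = ask_llm(prompt)
        try:
            return db.execute(sql).fetchall()
        except sqlite3.Error as exc:
            # Append the failed query and its error to the next prompt.
            prompt = f"{task}\nPrevious query: {sql}\nError: {exc}"
    raise RuntimeError("no working query after retries")

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, age INTEGER)")
db.executemany("INSERT INTO users VALUES (?, ?)",
               [("Ada", 36), ("Ben", 25)])
rows = run_with_feedback(db, "Names of users older than 30")
print(rows)  # [('Ada',)]
```

The first attempt fails with “no such table: user”; the error text is appended to the prompt, and the second attempt succeeds. That check-and-retry cycle is the whole trick: the database, not the model, is the source of truth about whether the query works.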
I thought this recent Nature News article might help put some context to the discussion. The subject is science, not journalism, but the discussion points still apply, I think.
What ChatGPT and generative AI mean for science (nature.com)
Some quotes from the article:
“I use LLMs every day now,” says Hafsteinn Einarsson, a computer scientist at the University of Iceland in Reykjavik. He started with GPT-3, but has since switched to ChatGPT, which helps him to write presentation slides, student exams and coursework problems, and to convert student theses into papers. “Many people are using it as a digital secretary or assistant,” he says.
But researchers emphasize that LLMs are fundamentally unreliable at answering questions, sometimes generating false responses. “We need to be wary when we use these systems to produce knowledge,” says Osmanovic Thunström.
It doesn’t look like this latest wave of tools has made any earthshaking advances in known areas of AI issues, such as bias, brittleness, and training (the real elephant in the room IMHO).
The more I have played around with ChatGPT, the less impressed I am. It is impressive technology, but like much impressive technology it is not without hype and hubris. Of course, my limited testing of it does not matter at all. Will it really revolutionize search, or just keep people on the websites of the Internet giants? The answer: not yet.
A cryptic cruciverbalist’s experience with ChatGPT. His job is not under much threat as yet!