Yeah, just two months ago it struggled to spit out anything resembling a sonnet, and wanted to write everything in tetrameter. Now it’s got the form pretty much down. The damn thing writes in iambic pentameter, follows the form correctly, and wraps up with a final thought in the closing couplet.
So I decided to see if ChatGPT could be a journalist. I started with a Reuters report:
I then gave ChatGPT this prompt:
This news report just came from Reuters. Could you write a news story about it as if you were a journalist in a paper? Add in background details and anything else that would flesh this out into a news story?
And here’s the result. Bear in mind that ChatGPT’s training ended in 2021, so it is limited in what it knows about other recent events.
On the subject of copying: When I read a piece of text, I’m not making a copy of it in the same sense that a computer hard drive is making a copy, nor in the same sense that a photocopier is. I’m imprinting a complicated pattern on the neural net of my brain, and nobody, by dissecting my brain, could recover the piece of text.
And yet… Sometimes, a particular bit of text really takes hold. Sometimes, somehow, the pattern it imprints in my neural net is so strong that I’m later able, from that pattern, to reconstruct the input text exactly. And this might happen either accidentally or deliberately: someone might ask my opinion on some subject, and I might reply verbatim with something someone else wrote. Sometimes, the amount reproduced can even reach the point where it would legally be considered copyright infringement.
Surely, the same thing can happen with an AI: Somewhere in the vast corpus of its training data, some piece of text resonates so strongly, whatever “resonates” means for its computerized mind, that the pattern it imprints on its neural net contains all of the information of the original, such that it could be replicated verbatim later. And just as a human, in such a case, could be deemed to have violated copyright, surely so too could an AI.
Exactly. The copyright infringement comes from the closeness of the output work, not from how the AI learned the material. There is no ‘hard drive copying’ going on.
I asked ChatGPT to give me some deep thoughts in the style of Jack Handey. It gave me some original ones, then it gave me, “The face of a child says it all. Especially the mouth part of the face.” That is a direct ripoff of Handey. I confronted ChatGPT, and its explanation was that in short responses it can sometimes inadvertently repeat something it learned verbatim. Just like humans.
I would consider that a copyright violation, but not the other, original ones, even though the learning process was the same. Just as it’s not a copyright violation for a human to intensively study a painter’s style and have their own paintings influenced by it, but it would be if they used that knowledge to paint a copy of one of the painter’s works and sell it.
I agree with your analysis of copyright, based on my limited understanding, and I’m impressed with your knowledge of AI. However, I do think artists who depend on a specific style for their livelihood have cause to be concerned. Accordingly, it is not obvious that copyright law will continue in its current form; it will have to address controversies that are almost sure to arise, and it might change a lot.
Journalists who merely reword news feeds do not apply local knowledge and trenchant analysis. The best journalists do. But they still have cause for concern: their profession has already been undermined by social media, profit priorities, intellectual devaluation, sensationalism, eroding trust, and competition many might consider “good enough.” As a society, we have many smart people espousing the benefits of AI, sometimes with rose tints or over things of marginal use. The costs are often put in existential terms, which also seems overwrought. The real, lesser costs may not yet be clear. There are almost certainly unknown unknowns. Even AI doesn’t know from knowledge.
Yes, artificial intelligence can & will replace many journalists (as well as many other professions).
This is not based on my expectation of how intelligent AI can become (though its relentless progress is often impressive). Rather, in my lived experience, I’ve seen that if an employer has an opportunity to cut wages by 90%, they will not hesitate to cut their quality standards by 90%. (Offshoring is an instructive example here).
You think AI writes garbage articles? You’re absolutely correct. Guess what next year’s journalism standard will be? AI-generated garbage. Most people are unsophisticated media consumers and will accept whatever is on offer. The people who think themselves the most eagle-eyed media skeptics are in fact just nitpickers looking for reasons to disbelieve news that doesn’t confirm their worldview. These people are the easiest to fool; AI will have no trouble replacing Fox News.
You think this will improve the quality of journalism overall? Not a chance. Who would compete with a machine that works for a nickel an hour? Nobody’s going to go the extra mile for that, not unless they want to be a journalist-celebrity or have an agenda to push.
AI is going to make everything cheaper. It’s also going to make everything worse.
The majority of content on the internet today is of negative quality: you legitimately get dumber for having read it. It’s hard to do worse than that (except intentionally, which isn’t a new risk).
Since it’s mostly human-rewritten garbage anyway, an AI that meets the minimum standard of producing grammatical content that accurately summarizes the source material already has a leg up.
There’s a lot of bad journalism out there, but I’m not sure I’d agree that most of it is of negative quality.
But for argument’s sake, if we assume that’s true, the sheer volume of garbage will increase to a point where it habituates everyone to expect either garbage, or something that caters to exactly what they want to believe. “Straight” journalists, already hunted to extinction, won’t be able to compete.
In other words, it will accelerate all the current bad trends to the point where journalists won’t be motivated to do good work, and media consumers won’t be motivated to care.
To be clear, I’m including things like gaming, tech, science, etc. journalism here. Not just NYT and WaPo. A huge fraction is just regurgitated press releases and poorly summarized papers.
It could be that the end result is that everyone gets their own custom content producer, tailored just for them. That could obviously produce content bubbles, but frankly I think it’s hard to do worse than what we have now. The current situation divides people into one of a small number (two, more or less) of worldviews. It causes people with some nuance to their views to only be exposed to one side, eventually radicalizing them.
This is not correct. It could be programmed into an AI quite trivially. Generally, when using AI (ML, really), we do not put many exclusions on the direction and result; however, we do from time to time incorporate a priori information to make the search more efficient. It would be trivial to include “Oh, and you’re not allowed to violate these rules.” In fact, you could do so without being explicit by giving any violation of those rules infinite cost.
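As a toy Python sketch of what I mean by “infinite cost” (all the action names and costs here are invented for illustration, not from any real system): the rule check is folded into the objective, so a violating action can never come out as the cheapest choice, no matter how well it scores on the underlying task.

import math

FORBIDDEN_ACTIONS = {"harm_human", "ignore_order"}

def rule_violated(action):
    # hypothetical check; stands in for whatever detects a rule violation
    return action in FORBIDDEN_ACTIONS

def task_cost(action):
    # made-up per-action costs for the underlying task
    return {"harm_human": 0.1, "ignore_order": 0.5, "fetch_coffee": 1.0}.get(action, 2.0)

def total_cost(action):
    # a priori exclusion: violations are never the cheapest option
    return math.inf if rule_violated(action) else task_cost(action)

best = min(["harm_human", "ignore_order", "fetch_coffee"], key=total_cost)
print(best)  # "fetch_coffee", even though harming the human scored cheaper on the raw task

The hard part, of course, is what goes inside rule_violated(), which is exactly what the next reply is getting at.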
Saying THAT is trivial. Now design a reward function that gives “violating the Three Laws” an infinite cost, using only the measurable inputs your robot has, and which you can guarantee will not run into an alignment problem.
If we’re assuming some kind of strong AI on par with Asimovian levels, then it is capable of advanced planning and of assessing the effects of an action or inaction to a considerable degree. Certainly more than the best AIs in existence today. The point is that right now, today, we encode similar exclusions into AIs. In a future where we can build an AI capable of the level of understanding and planning that exists in “I, Robot,” we could encode more complex exclusions (likely just using natural language). To say “it just doesn’t work that way” is incorrect. It in fact works exactly that way, just in a more complex form.
Of course, the entire point of “I, Robot” was to highlight the kinds of oddities that arise from the Three Laws (the story with the robot going in circles was great), and the development of a Zeroth Law as a natural outcome of the three.
The issue isn’t how sophisticated or advanced a positronic brain is compared to a convolutional neural network of 2023. The issue is that the positronic brain was apparently “programmed,” as a core principle, never to violate those three principles, phrased in English and based on an essentially human understanding of those concepts. Hell, the Zeroth Law comes about because the robots philosophize and realize that protecting humanity is a greater task than protecting individual humans, yet one only implied by the law that requires protecting individuals.
That’s not how convolutional neural networks work. If we want an AI that identifies cats, we don’t sit down and say “Rule 1, don’t kill humans; rule 2, follow directions; rule 3, identify cats”. Rather, we would send out into the world (or into a simulated world) a host of AIs, measure their behavior, pick the ones that did the best, make slight adjustments, and re-run the test. We’d do this millions of times, allowing evolution to do its thing. And when we have an AI that identifies cats without killing people, we would ship that product.
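A toy Python sketch of the select-and-mutate loop being described here (the parameters, fitness function, and population sizes are all made up for illustration; real systems mostly use gradient descent rather than pure evolution, but the shape of the argument is the same): behavior is measured, the best candidates are kept, slightly adjusted, and re-tested, over and over.

import random

POP_SIZE, KEEP, GENERATIONS = 100, 10, 1000

def random_params():
    return [random.uniform(-1, 1) for _ in range(10)]

def mutate(params):
    # "make slight adjustments"
    return [p + random.gauss(0, 0.05) for p in params]

def fitness(params):
    # stand-in for "how well did this candidate identify cats (without killing anyone)?"
    return -sum((p - 0.3) ** 2 for p in params)

population = [random_params() for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    survivors = sorted(population, key=fitness, reverse=True)[:KEEP]          # pick the ones that did best
    population = [mutate(random.choice(survivors)) for _ in range(POP_SIZE)]  # re-run the test

shipped = max(population, key=fitness)  # the product we'd ship

Note that nothing in this loop ever states a rule in English; the only thing shaping the result is whatever the fitness measurement happens to reward.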
But a clever enough AI may figure out that it’s in a simulation where you are testing its obedience and murderous potential; it therefore complies, following all three laws. Then, when you let it out into the world - Skynet.
Well, again, you are incorrect. Yes, you are describing the basic functionality of a neural network but you’re ignoring that in practice it does not have to be used so simply. For example, suppose I have a neural network with three outputs, and for simplicity I’ll assume Boolean outputs (0 or 1). I can very easily code that any topology that gives an output of (0,0,0) for any set of inputs is forbidden (negative infinite value).
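Here’s a minimal Python sketch of that idea, under the same simplifying assumptions (three Boolean outputs, a toy scoring function I made up): whatever the network’s raw preferences are, the forbidden combination (0, 0, 0) is assigned negative-infinite value before the output is chosen, so it can never be selected.

import itertools
import math

FORBIDDEN = {(0, 0, 0)}  # the output combination declared off-limits

def choose_output(raw_score):
    """Pick the Boolean triple the network prefers most, excluding forbidden ones."""
    best, best_value = None, -math.inf
    for combo in itertools.product((0, 1), repeat=3):
        value = -math.inf if combo in FORBIDDEN else raw_score(combo)
        if value > best_value:
            best, best_value = combo, value
    return best

# Even if the network's raw preference is strongest for (0, 0, 0), it can never be emitted:
print(choose_output(lambda c: 1.0 if c == (0, 0, 0) else 0.0))

That enforces a constraint on the outputs themselves; the earlier objection still stands that mapping “the Three Laws” onto a concrete set of forbidden output patterns is the genuinely hard part.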
There’s a lot out there that could not possibly get worse. A lot of this is already automatically generated (though not necessarily by AI, just by content farms that mix and match from other sources). This is partly because a lot of it is generated for machines by machines, just throwing up advertisement-larded pages that will come up on a Google search.
So I wasn’t really speaking to what’s already at rock bottom, but to the fact that we’ll face the disappearance of the higher-quality outlets we rely on to cut through the bullshit. The NYT editorial desk already produces a lot of articles that could have been AI-generated. I expect that conversion will quietly progress over time, and we’ll come to expect the lower-quality product (even if we don’t necessarily like it).
Standards will get worse. There’s still scope for them to get much worse.
The question isn’t the number of variables, either.
Your AI has some number of sensors. Cameras for eyes, mics for ears, sensors for atmospheric pressure, radiation, etc etc.
What combination of inputs (which again are based on things that your AI can measure across all of its sensors) do you propose we use to tell the AI, “when Sensor A shows X, sensor B shows Y, Sensor C shows Z… you’re in violation of the Three Laws and therefore must stop”?
And how can you be sure, when selecting for that combination of values, that your AI truly follows the Three Laws and doesn’t simply appear to, maliciously or otherwise?
I believe the poor state of current journalism is a great impetus for the development of AI journalists. Not only does AI have the potential to research stories in more depth and much more quickly than humans, it also has the potential to write higher-quality articles and meet deadlines faster. It’s also safer, and probably cheaper, to send AI journalist robots into risky situations.
I’m guessing it’s much cheaper for news organizations to hire hack writers than high-quality journalists like those at the big-city newspapers of the 1930s-50s. Still, I believe the price differential between hiring a hack AI writer and a state-of-the-art AI journalist will be much smaller. And the expense accounts will be smaller for AI field journalists than for human ones, too.
The only question is whether there will still be a demand and market for high-quality news. With younger generations spoon-fed trash news, and no experience with the high-quality news of yesteryear, will they even want the good stuff?