AI is wonderful and will make your life better! (not)

Nice to know that Mr. Smith from The Matrix can still run for president and stand a good chance of beating Hillary Clinton or Kamala Harris.

(Actually, I think he’d be a better president than Trump.)

A Jihad of the Butlerian variety is called for, clearly…
(I see @peccavi beat me to it)

Why are you phrasing this with the future tense?

Have you been on Amazon, Spotify, or YouTube lately? AI content is already mushrooming and being actively pushed by the algorithms (whether that's intentional on the part of the algorithm creators or not, I wouldn't know).

I hadn’t noticed, no, but I’m not surprised.

Yup, it’s already here. My exposure has been mostly through LinkedIn, which is an absolute cesspool of AI crap. Even just two years ago, I could browse and read some useful things about my industry and job role. Now it’s just rage inducing.

Pinterest was the big loss for me - I liked browsing hobby-related content there. Now it’s majority AI slop.

If the creed of the Butlerian Jihad is “Thou shalt not make a machine in the likeness of a human mind”, then I think we should start with “Thou shalt not use a machine to make an image in impersonation of reality”. IOW, fine if you use AI to produce cartoons, drawings and paintings; even fine to make special effects for works branded as fiction. But no photorealistic images outside of visual fiction.

I don’t know if this is an experiential thing or what, but it seems some people are amazingly easily hoodwinked by AI slop and some aren’t. It could be an age thing - I get the sense bluehairs are much more easily fooled.

A few months back I was watching a video by a literary agent about AI writing. She showed three AI-written flash fiction stories and challenged the viewer to guess which were AI. Then she showed that a shocking percentage of people didn’t know which was which. I mean, I’m no genius and I’m not Salman Rushdie or anything, but to me it was so, so obvious. Like, flashing red lights, sirens. (It went AI, human, AI.)

AI writing fucking sucks, and has the most blatant tells.

I mean, can AI do stuff? Absolutely, I use it for help with spreadsheets. It’s very helpful for front end coding.

This is a legit problem too.

Self published books on Amazon that are AI slop are legion. Granted, most sell 9 copies and vanish, but the sheer volume of AI garbage makes it hard to find the good stuff.

I’m not misrepresenting anything. I linked to the study. There are other studies, but you’ll cherry-pick those too. You refuse to accept the negative influence of this technology, even though it’s being used by the government to surveil citizens, kidnap heads of state, and kill people, it’s damaging the environment (and I’ve already provided you those cites), it’s devaluing artistic creation, and it’s threatening my livelihood. The fact that it results in lower brain activity is completely unsurprising if you know anything about neuroscience. This technology, like social media, is changing the way people think, and not in a good way.

You claim, despite your feverish devotion to AI, that you don’t use it that often. Okay, well, you aren’t the typical use case. The typical use case is young people who use it every day, for everything. Those young people are the future of this country. Most of these studies are showing that the biggest impact is on young people, many of whom never learned to think critically in the first place. Many of us have decades of experience thinking critically. It is a harder skill for us to lose. Young people do not have this. They are more vulnerable. And I am not only talking about children.

My husband was recently speaking to a clinical psychologist about how much his graduate students’ cognitive skills have degraded over the last two years. They used to know how to source information properly. They can’t do it anymore. They can’t problem solve. But two years ago, they could. It’s like allowing a muscle to atrophy. And it’s happening all around us. And it’s being used in supremely evil ways.

If people want to use it to code or for technical tasks, I don’t really give a shit. But I utterly resent the widespread proliferation and the way it’s being forced on people without their consent, regardless of whether it’s actually helpful. For medicine, solving climate change, whatever - as a sharp instrument with a narrow purpose, guided and used by experts, it’s great. For the general public, and especially for authoritarian regimes, I think it’s one of the worst things to ever happen.

Yeah, that survey kind of buried the lede, didn’t it? Jesus Christ.

Nah, you’re missing the point of “art”. The problem isn’t only that AI can fool people into thinking it’s real, it’s that you’re losing the emotion and creativity that goes into creating something.

Maybe AI art doesn’t seem bad to you now, because it’s based on actual art created by humans (and then stolen from them). Give it five years, when AIs are all training on art that was itself created by AI, and any creativity is gone.

Dude, OLD people are relying on ChatGPT and Gemini to answer common questions, and we all know how good AI is at that (it fucking sucks).

I’ve seen it, greybeards and mortuary bait asking their phones for rudimentary information.

So I write and read romance. Most romance that I read is not very good. But inevitably, these books will have rave reviews, 4.5 stars, etc., even though in my opinion they are hot garbage. This indicates to me that there’s a massive contingent of people who are looking for certain elements, and they don’t care whether the story or writing is actually good. Yes, good is subjective, but a lot of what I come across is poor prose, poor characterization, poor dialogue, and less often, major structural issues. So these are the things that don’t matter, for some reason, to a lot of people who read romance.

I can’t figure it out.

But the state of the self-pub industry right now is you’re not making money unless you’re writing six books a year. If you’re writing a book in two months, unless you’re Stephen King, it’s probably not that good. And readers will consume these books in a single day at $2.99 a pop and clamour for more. It’s like eating fast food. Cheap, disposable, immediately gratifying, and then onto the next thing.

Now, I can say with certainty that books written by AI are much worse than even badly written books by humans.

The thing is, I’m not sure readers will notice the difference. A lot of them are already enjoying substandard writing.

My son occasionally stumbles on AI videos and he will watch them no matter how bad they are. I don’t think he has enough experience to understand how stories work so he can’t identify it as bad storytelling. The question is whether he or any other kid will ever develop that sense of what’s good if they are regularly exposed to AI creations. And if good is subjective, and AI slop is accessible to them, what is lost? Are they wrong?

Storytelling is so much a part of our history, culture and neurology I really think there might be some unexpected ramifications of this. I think it might even impact our relationships, as so much of our relationships is based on good storytelling.

Welp, there goes video games.

I did allow for visual fiction; as in no one* expects a video game to be presenting events that actually occurred.

*gawd, I hope so.

thank you for responding to fanboy so nobody else has to

My husband is in the film industry. He can’t watch shows that have poor production values. I, on the other hand, watch only for the writing and its delivery. I don’t care much about the visual aspects, and really enjoy some things he finds unwatchable. Objectively, when he explains these things to me, he is correct: the things I don’t notice are, in fact, poor quality.

On the other hand, I can’t read bad writing. I’ve tried to like Jim Butcher’s Dresden Files, because I like a lot of things about them, but the writing in the first couple of books is so bad (mostly characterization) that I can’t even.

I imagine it’s something like that: if you’re not particularly attuned to an aspect and don’t know good from bad, you can overlook it.

You linked to a study that showed that if you use LLMs consistently to write your essays for you, you become less skilled at writing essays.

That’s just incorrect. I wrote a lengthy post recently about the profoundly negative impacts that AI will have on large sectors of the job market, and in subsequent posts alluded to the undesirable wealth redistribution and inequalities that will likely result.

What I’m skeptical about are overwrought and over-generalized claims that AI will rot our brains and make us all stupid. No doubt AI will reshape some of our cognitive skills, rendering some obsolete while strengthening others, but that’s more a neutral statement than a negative. The knowledge and cognitive skills we have today that we need to deal with the modern world are not the same as those of the 18th century, and that’s just the nature of technology.

Yes, that’s a good point. I read very broadly and my reading has included some of the greatest works of fiction ever written. Maybe I have unreasonable expectations for something called Kidnapped by the Alien Warlord.

I did read the first Dresden book and I thought it was okay, but the MC was a bit of a cad. I don’t mind that necessarily in detective fiction (I enjoy plenty of noir) but the tone of the book was more light-hearted than your average hard-boiled detective novel, so rather than giving him some dark, fatal flaw, it just made him look like a bit of a buffoon.

Also I’ve been told that the author is a raging asshole. It may affect my opinion of his work.

I told myself this would be the last time, and now you’ve gone and encouraged me. :wink:

Oof, did you ask ChatGPT to summarize the study for you? Because this is just plain wrong.

The study was not only looking at skills in writing essays. It was looking at overall cognitive ability: neural connectivity patterns, memory recall, visual processing, and perceived ownership of the work created. Distilling this down to essay-writing skills is a profound misunderstanding of the study.

Heh, it’s worse than that. When you train a model on AI-generated content, it often leads to model collapse. There is some debate about how this will play out in the real world, but recursively training on AI-generated content does lead to problems.
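For the curious, here's a toy sketch of the underlying statistical effect (my own illustration, not from any of the studies mentioned above): each "generation" fits a simple Gaussian model to a small sample drawn from the previous generation's model. Because each fit is made from finite, noisy samples, estimation error compounds across generations and the fitted spread collapses, which is a stand-in for the loss of diversity people worry about when models train on model output.

```python
import random
import statistics

# Toy illustration of recursive training on generated data:
# generation N+1 is fit to a small sample drawn from generation N.
random.seed(0)

mu, sigma = 0.0, 1.0          # generation 0: the "real" data distribution
history = [sigma]

for generation in range(200):
    # Draw a small synthetic "training set" from the current model...
    samples = [random.gauss(mu, sigma) for _ in range(5)]
    # ...then fit the next generation's model to those samples.
    mu = statistics.mean(samples)
    sigma = statistics.stdev(samples)
    history.append(sigma)

print(f"spread at gen 0:   {history[0]:.4f}")
print(f"spread at gen 200: {history[-1]:.4f}")  # collapses toward zero
```

Real LLM training is vastly more complicated than this, of course, and mixing in fresh human data changes the picture, but the basic mechanism (sampling noise compounding into lost variance) is the same.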