Should I just start utilizing AI, finally?

I used those dashes a lot in my writing, but I’ve had to change that; otherwise it looks like I’ve been using AI!

AI might inadvertently bring back the semicolon! Nah, use the dashes as you see fit—they’re useful.

Siri was once handy for basic stuff like directions or weather, but today it feels like a relic. Compared to modern generative AI, Siri is a clunky shadow of what’s possible.

Generative AI actually understands context, helps solve problems, and even makes the occasional quirky guess—like a helpful, smart friend. Siri, on the other hand, feels like using Palm Pilot handwriting recognition in a world of smartphones.

It’s frustrating how far behind Apple is. They know it too. Their new “Apple Intelligence” feels more like a marketing pitch than a real leap forward.

For actual intelligence on my shiny new iMac, I skip Apple Intelligence and turn to ChatGPT: it gets things done.

In the context of you dismissing AI while actually appreciating AI-derived data: no, I don’t think it does. It doesn’t matter whether you go to a website or an app; what matters is that the data is derived from AI. Again, that’s not meant as a gotcha; I really believe you didn’t know you were benefiting from AI.

That’s not the same thing as dynamic resizing. The keys don’t visually get any larger, but the area that recognizes a tap of a given key is expanded by a few pixels or so. It’s giving you a bigger target to tap without your seeing that it’s a bigger target. If the phone didn’t employ AI for dynamic key resizing, you would probably have even more trouble typing on it.
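Roughly how that might work, as a hypothetical Python sketch: the drawn keys never change, but each key’s invisible hit rectangle is padded in proportion to how likely the language model thinks that character comes next. (The names `Key`, `hit_test`, and `max_pad` are all invented for illustration; this is not Apple’s actual implementation.)

```python
# Hypothetical sketch of probability-weighted tap targets. The visible
# key frames stay fixed; only the invisible hit area grows for keys the
# model expects to be typed next.
from dataclasses import dataclass

@dataclass
class Key:
    char: str
    x: float  # left edge of the visible key
    y: float  # top edge
    w: float  # visible width
    h: float  # visible height

def hit_test(keys, next_char_probs, tap_x, tap_y, max_pad=4.0):
    """Return the key whose (possibly expanded) hit area contains the tap.

    next_char_probs maps each character to the model's probability that
    it is typed next; a key's hit rectangle is padded by up to max_pad
    pixels in proportion to that probability.
    """
    best = None
    for key in keys:
        pad = max_pad * next_char_probs.get(key.char, 0.0)
        if (key.x - pad <= tap_x <= key.x + key.w + pad and
                key.y - pad <= tap_y <= key.y + key.h + pad):
            # When padded areas overlap, the likelier key wins the tap.
            if best is None or pad > best[1]:
                best = (key, pad)
    return best[0] if best else None
```

So a tap that lands a couple of pixels past the edge of a likely key still registers as that key, even though nothing on screen looks different.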

Have you turned off Auto-Correction? And Predictive Text?

The library.

I don’t think the usage of media in a library, and the outcomes of that usage, are analogous to the usage of media in training an AI model, and the outcomes of that training.

I totally agree, and as I became accustomed to using ChatGPT to recommend books, I began to smell a rat: it knows way too much about the storylines, so much that it makes me suspect it has been trained on the content.

For example, it recently recommended I read a book called Orphan X, based on my enjoyment of the Harry Bosch series and a previous more…action-oriented recommendation (a wild book called “The Force”). I made it 3/4 of the way through Orphan X before I called “Baloney” and bailed, having a long conversation with ChatGPT about certain lame plot points.

It agreed with me and made note of my dislike for certain kinds of plot points.
But it really knew a lot about the issues I found in that book. It surely could have gleaned all of that from tens of thousands of reviews, but I suspect it also has the book itself somewhere in its corpus.

Of course, we do benefit from OpenAI’s sketchy use of training material: it is making stellar book recommendations for me, understanding what I personally like and dislike, not just nonsense like “people who read this book have purchased these other books…”

Probably.

Also used: “The Pile.”

Oh, it absolutely seems to know plot points and way more. I recently read “The Invention of Morel” based on its recommendation and plot summary, and discussed the book as I read through it. It did not seem to be off about anything that happened in the book, or its themes, or its plot points. It even—and this impressed me most—withheld the major point of what this invention of Morel was until I told it I had reached that section of the book. So even in its summary of the plot points before I read the book, it didn’t spoil it for me, but once I got to that point, spoilers away. I was a bit surprised by that.

I’ve done this “read and summarize what I just read” exercise with a couple of books as I’ve read along (and its recommendations have been spot-on for me so far), and I think I’ve only come across one error.

When the training data includes not only the text of the book itself but also the copious online summaries intended for students to crib from, it is scarcely surprising that a chatbot can reproduce the events, plot points, and themes discussed in those summaries. Try getting an analysis of a book that hasn’t been subjected to a multitude of analyses and thematic dissections, or ask a chatbot to identify themes and character motivations that aren’t actually in the work, and see what you get.

Stranger

I’ve done this with my own writings, and it’s quite impressive. It’s as if it knew better what I was writing about than even I knew. (And I think that’s true: it picked up on a lot of subconscious stuff I knew was loosely connected but couldn’t quite pinpoint or verbalize.) And, yes, I expect it would BS an answer if I asked it about something that never actually happened in a story or book. That’ll be improved at some point, I would think, but what it does now is fucking amazing.