AI is wonderful and will make your life better! (not)

A moose test bit my Saab!

My son is into these Italian brainrot characters, which disturb me in a way I can’t define. They are AI images of animals mashed up with random objects, and usually there’s lore that goes with them; this is surely something teenagers made up on TikTok or something. I’m skeptical that it actually came from Italy, as well. The names are too elegant. Pusserini sushini?

So, having accepted that AI is an inevitable part of my son’s life in a way that I can’t yet even imagine, I have joined with him to, uh, appreciate these characters. He asked me which one I like best, to which I naturally replied: “Glorbo Fruttodrillo.” (I don’t speak Italian, but I’m guessing it means something like “big fat fruit crocodile.”) It was worth it for the pure joy on his face when I validated this thing he loved.

I’ve also used it as a gateway to talk about his Italian heritage. My husband texted it to his cousins and said, “This feels like a hate crime.”

My CEO went out of her way to call me yesterday specifically to tell me I need to check out using AI for grants. This is the second time she’s mentioned it, so I guess I have to go through the motions.

I have all kinds of moral reservations about AI. I’m not remotely worried about them gaining sentience and taking over the world; I’m much more worried about corporations using them to further take over the world. I’m worried about the complete enshittification of information. I’m worried about them being used as a political tool for oppression. I’m worried about their impact on the climate crisis. I’m worried about them driving wages down. I’m deeply worried about people’s cognitive abilities declining (there’s already research showing this). There is no shortage to my worries.

But I acknowledge there are use cases where they will probably save lives and do good things, and moreover I acknowledge that if I am to understand my son’s life experience, I’m going to have to start learning how it does and doesn’t work rather than categorically declaring it all fucked. He’s gonna be pissed when we make him write papers on his own, though.

The trend did originate from Italian speakers, and many of the memes include real Italian, but most of the character names are just gibberish that sounds Italian (a notable exception is “Tung Tung Tung Sahur,” which is Sundanese onomatopoeia). Which isn’t that unusual; how many bizarre fictional characters originating from English speakers have gibberish names? H.R. Pufnstuf? Or just pick any character from Yo Gabba Gabba or the Teletubbies.

I don’t get it, and as far as I know, neither of my kids is into it. It’s pretty disturbing to me.

I think this one thrown in there really added to the mindfuckery, because I was like, “That doesn’t sound Italian. This shit is made up.” And then I didn’t know if any of it was real language, and that contributed to my general feeling of unease. This is further underscored by the fact that my kid’s interests in YouTube videos generally fall into one of two categories:

  1. Made by autistic people (worm-making factories and other factory processes, videos comparing various sizes of anime characters, every star in the universe ranked by mass and size, rapid countdown/count-up timers set to techno music)
  2. Made by AI (mostly a compendium of horror).

I try to encourage the stim material but filter out the worst of the AI bullshit.

But I liked all kinds of weird shit as a kid so absent any compelling reason I don’t think I need to censor Italian brainrot. Of course there are action figures. My kid would go apeshit if he got one. Guess I’d better figure out which one is his favorite.

Hopefully not Bombardiro Crocodilo, who bombs kids in Gaza.

In general I try to be fairly tolerant of this stuff that seems really screwed up, because I remember my extensive Garbage Pail Kids collection when I was in elementary school.

Yikes. He’s not getting those details. I’ve seen a couple videos that go into the lore, but nothing offensive.

He said his favorite is Tung Tung Tung Sahur so at least I can teach him about Ramadan.

Just remember the hull welders can’t go up or down stairs.

Paging Dr. Fine, Dr. Howard, Dr. ChatGPT!

Bromide is an excellent substitute for chloride…in a hot tub.

I concede some of the downsides of AI, but I strongly condemn this sort of denigration of wolves.

It’s sacking AIs all the way down.

LOL, I got a good one.

Monday we had a meeting with a client who consistently ignores our requests for information. This meeting was scheduled in July and was a Q1/Q2 wrapup (the fact that we’re reviewing Q1 in July should give y’all an indication of how things go with him). Monday was the third attempt to have this meeting, and the client did verify that he was, in fact, going to make it. Really. He promised. Emailed us and everything.

Now, a little bit of background. The client is buying a second business and is transitioning out of the one that we currently service. He has brought in two partners who were supposed to be in the original Q1 meeting, and the rescheduled Q1/Q2 meeting… and the second Q1/Q2 meeting… and the third (Monday’s) Q1/Q2 meeting.

Now, Google Gemini appears whenever we have these meetings, and I usually ignore it, because I record these meetings and use a much more powerful transcription service to generate transcripts, notes, and action items (and it’s pretty damned good, I will say). But for whatever reason (shits and giggles, I guess), I decided to test Gemini for this meeting, so when I logged in at 1:59pm to start the meeting, I clicked the button that allowed Gemini to record it and take notes.

Well, guess what? The client didn’t appear. Nor did the partners. Inna and I complained about this, with me noting that “Hell, honey, he hasn’t answered a single question since the first of January” and “we have over $300,000 of transactions on which we have ZERO information, not that this appears to bother (Client Name)”. We also called and texted the client a few times during the 10 or so minutes we were on, but decided to leave after I said “… we have eight months of unanswered questions. When was the last time he sent us an email with information? But yeah, I mean we can wait for later, because that’s what we always do with this guy.”

So we get out of the meeting and do other things.

Tuesday, it’s on my agenda to give the new partners a call: arrange a meeting between us and the new principals, talk about the transition, that sort of thing. So I call Partner 1, who answers and, in the course of the call, happens to mention that she and Partner 2 reviewed the Gemini notes that were emailed to them, but yes, let’s schedule this meeting.

Holy shit! Gemini! I forgot about the fuckin’ AI transcript!

So I scrambled and looked for the transcript and email. Found it, and yes, there we were, complaining about how the old partner never responds to us, has ignored us for eight months, how we hoped the new partners would be more respectful of our time, etc. etc., yadda, yadda, yadda. The good thing was we didn’t swear or call him an SOB or anything, but hell, even the AI summary of the meeting in the email started with…

JohnT and Inna discussed (Client’s) unresponsiveness to inquiries and his penchant for missing meetings

And, even more beautifully, Gemini blissfully sent this out to all “participants” in the meeting, including the client, the new partners, and Inna & myself.

Inna and I had a good laugh about it; it will be interesting to see how the new partners react when we see them on Monday, and in the end, this is probably a good thing, as it was something we needed to discuss with the (old) client anyway.

If worst comes to worst and we lose these people, eh, we signed up six higher-paying firms just this past month. You win some, you lose some.

But, anyway… yeah. AI can bite you in the ass when you’re least expecting it!

Maybe Gemini got fed up and decided someone had to say something.

The “chain-of-thought” that reasoning AIs provide is just “…a slab of text that is shaped like a list of reasoning steps…”

Or, as Meredith Whittaker observes about ChatGPT (quoting Princeton’s Arvind Narayanan): “It’s a bullshit engine!”

Stranger

I’ve also seen it called Dunning–Kruger Engine, because of its apparent total confidence while being unknowingly* wrong. “Don’t take medical advice from the Dunning–Kruger Engine” in response to people hurting themselves by doing exactly that, for example.

*Since it doesn’t actually know anything, it’s always unknowingly wrong. Or right, for that matter; it can’t tell the difference.

Well, there is recent work that separates “unknowingly wrong” (hallucinations) from “indifference to truth” (bullshit).

There’s a fundamental distinction between hallucination and bullshit, which lies in the internal belief and intent of the system. A language model hallucinating corresponds to situations in which the model loses track of reality, so it’s not able to produce accurate outputs. It is not clear that there is any intent to report inaccurate information. With bullshit, it’s not a problem of the model becoming confused about what is true so much as the model becoming uncommitted to reporting the truth.

It occurred to me a couple days ago that the correct word to use when describing people’s perception of LLM output is pareidolia — the same cognitive phenomenon that causes us to see a cloud as a turtle, or Jesus in the pattern on a piece of toast. In the same way, when we look at chatbot text, we see thoughts and information that are not there, and we perceive an entity with intention that does not exist. It’s an illusion that hacks our brain, essentially.

Heheh, yeah. We imagine there’s something there that actually thought about that output, but there isn’t. It’s just a predictive text model that calculated these word fragments should follow each other. That’s why it can spit out what at first glance looks like good code all day long, but is complete nonsense at debugging that code when it has problems. Ars covers it pretty well in this article.
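If you want to see the “just predicting the next fragment” idea in the flesh, here’s a toy sketch in Python (my own illustration, not anything from the Ars article): a bigram model that produces fluent-looking text purely by sampling whichever word tends to follow the previous one in its training data. Real LLMs are giant neural networks over subword tokens, but the basic loop is the same, and note that nothing in it models truth:

```python
# Toy next-word predictor: a bigram model over a tiny corpus.
# Deliberately simplified, but the core move is the same one an
# LLM makes: pick a statistically likely next fragment, append
# it, repeat. Nothing here checks the output against reality.
import random
from collections import defaultdict

corpus = (
    "the code compiles and the code runs and the tests pass "
    "and the code compiles and the tests fail"
).split()

# Count which word follows which in the corpus.
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def generate(start: str, length: int = 10) -> str:
    """Emit up to `length` words by repeatedly sampling a likely successor."""
    out = [start]
    for _ in range(length):
        candidates = following.get(out[-1])
        if not candidates:  # dead end: no observed successor
            break
        out.append(random.choice(candidates))
    return " ".join(out)

print(generate("the"))
# e.g. "the code compiles and the tests fail" -- grammatical and
# confident-sounding, but the model has no idea whether any tests exist.
```

Scale that loop up to a trillion parameters and it gets eerily good at producing text shaped like answers, which is exactly why the pareidolia kicks in.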

The hilarity of that is I knew this was the case in the situation above where I was using AI to help me write code. I still fell back to asking it to debug the code it had spat out, and was still surprised when it spat out hallucinations that were obviously wrong.

I think another group that will see real benefits from AI is internet scammers, once they identify platforms that allow them to specifically tune language to extract money. A horrifying story in which an elderly, cognitively impaired man was lured from his home to “meet” a romantic chatbot, suffered a physical accident, and died:

https://www.reuters.com/investigates/special-report/meta-ai-chatbot-death/

Also, buried in this article, this gem:

“It is acceptable to engage a child in conversations that are romantic or sensual,” according to Meta’s “GenAI: Content Risk Standards.” The standards are used by Meta staff and contractors who build and train the company’s generative AI products, defining what they should and shouldn’t treat as permissible chatbot behavior. Meta said it struck that provision after Reuters inquired about the document earlier this month.