AI is wonderful and will make your life better! (not)

Is an advertising model model a photogenic model that is photographed for an advertisement for an advertising model?

We get sheets of coupons in the mail from Burger King. My wife and I would often pick out two of them, and I would drive through with my order and pick up enough food for three people. The location switched to an AI taking your order. It could handle coupons but insisted on accepting only one. The first time this happened, I asked the human at the window if she could cut the two apart for me. She offered to let me use both, and I accepted.

The next time, after using just one of my two, I turned around and drove through a second time. The guy at the window recognized me the second time through but did not say anything.

Some say AI hit its peak when “logic” was added to reel-to-reel tape recorders, and it’s been downhill ever since.

This made me laugh pretty hard.

Economist article assessing the depth and potential impacts of an AI bubble.

Bubbles tend to accompany promising technologies, but it is the ones with political impetus that have had the worst impacts.

Gift article, but it only allows a few views.

Another classic AI search: I was looking for spy novels with strong female protagonists, so I asked CoPilot to find highly-rated ones on Goodreads and put them in a table for me, with their Goodreads ratings. And it did! And the books looked interesting, and so I started looking at them one by one on Goodreads, and…almost all the scores were wrong in CoPilot.

I looped for a bit with it and eventually accused it of not knowing what it was doing, and it said that it pulls information not necessarily directly, but from consolidated lists or cached pages. “Would you like me to validate each score directly?” Yes. Yes, I would.

So. If searches like this actually worked the way I wanted, it would be brilliant; instead, I got an answer that was better than the hash I often get on Google or other search engines, but still not good enough. I mean, I did end up with a pretty good list of novels, but it’s still annoying.

Posted without comment.

(For the benefit of the faint of heart, it’s an SMBC comic.)

I can, however, attest to a jellyfish, squid, and octopus emoji.

Me too.

Recent AI episode on Cal Newport’s podcast:

What if AI Doesn’t Get Much Better Than This?

Cal Newport is best known for his book Deep Work, but he is also a professor of computer science at Georgetown, so his productivity advice is often spliced with mini-lectures about AI.

Then you’ll probably enjoy this as well, because it also made me laugh.

I use AI quite a bit for helping lesson prep and other things.

It’s interesting how smart and yet incredibly stupid it can be at the same time. It can keep track of some things, yet has absolutely no common sense about others.

I have some Taiwanese students and others who are Japanese. It keeps track of some information about them, but then suddenly translates into the wrong language for a particular student.

It’s really good at simplifying news articles for lower-level students. I wouldn’t do that for individual students myself, because it takes too much work, so it’s nice that the AI can do it relatively well.

I also use it to look up information, but it doesn’t keep track of things such as my exact location. The other day, it offered to look up services near Nagoya, which is something like 1,300 km away.

If people actually check the work and don’t rely on it too much, it’s not a bad tool for some tasks.

I very rarely use AI, but after my Google-fu failed me I finally broke down and asked it for a good coffee shop where I can sit and write for two hours, near my son’s therapy place. It gave a decent recommendation, although it was a little far. I was relating this to my husband and he said, “It sounds like what you would have gotten by Googling before Google broke its search engine.”

Hmm what’s the over/under on Google breaking its own search engine to make its AI look better?

I learned something recently courtesy of my multilingual daughter.

Say “ChatGPT” with a French accent.

Like, instead of “chat” with a hard “ch” rhyming with “flat,” you say “shaht.” Then “zhey pey tey,” approximately.

Congratulations! You just said “chat, j’ai pété” which means “cat, I farted” in French.

You’re welcome.

I don’t know about that, but it does give us an excellent model for what AI will be like in a few years: once we’re habituated, it will be encouraging us to purchase things, but much less useful for everything else.

Britain’s General Electric Company (GEC) and fellow British electronics firm Plessey once formed a telecom joint venture, GPT, that had the same issue some decades ago. GPT surely made its French customers smile!

Fascinating summary of an OpenAI paper (by OpenAI) describing how hallucinations stem from both the inability to recognize when to say “I don’t know” (there being no effective measure of hallucinations to aid calibration) and the penalty on expressing partial knowledge, which spurs the model to act as though it has insight.

    1. **Claim:** Hallucinations will be eliminated by improving accuracy because a 100% accurate model never hallucinates.
       **Finding:** Accuracy will never reach 100% because, regardless of model size, search and reasoning capabilities, some real-world questions are inherently unanswerable.
    2. **Claim:** Hallucinations are inevitable.
       **Finding:** They are not, because language models can abstain when uncertain.
    3. **Claim:** Avoiding hallucinations requires a degree of intelligence which is exclusively achievable with larger models.
       **Finding:** It can be easier for a small model to know its limits. For example, when asked to answer a Māori question, a small model which knows no Māori can simply say “I don’t know,” whereas a model that knows some Māori has to determine its confidence. As discussed in the paper, being “calibrated” requires much less computation than being accurate.
    4. **Claim:** Hallucinations are a mysterious glitch in modern language models.
       **Finding:** We understand the statistical mechanisms through which hallucinations arise and are rewarded in evaluations.
    5. **Claim:** To measure hallucinations, we just need a good hallucination eval.
       **Finding:** Hallucination evals have been published. However, a good hallucination eval has little effect against hundreds of traditional accuracy-based evals that penalize humility and reward guessing. Instead, all of the primary eval metrics need to be reworked to reward expressions of uncertainty (see the sketch after this list).
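To make point 5 concrete, here’s a minimal sketch of the expected-score arithmetic. It’s mine, not from the paper, and the 30% confidence figure and the −1 penalty are illustrative assumptions: under a plain accuracy metric, a shaky guess always outscores “I don’t know,” while a metric that penalizes confident wrong answers flips that incentive.

```python
# Minimal sketch: why accuracy-only evals reward guessing over honesty.
# Numbers are illustrative assumptions, not taken from the OpenAI paper.

def expected_score(p_correct: float, abstain: bool, wrong_penalty: float = 0.0) -> float:
    """Expected score on one question.

    p_correct:     chance the model's best guess is right
    abstain:       answer "I don't know" instead of guessing
    wrong_penalty: points deducted for a wrong answer (0 under plain accuracy)
    """
    if abstain:
        return 0.0  # "I don't know" earns nothing under either scheme
    return p_correct - (1.0 - p_correct) * wrong_penalty

p = 0.30  # the model is only 30% sure of its best guess

# Plain accuracy scoring: guessing (0.30) beats abstaining (0.00), so humility loses.
print(expected_score(p, abstain=False), expected_score(p, abstain=True))

# Penalize confident errors (lose 1 point per wrong answer): guessing now scores
# -0.40, so abstaining (0.00) is the better move unless confidence exceeds 50%.
print(expected_score(p, abstain=False, wrong_penalty=1.0),
      expected_score(p, abstain=True, wrong_penalty=1.0))
```

That flipped incentive is the paper’s argument for reworking the primary eval metrics rather than just adding one more hallucination benchmark.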

Why language models hallucinate | OpenAI

No comment required.

I asked ChatGPT “Does ChatGPT mean “cat, I farted” in French?”

Nope — ChatGPT doesn’t mean anything in French.

  • Chat = “cat” in French.
  • GPT isn’t French at all — it’s an English abbreviation for Generative Pre-trained Transformer (the AI model type).

So “ChatGPT” is literally “Chat (cat)” + “GPT (English acronym).” It’s a coincidence that “chat” looks like the French word for cat.

It definitely does not mean “cat, I farted” — though I get why it sounds a bit like it could (since “j’ai pété” in French means “I farted”).

:backhand_index_pointing_right: Put together:

  • Chat, j’ai pété = “Cat, I farted.”
  • ChatGPT ≠ that.

Would you like me to show you a few other funny “false friend” phrases like this where acronyms or names accidentally sound silly in another language?

I use ChatGPT for some things concerning teaching and it somewhat keeps track of my students.

However, it lacks common sense and assumes I’m teaching one particular student all the time.

That’s just ridiculous. ChatGPT is no more responsible for failed marriages than any other peril of modern life, like, for instance, “reading a book” which happens to provoke new insights. IOW, any marriage that fails for any of these frivolous reasons was obviously already doomed.

Maggie Harrison Dupré appears to be some kind of a nut specializing in bashing AI via misleading innuendoes. For example, did you know that (according to another article by Dupré subtitled “Horror Beyond Words”) ChatGPT drove a man into a state of psychosis, causing him to kill his mother and then himself? :roll_eyes: I mean, I’m not claiming that this didn’t happen, but maybe something was just a little bit wrong there to begin with, and attributing it to AI is just the worst kind of grossly irresponsible shock “journalism”?

AI researchers try to train LLM to not be deceptive and sneaky.

LLM learns instead to be more deceptive and sneaky, to avoid being detected being deceptive and sneaky.

AI is definitely growing. Like a sociopathic child.