AI is wonderful and will make your life better! (not)

Good question. I suspect you’re right about the case where it gives you an answer and you then ask how it came up with it.

But I don’t know about the “deep thinking” mode where it tells you the steps in real time. Anthropic determined that sometimes the first step is coming up with an end point, and then it backs into logical steps leading to that. They didn’t extrapolate that to every case.

Yesterday I heard this very informative piece of technology reporting, indicating that our corporate AI overlords are here to help us… shop shop shop!

Did you know that:

“Artificial intelligence is changing holiday shopping. According to a survey from Adobe Analytics, just over a third of shoppers in the U.S. have used artificial intelligence to help with online shopping. Adobe projects that traffic from AI sources to retail sites this holiday season will increase over 500% from last year.”

… of course, it’s pretty easy to boost your analytics when you force AI tools into the experience.

The AI “expert” in this story tells us all how to turn shopping into a fun game… leverage AI so you know when to buy something at the absolute winningest low price! AI is such a benefit! It will allow us to ask for “recommendations of the top 5 TVs that will be the best price.”

Now there are all of life’s problems solved! Amazon will show you 600 drop-shipped variations of the Slap Chop, and then you can use AI to tell you which one goes best with your personal brand. And it’s all free: ethically, economically, culturally… the benefits, of course, outweigh the costs!

But don’t forget, even when you’re shilling for corporate retailers and data miners, you’ve got to remind the folks that the reason for the season is really family:

“The most meaningful gifts that I have ever gotten have come from my children, and it’s just something that they made by, you know, their own sweat on a piece of construction paper. And those are the things we love and cherish.”

… but also using AI to buy stuff.

This is a prompt enhancement technique. The first query to the LLM is ‘here’s a task, tell me how you will perform it’, and the second query is ‘here’s the task, and here’s how I want you to perform it’, with the plan from the first query pasted in. This helps keep the final LLM focused on the big picture while it’s filling in the details.

I’m sure it’s more involved now and the first query might go to a fine-tuned LLM, etc.
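Here’s a minimal sketch of that two-stage flow, assuming a hypothetical query_llm() wrapper standing in for whatever chat-completion API (or fine-tuned planning model) is actually involved:

```python
# Minimal sketch of the two-stage "plan first, then execute" prompting described above.
# query_llm() is a hypothetical stand-in for whatever chat-completion API is actually used.

def query_llm(prompt: str) -> str:
    """Hypothetical wrapper around a chat-completion call to an LLM provider."""
    raise NotImplementedError("wire this up to your provider's API")

def plan_then_execute(task: str) -> str:
    # First query: ask only for the plan, not the answer.
    plan = query_llm(
        f"Here is a task:\n{task}\n\n"
        "Don't perform it yet. Describe, step by step, how you would perform it."
    )
    # Second query: restate the task along with the plan it produced,
    # so the model keeps the big picture in view while filling in the details.
    return query_llm(
        f"Here is a task:\n{task}\n\n"
        f"Perform it, following this plan:\n{plan}"
    )
```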

I’m not particularly impressed when the answer it comes up with is wrong, which I freely acknowledge sometimes happens. But I strongly dispute this silly idea that “it’s making it up after the fact”. It’s clearly iterating a path to an answer to a problem. That it does it in real time (a new behaviour in GPT-5) refutes the baseless assumption that it does it “after the fact”. You may be thinking of older models where you ask it to explain its reasoning after the fact. The steps that are listed in real time appear to support the eventual answer.

Unfortunately I can’t capture an example because the GPT-5 reasoning steps are temporary lines of text that get overwritten by subsequent steps as it iterates toward a solution. I am, however, fairly impressed that even when its answer is wrong, if you give it additional guidance it can identify the misstep where it went wrong, and produce a revised answer. I recall this happening when asked a question about where a heavy object would land when dropped from 400 km up in a hypothetical space elevator. It was actually quite an interesting dialogue even when its first answer was wrong.

A cosmologist has something to say:

This thread is like my cheat day. I don’t usually wrestle with pigs, but every once in a while, it’s good to get it out of my system.

Quite so, and when an LLM describes its deliberative process in real time as it goes along, it is by definition not “after the fact”. That is simply how temporal sequence works. And that’s a fact, one that seems unable to penetrate the minds of neo-Luddites.

If I seem overwrought on this particular subject, it’s because I frankly resent the way that the work of many brilliant AI researchers is being dismissed in the manner of the Dreyfuses and Searles of the world who AFAIK never did anything useful or productive in their lives except wrongly and unfairly criticize the work of others. And the presumed superior knowledge of certain of my opponents is rather tiresome.

I think Marvin Minsky’s quote from back in the 60s is still relevant, when he said “when you explain, you explain away”. What he meant was that when the typical layman thinks he has some vague understanding of how a particular AI works under the covers, there’s a tendency to dismiss it as just a computing “trick”. There comes a point with greater and greater scaling of AI when that naive assumption turns out to be terribly wrong.

Oh my. Comedy gold.

I periodically speak to AI researchers in a professional capacity and my impression is that they’re much, much more exasperated by hype than by dismissiveness. They tend to, broadly, be in favor of a careful and even skeptical approach to current and future applications of the technology.

I agree. There’s been too much hype ever since the 60s, occasionally by over-enthusiastic researchers, but mostly by the media. Today, with AI having enormous commercial potential, some of the hype comes from the promoters of those commercial interests. I don’t dispute any of that. What I object to is those who underestimate its potential, particularly dumbasses who describe large-scale LLMs as “just next-token predictors” or “stochastic parrots” (or, worse, who consider the LLM model the only possible implementation of AI, which it never was and never will be) or who apparently think Marvin Minsky was a stand-up comedian.

This seems like a huge issue to me. If the tool is unreliable, why on earth would you use it? Especially when it is attempting to do something that you could as easily do for yourself?

I know that AI is useful in combing through large sets of data, and I freely admit that it has some uses. I will also admit my own bias: I only see it when my university students use it, and it inevitably produces something between a zero and a C-. Yesterday, someone asked it to write a reading response to an article. The article involved learning disabilities; the response mentioned the article’s discourse on physical disabilities, which the article did not in fact cover. Instant zero, as the assignment was to show that they had read and thought about the piece.

I just can’t see how this blunt an instrument can be useful to people.

Saw this chart on Noah Smith’s substack and, with some modifications, thought it apropos for this thread…

[modified chart, linked via Imgur]

lol. I’ve said multiple times in this thread that I use AI for coding, but I appreciate being included.

When I encounter people wanting to know about something in my domain of knowledge, especially when they are skeptical about it, you can bet I do my damnedest to bring my A-Game. You can’t even be arsed to read your own cites, and you’re complaining about how mean people are for not buying everything you claim. It is exactly this dynamic that makes me skeptical. If you stan the AI researchers so much, maybe give some thought to how you’re presenting their case.

Time for a Poll, be right back with the link…

I’ve never claimed that GPT was useful for composing essays and the like, or that it was an unassailable arbiter of truth for that purpose. It is not. I’ve never said or implied that it was.

What I believe, and which I think has been largely misinterpreted by skeptical neo-Luddites here, is that the LLM implementation in GPT – specifically, GPT-5 – is a very, very effective query engine when used with its context-retention and iterative query capacities. This isn’t just a “feature”; it’s fundamental to how it provides information and enhances human knowledge.

Can it be wrong? Of course it can. So can anyone! So what? When it provides – through conversational iterations – useful information that can be confirmed (or not), then it’s a productive tool that enhances the human condition. The information exchanged in the iterative dialogue – the assertions and negations and the search for data – helps to make us better informed. That is surely an objective benefit!


I will say that I don’t understand this argument because we use unreliable tools all the time, many times because even an unreliable tool is better than no tool at all.

Man, I can’t even make fun of this. It’s a long form essay from someone inside the California State University system. At the same time they were implementing budget cuts because of lowered attendance and burgeoning costs, they formed a partnership with OpenAI to create some kind of AI-enabled university… but without consulting faculty on how that was going to work.