Why AI-generated summaries are not to be trusted

Parse this AI statement, served up at the top of my Google results (it also has nothing to do with the question I posed, which was “European cities at the same latitude as Portland, Oregon”):

Although cities can be on the same latitude, they don’t always have similar weather patterns. For example, places closer to the equator tend to have warmer temperatures than places closer to the poles because they receive more direct sunlight.

Well, it’s basing its answer on the assumption that we can move latitude around, but the sad fact of the matter is that we don’t yet have the technology to do that.

Sure we do. It’s called “erasers”.

But yes, AI-generated summaries are not to be trusted. Nor is anyone who tells you that they are.

What I find weird is that it first says that cities at the same latitude can have different weather patterns, and then, to support that, explains how being at different latitudes can result in different weather patterns. Both things it says are true, but they’re not related to each other the way the response implies.

ETA: I don’t find it weird that the AI is an idiot, or that it writes in an exceptionally confused manner. I find the AI response itself weird, and indicative of the AI being an idiot.

Ever since AI Summaries started popping up at the top of Google search results, I have been totally ignoring them and scrolling down to the “real” results. I don’t pay the slightest attention to them, because I was quite sure I’d see stuff like what’s in the OP.

I asked ChatGPT if AI-generated summaries should be trusted. It gave me a long answer, ending with

In summary, while AI-generated summaries can be a helpful starting point, they should be used with caution and verified for important or complex information.

That sounds reasonable, but since it is generated by an AI, I don’t know if it should be trusted.

A whole bunch of websites have started creating AI summaries. I can’t stand them. They are always either obvious, or stupid.

We use Zoom for online meetings and there’s an option for a summary of the meeting that’s created using AI. It’s usually worth a laugh.

In my professional life, I’m beginning to encounter business documents written by AI.

  1. They are always VERY obvious, and
  2. Are sometimes insanely incorrect.

Several times I’ve had folks whose first language wasn’t English give AI a shot as a way of helping them with that disadvantage in writing business documents and correspondence. I’ve had to advise them really strenuously that their combination of English and an actual brain is ten bazillion times superior to AI.

Not sure why that surprises you. These LLM AI models aren’t really intelligent in the way we usually think of intelligence.

Basically they’re an outgrowth of the same technology that lets a visual AI model be trained on a whole lot of pictures of clouds and then identify clouds, or be trained on a bunch of data that shows patterns and then predict things fairly accurately. The big difference is that they’re specifically engineered to deal with language data: take a prompt in, parse it, and produce output in sensible, correct, and intelligible language.

What they don’t have is the ability to evaluate which of the answers that fit the prompt is better or more valid. They’ll find an answer that satisfies the prompt (however they do that) and spit the response back in a very coherent fashion. Nor do they generally draw conclusions about data elements: if there’s a relationship between things or a conclusion drawn in the data they’re trained on, they will return that, but they won’t actually draw those conclusions on their own.
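
To make that concrete, here’s a toy sketch of that “pick a plausible next word” step. The candidate words and their scores below are completely made up for illustration; a real model computes them from billions of learned weights, but the selection step works on the same principle: probability, not validity.

```python
# Toy sketch of next-token selection: convert scores to probabilities
# and sample. Nothing here checks whether the result is true or relevant.
import math
import random

def softmax(scores):
    """Turn raw scores into a probability distribution."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical candidates for the word after "cities at the same latitude..."
# -- both the words and the scores are invented for this example.
candidates = ["have", "don't", "sometimes", "quarks"]
scores = [2.5, 2.1, 1.0, -3.0]

probs = softmax(scores)
print(random.choices(candidates, weights=probs, k=1)[0])
```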

They’re only as good as what you train them with. If you took an LLM and fed it nothing but MAGA and right-wing stuff and asked it political questions, it would spit back coherent, grammatically correct, and well-formed language, but the content would be that same MAGA stuff you fed it. And if you fed it both sides, it would likely spit back both viewpoints or somehow combine them. It doesn’t have an opinion or any way to judge between the two.
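
You can see the “only as good as the training data” point with something as dumb as a word-level Markov chain. It’s nowhere near an LLM, but it fails in the same direction: feed it a one-sided corpus (invented here) and everything it produces is recombined fragments of that corpus, with no way to judge any of it.

```python
# Toy Markov chain: learns which word follows which in the training text,
# then generates by walking those learned pairs. It can only ever echo
# patterns that were in the corpus it was fed.
import random
from collections import defaultdict

def train(corpus):
    """Map each word to the list of words that followed it."""
    table = defaultdict(list)
    words = corpus.split()
    for a, b in zip(words, words[1:]):
        table[a].append(b)
    return table

def generate(table, start, length=10):
    """Walk the table, picking each next word from what was actually seen."""
    out = [start]
    for _ in range(length):
        followers = table.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

# A deliberately one-sided, made-up corpus.
corpus = "the election was stolen and the media lies and the election was rigged"
print(generate(train(corpus), "the"))
```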

I suppose you could come up with a different sort of AI to actually do that, and then integrate that, but from what I understand, we’re not there yet.

That’s why when you ask it something like “European cities at the same latitude as Portland, Oregon”, it’s going to parse that into whatever it comes up with for European cities, Portland, and the same latitude.

And what you got back was entirely sensible in that it’s well constructed, and each part makes sense: cities at the same latitude may not have the same weather, and equatorial areas get more direct sunlight and are hotter than more northerly ones.

But the AI doesn’t really understand the relationship between the two. And really, why would it? It’s not really set up to do that; it’s just spitting back facts about latitudes and weather.

We do too, through Google Meet, and I dread the day when those summaries become admissible in court. On the order of “Briny_Deep then asked whether latitudes were different than longitudes” when in fact I didn’t say a single word.

Even just typing up what they actually want to say, in their native language, and running it through Google Translate will probably do a decent enough job that you’ll know what they’re trying to say.

Hm. I’ve seen this thread before, but for some reason, this time I misread the title as “Why AI-generated submarines are not to be trusted”. And I was really, really hoping that the submarines in question were not boomers.

Well, it’s quite possible that an AI-generated submarine would be more trustworthy than an OceanGate-generated one.

In fairness, the OceanGate submersible did 50% of its job really well.

After all, it got to the right latitude!

In my head, I think about this issue as being analogous to early Wikipedia.

I just did a search about amethyst fading in exposure to sunlight. The AI overview told me this:

Yes, amethyst can fade in sunlight, especially when exposed to intense light for a long time. Amethyst is a type of quartz, and quartz stones can lose their color over time when exposed to sunlight. This is due to particle physics, which occurs when a photon from sunlight hits an atom. If the atom is color-charged, the photon will enter the atom and push out the color-charged particles, or quarks.

And I found where the AI got that crap:

https://www.angara.com/blog/can-amethyst-lose-its-color/

If the atom is color-charged, on the other hand, the photon will enter the atom and push out the color-charged particles (also known as a quark). When this process is experienced repeatedly, the atom eventually starts to run out of quarks which leads to the fading of the substance as a whole.

Isn’t it something that, after developing the most successful information-search tool for the web, they proceed to help sabotage it themselves.