AI is wonderful and will make your life better! (not)

Here are Shumer’s specific examples of AI being good enough to take people’s jobs:

I’ll tell the AI: “I want to build this app. Here’s what it should do, here’s roughly what it should look like. Figure out the user flow, the design, all of it.” And it does. It writes tens of thousands of lines of code. Then, and this is the part that would have been unthinkable a year ago, it opens the app itself. It clicks through the buttons. It tests the features. It uses the app the way a person would. If it doesn’t like how something looks or feels, it goes back and changes it, on its own. It iterates, like a developer would, fixing and refining until it’s satisfied. Only once it has decided the app meets its own standards does it come back to me and say: “It’s ready for you to test.” And when I test it, it’s usually perfect.

the managing partner at a large firm, spends hours every day using AI. He told me it’s like having a team of associates available instantly. He’s not using it because it’s a toy. He’s using it because it works. And he told me something that stuck with me: every couple of months, it gets significantly more capable for his work. He said if it stays on this trajectory, he expects it’ll be able to do most of what he does before long… and he’s a managing partner with decades of experience.

In other words, he could not come up with any specific example.

His first piece of advice to deal with the current overwhelmingly competent state of AI is to “[s]ign up for the paid version of Claude or ChatGPT.” His second piece of advice is to use them a lot and for everything.

I just can’t imagine anyone actually falling for such obvious, self-serving bullshit. YMMV, of course.

Here’s an annual state-of-coding report that has changed in flavor over the last couple of years, for obvious reasons. In the latest review, developers talk about efficiency gains from using LLMs, but also about replacing the toil of coding with the toil of reviewing bad code, plus security concerns.

Shumer is definitely an AI evangelist and, like all evangelists, should be taken with at least a grain of salt. But I don’t think he’s entirely wrong – at worst, his predictions may be premature, and he may (or may not) be wrong about the overall socioeconomic consequences, but AI is definitely going to change the way we live and work.

The problem is that the report is from survey data ending October 2025. The coding tools are already dramatically better now, in particular Claude Code and OpenAI Codex. If I had been surveyed, my answers now would be quite different from what they were in Q4 2025. The frontier models, and systems like Claude Code built on top of them, are rapidly becoming more powerful and making fewer errors. I don’t know where the ceiling will be, but the tools are already huge force multipliers.

Absolutely. GenAI might not be the right tool for writing legal briefs or crafting interesting stories…but it sure can code!

Over this last long weekend I tackled two coding projects:
Project 1: A big-screen clock for the Flipper Zero. My nephew gifted me a Flipper Zero (cool hacker’s Swiss Army knife gadget), and I wanted to write an app for it that would show a large clock I could read at night without glasses. Flipper apps are written in C, and I had no clue about their architecture or the build process. It was a 3-hour project, with ChatGPT 5.2 explaining the build, designing the project, and helping me get it all into GitHub, with me not writing one line of code, just explaining how it should look. Very cool.

Project 2: Full home lab setup of Oracle 26ai Free and Ollama LLM on separate spare desktop hardware, both installed using VMs in Proxmox on each machine.
End result: using SQL*Plus on my Mac to connect to my new database on one and run AI-based queries on “Alice in Wonderland”.
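To give a flavor of that end state, here is a hypothetical sketch: the connect string, table, and column names are all illustrative, and the similarity-query syntax follows Oracle’s documented 23ai-era vector search, which may differ in 26ai.

```sql
-- From SQL*Plus on the Mac (connect string is illustrative):
--   sqlplus alice/secret@//oracle-vm.home:1521/FREEPDB1

-- Rank pre-embedded chunks of "Alice in Wonderland" by similarity to
-- a query vector (:qv), assuming an ALICE_CHUNKS table whose VECTOR
-- column was populated via the Ollama machine's embedding endpoint.
SELECT chunk_text
FROM   alice_chunks
ORDER  BY vector_distance(embedding, :qv, COSINE)
FETCH  FIRST 5 ROWS ONLY;
```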

Both of these would have taken weeks of Stack Overflow and RTFM, with little hope of making it past certain barriers (Got your Oracle Wallet set up on your Linux box?)

Horses for courses. It felt exactly like I had a small team helping me, and I was leading the design while they solved the snarly problems. It’s not merely a case of trading coding hours for code-review hours; the code is actually written very well. A year ago I wouldn’t have said so.

My major complaint? Sometimes there was too much thrashing and I would have to call it and say “stop thrashing. let’s try a different approach”. But that happens with human developers too.

Fair enough! I’m a casual coder and haven’t tried relying on ChatGPT for end-to-end projects for a while.

I was thinking about something this morning. AI might disrupt some jobs but I don’t think smaller companies could afford it. Especially if you’re only trying to replace one or two employees. When you get into AI variations designed for specific jobs, that’s going to be expensive.

Today, I googled the actor Jacob Tremblay. Google’s AI just told me that

Overview
Jacob Tremblay is a Canadian actor born in Vancouver, British Columbia in 2006. He became an acclaimed actor in elementary school, known for his role as Jack in the movie Room. In Room, Tremblay plays a child born in captivity to a young woman played by Julia Roberts.

But Julia Roberts wasn’t in Room. Brie Larson was. AI picked up from one of its links that they were in a movie together, but that movie was Wonder.

Gemini told me:

Room (2015): His breakout role as Jack, a boy born in captivity, alongside Brie Larson. At just nine years old, he became the youngest nominee for a Screen Actors Guild Award and won a Critics’ Choice Movie Award.

I do pay for it and don’t rely on the free versions. Those using the free versions are typically a year or more behind on available models, which were more flaky.

I don’t rely on it. I use it only rarely for Excel and for writing prompts. I never use it for Google searches. But I was very surprised that the first thing that appeared at the top of the search page was obviously false AI info. Just like a couple weeks ago, when I was looking up a notorious local murder from a while back, the AI mixed up the name of the victim, the name of the killer, and the stage name of the killer.

I get told false info fairly regularly by friends who don’t understand that Google’s AI spits out mixed-up information.

Wrong use of the term “rely” on my part. To be more pedantic, I don’t use the free versions. Did you use the free version or did you use a paid version?

If you don’t know, ask Gemini which iteration you’re using.

Here’s what Gemini 3.0 told me about Tremblay:

I’m not going to source every piece of information given, but he did appear in those roles, that is his birthday, his dog is named after a Star Wars character, and those are his sisters.

Now, to your point, when I just “Googled” him, I did get this snippet:

So, who knows? I don’t rely, er, use the snippets either. Here’s what Gemini told me when I brought this up:

That snippet is a textbook example of a “hallucination” caused by data bleeding between two of Jacob Tremblay’s most famous movies.

The incorrect association with Julia Roberts likely stems from his work in the 2017 film Wonder, where she played his mother, Isabel Pullman. Because Tremblay became famous as the “breakout star of Room”, search algorithms and earlier AI models often mash these two facts together:

The Fact: Jacob Tremblay starred in Room (2015) with Brie Larson.
The Other Fact: Jacob Tremblay starred in Wonder (2017) with Julia Roberts.
The Error: The algorithm cross-contaminates these entries, leading it to claim Julia Roberts was in Room.

Why this happens in Search Snippets

  1. Semantic Proximity: In countless articles and YouTube interviews from 2016–2017, the phrases “Jacob Tremblay,” “Room,” and “Julia Roberts” appear in the same paragraph as he was being cast for Wonder.

  2. Parental Roles: In both movies, Tremblay plays a child with a deeply emotional bond with his mother. The system associates the “Mother” role in his filmography with the most famous actress he’s worked with in that capacity—Julia Roberts—rather than Brie Larson.

  3. Search Indexing Errors: If a highly-trafficked site (like a fan wiki or a poorly-written entertainment blog) makes this mistake once, Google’s “Knowledge Graph” can ingest it as a fact, which then gets repeated in the “Overview” section you saw.

I have no idea why you’re trying to school me on this. I’m simply reporting what Google’s AI search engine wrongly comes up with.

Because AI is like religion. You can never, ever question it, even in threads specifically started for questioning it. If it did something wrong, it’s because you’re just not doing it right. Flaws must be constantly hand-waved away lest someone lose the faith based on your faithless example.

Don’t forget, AI will make all your dreams come true, but it’s also going to destabilize the world economy. Who are we to question AI?

So what happens when LLMs become almost never wrong, enough so that we fall into the habit of trusting them; but occasionally they will still be SPECTACULARLY wrong, in contexts that may cost people their lives? We would not want a bridge design that almost never fails, but have it turn out that driving a blue truck with red lettering on the sides somehow causes it to collapse. That’s the sort of hidden trap AI seems to portend.

I wouldn’t be surprised if there’s already such a disaster in the making. The question is whether it saves more lives than it kills.

I finished the book I mentioned in post #691. Chapters 7-9, about 40 pages, would make an awesome movie. The AIs don’t WANT to kill off the human race, just like a chess program doesn’t WANT to win the game.

I have never used ChatGPT and never will. My brother recently said it’s great for some applications: when you want to compose a tedious email, you just give ChatGPT a few brief prompts and boom, it lays out a tidy batch of text that more or less expresses what you want to say. But for me, nah - I’ll gladly deal with the tedium of hashing out (struggling, even!) whatever it is I need to say.

Probably been discussed already in this thread for all I know, but the more you rely on ChatGPT, the more you use it, the lazier you get, and the lazier you get, the less you are exercising your brain, which is its own sort of muscle that indeed requires exercising (like composing text of any sort). The less, and less, and less you “exercise” your noggin, the more susceptible you become to things like dementia. Cognitive training comes in a plethora of forms to help combat future mental illnesses.

Also not crazy about the growing number of AI data centres drawing more and more power from the four major power grids.

Eh. I mean, different strokes and all that, but I regularly play chess and bridge and a lot of other board games, and solve a suite of puzzles every day…if that’s not enough, nothing is.