AI is wonderful and will make your life better! (not)

Good! I was just speaking more on a personal level, and I hope it was not construed in any way as a critique of your own ChatGPT use. If my post did come across like that, apologies.

Oh hey, sorry for misconstruing that, and thank you for clarifying.

I have access to Lumo via my ProtonMail subscription.

I recently tried having it make up stories.

The first was to “tell me a story from a rattlesnake’s point of view of hunting for a mouse”. The results were kind of boring.

Then, naturally, I tried “tell me a story from a rattlesnake’s point of view of hunting for a horse”. The results of this one were rather bizarre.

So I followed that with “tell me a story about fishing for elephants”. That one was very strange and unsatisfying. The “elephant” in the story was really some kind of magical water.

Then I asked for “a story written in the manner of Dr. Seuss”. It did rhyme, but it was boring.

Then I asked it to “tell me a story about a high school boy gigging for frogs”. This one wasn’t too bad, other than being pretty sappy: he did it for science, not for sport, and then gave a talk on his experiences to the city council.

I have been a bit curious about how far people working together could push it toward being wrong by creating a number of websites with vastly wrong information that support each other’s nonsense.

For example, suppose that there were a number of websites that claimed completely different accounts and scores for football, basketball, or baseball games. How many websites would it take to make AI believe that Notre Dame won the 1980 Super Bowl? Or that the Washington Warriors won the 1875 World Series?

No, AI is more like the “horseless carriage” in the late 19th century – failure-prone and ridiculed (“get a horse!”) but very, very seriously underestimated. Even today, many of the examples of AI being “stupid” are from older or inferior implementations.

This would be my view. In Q3 and entering Q4 of last year, I would occasionally use AI agents as a secondary tool which would often be helpful but also frequently would feel like I was fighting the tool to do the right thing. Somewhere around Decemberish things changed and now I would classify Claude as my primary tool, using IDEs and other clients and browsers for review and verification. Rarely, as in almost never, do I feel like I’m fighting the tool anymore, just nudging it incrementally in the direction I want. Just the sanity saved by no longer having to work with AWS directly has been more than worth it.

I mean, I don’t even know how to respond to this.

I never would have learned to program. Never. I don’t have the time, interest, or inclination. When I’ve needed programming to be done, I hired programmers. Was hiring people also not exercising my brain? Was this lazy?

Now, when I need programming done, I use Gemini, tell it what I need, test what it gives me, I tell it what to revise, rinse, wash, repeat, and reiterate until I get a finished product. Is this also not exercising my brain? Is this lazy?

I’m so confused.

Last Fall I googled “Are the Yankees in the World Series playoffs?” and Google’s AI response said, “No, the Yankees were eliminated from the 2025 World Series in Game (whatever it was) of the 2025 post season.” Even though they had a game later that same night, which they won. :roll_eyes:

At work yesterday, I asked Gemini to find me a download site for some software tools I need. All the hyperlinks it gave me were to gemini.google.com, even after I pointed it out three times. The funny part was that the text was the URL I was looking for. It’d say www.thattoolyouwerelookingfor.com, but if you clicked it, it’d go back to Gemini.

That’s an interesting experience.

It points out one of the weird idiosyncrasies of the current generation of LLM AIs: because they lack any foundational model of the subject matter they’re answering about, they propose things that are clearly ignorant of the basics.

As if, somehow, the Internet’s Domain Name System would let you reach anything on the Internet at an address of the form “www.&lt;thingname&gt;.com”.

AI responses like that make me want to shout “That’s not how it works. That’s NOT HOW IT WORKS AT ALL!”
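The mismatch described above, a link whose visible text is one URL while its target is another, comes down to the fact that an HTML anchor’s display text and its `href` attribute are completely independent. A minimal sketch using Python’s standard library (the class name and the sample markup here are made up for illustration, not taken from any real chat UI):

```python
from html.parser import HTMLParser

class LinkChecker(HTMLParser):
    """Collects (visible text, actual href) pairs from anchor tags."""
    def __init__(self):
        super().__init__()
        self.links = []     # list of (text, href) tuples
        self._href = None   # href of the anchor we're currently inside
        self._text = []     # text fragments seen inside that anchor

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append(("".join(self._text).strip(), self._href))
            self._href = None

# A link whose visible text claims one URL while the target is another --
# the same shape of mismatch a chat UI can produce when wrapping model output.
html = '<a href="https://gemini.google.com">www.example.com</a>'

parser = LinkChecker()
parser.feed(html)
text, href = parser.links[0]
print(text, "->", href)  # the visible text and the actual target disagree
```

Nothing in HTML requires the two to agree, which is also why this trick is a staple of phishing emails.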

I should have been more specific. If it’s a highly technical task that’s not only beyond one’s ken but also too much of a lift for googling (sure, I wouldn’t bother googling “how to program”, myself), then - yes, I’m sure that in those particular instances ChatGPT would be a valuable tool for overcoming such highly technical obstacles. I should have made a clearer point that if you don’t really need ChatGPT, if you’re capable of accomplishing a given compositional task - albeit with some stumbling/revising along the way - then absolutely screw ChatGPT.

You may be interested in this pre-publication of a study, “How AI Impacts Skills Learning”, released earlier this month through Cornell University. The study involved two groups of “experienced” Python coders (total N=52) who were given two simple tasks requiring the use of an unfamiliar asynchronous library. One group was given access to an AI agent and instructed to use it to complete the tasks; the other group was explicitly told not to use the AI agent. Both groups were then given a quiz designed to test the knowledge gained during the tasks. Interestingly, while the group using the AI agent showed a small but significant reduction in task completion time, it also scored significantly lower on the post-task quiz than the group that did not use the AI agent.

Current AI is a lot closer to a steam-powered aeroplane designed to fly by flapping. It’s using all the latest tech, and the people designing it are convinced that if you just get more power from the engine and flap faster, it will fly. Flight required new developments in multiple areas, in addition to the tech of the day, in order to be successful. This was also the case for ships, cars, and lots of other developments in the later industrial revolution.

The next advances cannot be brute forced, they have to be cleverly designed.

In an important sense this is exactly backwards. As I’ve frequently said, AI cognition is very different from human cognition, and it has advanced not by trying to imitate the brain but by developing entirely new approaches to driving intelligent behaviour. There is nothing in the human brain that resembles a large language model, except, in a very general sense, a neural network and a model of the world partly shaped by the intrinsic logic of language, together with, in an LLM, systematic inference behaviour that simulates human reasoning.

I have two teams.

One team has to build a car to race it.

The other team has to race a car built by others.

When I ask the question “what was the diameter of the muffler pipe?”, which team do I think will answer that question?

Missing the point spectacularly - both teams have to build the race car using a type of transmission that neither team is familiar with. One team only has a detailed manual while the other team can consult with an independent expert. The second team takes 4% less time to build the car, but which team do you want to be the pit crew for you during the race?

Makes sense to me. The more you engage with something, the better you understand it.

The current prices for AI are surely far below what it costs to provide it. The AI companies are gambling on the future, big time.

It can be argued that any realistic AI will have to be domain-specific and so expensive that, well into the future, it will likely remain cheaper to just hire real people to do the tasks rather than pay what it would actually cost for AI to do the job.

The Congressman in my former district, Colin Allred, sent an email announcing their cute dog AI that would answer all my questions.

I started by asking what geographic area his district represented, which it couldn’t answer (it told me to go to his website or email his campaign).

I then asked how football translated to politics (he’s former NFL) and it gave me a big robust answer. I then asked how his fame in lacrosse translated (he has none) and it told me I had the wrong sport. I then asked how fast he drives, and it gave a laugh emoji and said it probably didn’t want to know…and then asked “do you actually have any questions about his positions?”

I then asked about his position on Gaza, and it gave a pretty robust answer:

Colin has been clear that what’s happening in Gaza is a tragedy. He opposes what Netanyahu’s government is doing in Gaza and the West Bank, supports getting humanitarian aid in immediately, and believes the only path to lasting peace is a two-state solution — an independent, secure democratic Israel alongside an independent Palestinian state. He supports the U.S.–Israel relationship, but believes it must be bigger than any one government or moment.

I’d score that about a 9/10 overall.

I wonder if they use AI to more easily customize the response to what that particular constituent wants to hear.