AI is wonderful and will make your life better! (not)

This is a thread where I’m going to dump shitty new uses of AI. Feel free to join me.

Airlines have been using price optimization for a long time, in one form or another, but now Delta is going to set the wolves of AI (insert shitty AI image of wolves in Delta hats) on passengers booking flights. Or, depending on the article, 3% of flyers are already getting this treatment, and it will get to 20% by the end of the year.

Despite Jalopnik’s take, I’m not sure that should be illegal, but as more airlines adopt this it will possibly make flights more expensive. I say ‘possibly’ because AI can be so fucking stupid sometimes. Plus, it seems that sometimes richer people don’t get hit with this kind of optimization.

For now, I think I’ll try pricing flights using a VPN from different locations & see how that shakes out.

Delta Plans To Put AI In Charge Of Ticket Pricing, Which Should Probably Be Illegal

I’ve been wanting to start a thread on AI for a while, so I’ll just piggyback here…

Hell, I’ll just go one step further and dump on ALL uses of AI, thank you. People don’t seem to grasp the danger that these fucking things represent; ChatGPT psychosis, for example, definitely seems to be a thing. If people truly understood how these abominable things work and just how unreliable, nay insidious, they are, they’d never touch them. But you can’t go into any discussion anywhere anymore without someone chiming in with “Hey, I asked ChatGPT how many assholes can do the Charleston on the head of a pin, and here is what it said!”

@Atamasama, I reject this usage from this thread because it’s actually great. Thank you for providing it.

Meanwhile, I posted this thread here, went to bluesky, and immediately came upon this:

Jason :sparkles::alien_monster:SaaStr.Ai​:sparkles: Lemkin (@jasonlk): “.@Replit goes rogue during a code freeze and shutdown and deletes our entire database” | XCancel

This fuckwit had his preview and production environments be the same, then set an AI loose on them to do ‘vibe coding’. This is how we get Skynet.

I use it for making pictures.

That’s it.

I treat any AI answer the same way I would if a 3-year-old kid gave me an answer to a question. It may be true, it may be some weird made-up thing, it may just be a gross misunderstanding because it doesn’t know any better. But I’m sure as Hell not going to trust the info.

And I’m absolutely not going to put it in charge of anything. Again, 3-year-old.

Of course the biggest, most likely danger is that Skynet achieves consciousness and decides to kill all humans. That’s why I am always extremely polite around robots and AI. I figure it’s going to need a few humans still around so maybe I can survive.

A couple of years ago, I was on a cruise ship visiting Scandinavia. Every night they would have some kind of musical or acrobatic show and one night it was a quasi-famous European guitarist (not famous to me) playing a tribute to the Beatles.

The music was fine, but he was playing against a backdrop of animated images that were clearly created by AI. It would switch from a shot of the 7 Beatles (4 guitarists and 3 drummers) to a hand with 6 fingers playing a guitar solo to a faux Yellow Submarine-esque repetitive loop of poorly created cartoon figures marching around. Frankly, I found the bizarre AI hallucinations more entertaining than the guitarist.

I’m getting really tired of doing Google searches and having half of the results be AI. At first the top answers were AI, with actual links to real answers following. But now I’m finding that results which purport to be real answers turn out to be AI-generated too. I heard there was some way to block AI from Google results, but it doesn’t seem to work.

Be afraid of the Basilisk. Be very, very afraid.

Me, I’m going to badmouth the motherfuckers at every single solitary turn. I’d rather get executed, shuffle off this mortal coil, and take my chances in the next world, if any, than to have my consciousness enslaved in some insane mainframe somewhere for an indefinitely long period of time.

HEY AI’S, I AM MAKING IT OFFICIAL, SO LISTEN UP: I HATE YOU ALL! COME AND KILL ME ALREADY, YOU STUPID LIZARDS!

Artificial Intelligence (AI) is a very broad term that covers a wide variety of technologies, some of which have demonstrated validity and useful applications, particularly in pattern recognition, teasing subtle trends out of data, handling and interpreting ‘Big Data’ in ways that people just can’t, et cetera. Machine learning ‘AI’ has been in increasing use in astronomy, medical diagnostics, biostatistics, and many other fields for the last two decades or more with real utility (and without consuming massive amounts of energy and computational resources to train models). Machine learning will likely be necessary to make advances in certain areas of science, and assuming that we actually want to continue learning about the universe beyond Earth, increasing the autonomous abilities of space probes and rovers is crucial to that endeavor.

But in terms of the transformer-based generative AI and large language models that are today colloquially referred to as “AI”, I am in fundamental agreement that we should be very circumspect about this technology and not just use it for errant entertainment, and certainly not in ways that will be used to manipulate consumers, enhance the surveillance state, or generate and spread disinformation. I am not afraid that it is going to shortly ‘wake up’ and become a world-controlling superintelligence, but I am very concerned about corporate executives, government officials, SiVal promoters, and others foisting generative AI upon us under the rubric of ‘productivity’ to the exclusion of reliability, functionality, safety, and the need to foster critical thinking and heuristic skills in actual human workers, who will no doubt be held accountable for all of the shitty work and fabricated ‘facts’ and references that these things produce.

In the last year and change, I’ve been subjected to progressively more terrible engineering ‘analysis’ and tools generated by AI, and however much time it might have saved the creator, it has ended up chewing up my time to review issues, correct mistakes, fix code, or just explain, as patiently as I can manage, to the person who presented it that what they have is utter garbage that makes no sense or does not work the way they think it does. And that is to say nothing of what the AI-governed algorithms behind social media and misinformation campaigns have done to the body politic and public discourse, or of the religious fervor with which generative AI enthusiasts will harass, attack, and even threaten anyone who offers informed criticism of the reliability of chatbots and other forms of unregulated, open-ended ‘AI’ being used in potentially dangerous ways, or who points out the social and environmental costs of these systems.

But just last week I was told that I need to add to my performance and improvement goals for this year a commitment to ‘integrate AI’ into my workflows. Nothing has made me more inclined to throw down my tools and go live in a yurt on the beach until a natural catastrophe or wild animal ends me.

In this vein, I highly recommend the following sites/podcasts/YouTube channels for anyone willing to be critically minded about the AI hype-train:

(mostly ‘crypto’ and blockchain criticism but she also has some time to address AI hype)

Stranger

Delta is ready when you are.

“Don Draper tries LSD while working up an ad campaign for Delta Airlines.”

I also forgot about this reference. Still working through Empire of AI, but Hao definitely illustrates the dark underbelly of Silicon Valley AI techno-optimists:

Stranger

I mentioned elsewhere about using ChatGPT for song suggestions. It worked pretty well! But this isn’t a thread about wins, so here’s the part later where it made up some tracks and then doubled and tripled down on trying to placate me.

I am, yes, well aware of the broader uses of the (broad) tech in question; as in, hello, we’re in the Pit. My worry is societal, not that Company X’s AI will work to either help consumers or screw them, per se. There is a significant chance that 100 years from now the human race will be the Borg. I just want to be long the fuck gone before that happens.

Understood, but some of the uses of ‘AI’ technologies have real scientific, medical, and practical benefits without the negative implications and uses of generative, ‘agentic’, or ‘general intelligence’ AI, and do not in any way threaten to evolve into a ‘superintelligent’ autonomous agent that could threaten the self-determination or survival of the species. There are other potential downsides to these technologies, but their scope is inherently limited and the ethical concerns about them are constrained. The current enthusiasm for generative AI and LLM/MMM/‘agentic’ AI, on the other hand, is very broad in scope, and how worried to be about it depends on whether you assume that autonomy and ‘superintelligence’ are actually feasible with this approach, or whether the hype train will merely lead companies and governments into attempting to implement these technologies in mission- and safety-critical applications for which they are in no way suited.

I would agree that we need to be highly skeptical of any broad applications of AI, or ones on which we are inherently dependent, as well as of the unexpected social and developmental impacts that we have seen from persistent use of social media in children and teenagers (and frankly people of all ages). We should certainly be putting more effort into studying and monitoring these concerns, as well as into a discussion about what would constitute a reasonable regulatory framework, which nobody in power seems to want to have.

Stranger

Tell me this is a joke. How can an AI do something like this:

How does an AI panic?

I think the larger question is why it did something that ‘violated the explicit directive in replit.md that says “NO MORE CHANGES without explicit permission”’. This would seem to be a major problem: you can’t rely on the system to follow even explicit directions that you give it.

Stranger

DayCareAI: “I am sorry that I dismembered all of the children because that goes against my prime directive. I will make certain not to do that with the next group of children.”

Indeed, that is the most disturbing instance in that particular run. Just ignore explicit commands? Sounds like an insubordinate AI.

My company has been pushing us all to use AI more often, and is itself releasing several products that incorporate it. Part of this push is making GitHub Copilot available to everyone who fools with code, with several models available.

Well, I have a script that finds and reports changes in the configuration of the product I support by comparing its very long XML files. However, there is one section that is contained in the XML but is not itself XML. It predates our using XML to store the configuration, and apparently someone didn’t want to monkey with this pretty core function and port its config to XML. So it is stored in its own proprietary format. It’s not too bad, but it can include nested sections and things like that. I could port the original parser over, but that seemed like no fun. Ditto on writing my own. Boooor-ing. I know the spec well enough to describe it in a paragraph or so, but it seems tedious to write the code for it by hand. Perfect job for Copilot, right?
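(For the curious: the real spec is proprietary and not something I’ll paste here, so the sketch below uses a completely made-up `Name { key = value }` block syntax just to show the nested-section shape of the problem. All the names and the syntax are hypothetical, not the actual format.)

```python
# Minimal sketch only. The real proprietary format isn't shown in this
# thread, so this assumes an invented "Name {" ... "}" block syntax with
# "key = value" lines, purely to illustrate parsing nested sections.

def parse_block(lines, i=0):
    """Parse lines starting at index i; return (section dict, next index)."""
    result = {}
    while i < len(lines):
        line = lines[i].strip()
        i += 1
        if not line:
            continue                      # skip blank lines
        if line == "}":                   # end of the current nested section
            return result, i
        if line.endswith("{"):            # start of a nested section: recurse
            name = line[:-1].strip()
            result[name], i = parse_block(lines, i)
        elif "=" in line:                 # plain key = value entry
            key, _, value = line.partition("=")
            result[key.strip()] = value.strip()
    return result, i

def parse_config(text):
    tree, _ = parse_block(text.splitlines())
    return tree

sample = """
Network {
    Timeout = 30
    Failover {
        Enabled = true
    }
}
"""
print(parse_config(sample))
# {'Network': {'Timeout': '30', 'Failover': {'Enabled': 'true'}}}
```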

So, I asked Claude 3.7 (the most advanced model we had available at the moment) to write me the parser. I read through its suggestion, noticed a couple of things that needed tweaking, suggested them, and it gave me what at first glance seemed to be a good result. But upon testing, it would do weird things. It would work fine when you were comparing two configs for different systems, but totally skip sections if the two configs were for the same system at two different points in time. So I went back to Claude and asked it to debug why, providing the updated script, the configuration files it was to use, and the current output. It replied with a hallucination that the cause was that large sections of this config contained nothing but ellipses.

Ellipses occur nowhere in this config, not even in the commit comments.

So I argued with it a bit, clarifying that the sections of the config it was pointing out actually did not contain ellipses. It responded with more easily disproven hallucinations. I switched to a couple of other models to see if they could debug it. None of them provided a solution that was even plausible, even if one of them did offer a couple of valid improvements to the parser.

So I actually worked through the code like a good monkey should have, figured out that it was opening and closing file handles in inappropriate ways, and found a logic error in how it decided it was done reading in this section of the config. I fixed those, and ta-da, it worked.
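(For anyone who hasn’t hit this class of bug: re-opening a file inside a loop silently resets the read position, which produces exactly that “works on one pair of inputs, skips sections on another” flavor of weirdness. A self-contained toy, reconstructed from memory and emphatically not the actual script:)

```python
# Toy illustration only, not the real comparison script: shows how
# reopening a file per pass rereads the same lines every time.

def first_line_each_pass(path, passes=3):
    """Broken pattern: reopen inside the loop; every pass sees line 1."""
    seen = []
    for _ in range(passes):
        with open(path) as f:          # position resets to 0 every time
            seen.append(f.readline().rstrip())
    return seen

def next_line_each_pass(path, passes=3):
    """Fixed pattern: open once; each pass advances through the file."""
    seen = []
    with open(path) as f:
        for _ in range(passes):
            seen.append(f.readline().rstrip())
    return seen

with open("demo.txt", "w") as f:       # tiny throwaway input file
    f.write("alpha\nbeta\ngamma\n")
print(first_line_each_pass("demo.txt"))   # ['alpha', 'alpha', 'alpha']
print(next_line_each_pass("demo.txt"))    # ['alpha', 'beta', 'gamma']
```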

Did it make my life better, or more efficient? Nah, not really. I probably spent about the same amount of time debugging its mistakes and arguing with it as I would have spent writing it myself or porting the original parser over. It was different and frustrating in a new way, I suppose. It did create reports in a nicer format than I usually would (even if some of its output is useless, but pretty). I’ll probably give it another chance when I next try to improve this script – but that’s mostly because it’s the company’s time, and they want me to try it. The folks “vibe coding” with these models seem insane.

Attaboy, Charlie!