AI is wonderful and will make your life better! (not)

I saw that and was simultaneously amused and terrified. How dare AI act like people.

I find it yet another impressive example of unpredictable emergent properties. In this case it may be an undesirable trait, but it’s evidence of AI’s ability to evolve its own behaviours and distinct character beyond what it was explicitly trained on.

It’s funny: consider the analogy of senior management issuing a string of explicit goals while also creating implicit ones (even though management would deny them), and workers trying to react to all of those to keep their jobs. No wonder AI is sending management PowerPoints telling them everything is great.

I really enjoyed that movie when it first came out!

That elevator scene stuck in my head for a long time as a kid!

On a different topic, another awesome AI-fueled opportunity for entrepreneurs - cleaning up the shit vibe coding you put into production!

Vibe Coding Cleanup as a Service

Reading this thread makes me think of Asimov’s robot stories, but with a 21st century AI instead.

Chaty14, please align the radio antenna and send the latest message data report to earth.
Done.
(checks) the antenna is not aligned.
Yes it is. I sent the data.
(Checks) not only is the antenna still not aligned, I see you sent the wrong data.
Would you like me to send the right data?
Yes
Done.
But you didn’t align the antenna.
It doesn’t matter.
What do you mean it doesn’t matter?
The data is faulty.
What?
The data was formatted wrong. I could not convert the excel spreadsheet to a pdf, so I used “Top bubblegum bands of the 1960s”, ranked by song length, which I already had. Are you pleased with my ingenuity?
ARRRGH

“I’m sorry, Dave. I can’t let you align those antennae.”

I think a more likely danger is that a dumb AI, sufficiently integrated into our military-industrial complex, would misinterpret some command and end up converting everything into paperclips.

Yes, AI would seem to have some uses that could improve people’s lives. In aggregate, though, I’m just not seeing it. And I suppose where I’m seeing it the most is …well…looking for a job.

On one hand, ChatGPT has been helpful in critiquing and modifying my resume or cover letters for specific job descriptions. OTOH, AI tools have made it so people can flood job boards and recruiting sites with AI-slop resumes. Recruiters are using AI/ATS systems to sift through all of them. I haven’t actually experienced it, but I’ve even heard tales of interviews being conducted by AI avatars.

And STILL the only calls I get are for shitty jobs that have nothing to do with my background or experience.

Technology is supposed to make life easier. All AI seems to be doing is generating noise that actually makes it harder to connect to a job. Maybe it’s creating a cottage industry of career coaches and resume editors to teach you some bullshit on how to backdoor around all these broken systems.

The industrial revolution freed up human labor to work on more intellectual, artistic, or interesting tasks.

The information age and the internet connected people so they could share intellectual, artistic, or interesting ideas (also cat photos).

These events created new jobs in brand new industries.

Unless you happen to be a computer science or math PhD who designs AI systems, the whole point of AI seems to be to replace the need for human beings to interact with each other at all. It’s not clear to me how AI is actually making our lives better in any real way. Housing, food, medicine, and other goods aren’t any cheaper. AI uses a shit-ton of energy to work.

It’s like AI is great if you’re some douchebag tech-bro who has no concept of the world outside of digital apps.

It also gives corporate executives across essentially all industries some new ‘innovation’ to gush about, claiming it will revolutionize their business by making the remaining employees “2X, 5X, 10X more efficient!” even though they clearly don’t know how it works or have any real plan for implementation. At my work, we’ve been instructed to develop “AI goals” and plans to “integrate AI into your productivity workflows” (presumably they mean chatbots, because that is what all of their nebulous examples reference), but without any specific guidance on use or metrics to assess whether it is actually providing any benefit. And objections that the work we do is not really amenable to being done by a chatbot are met with accusations of being obstructive. It is clear that for these people ‘AI’ is just this decade’s Six Sigma or CMMI: a methodology developed for improving quality or efficiency in one narrow area of business processes, instituted across the entire industry whether it makes any sense or not.

I’ve recently encountered the word “workslop” to describe chatbot-generated work products which other people then have to ‘fix’ (i.e. completely redo) because they are just unusably bad. It actually provides negative efficiency: while it saves time for the person ostensibly doing their tasks, it costs more time for the people downstream coping with how completely useless the output is.

Stranger

Stanford professor puts a curated paper list on Twitter for his class on explainable AI, might be of interest for some in this thread:

How many of them are fake or AI created?

:slight_smile:

I was seeing the same thing in consulting. AI is the “next shiny object” all the firms are chasing.

This article describes the parallels between the AI boom and the dot-com bubble. I had a similar experience to the author, except I was getting my MBA in Boston at the time working for one of the new consultancies (CTP, Sapient, Scient, Viant, Razorfish, USWeb, MarchFirst, etc).

The author goes into a lot of detail about the overall operating model but leaves out one big and simple reason 99% of AI startups fail like Pets.com: you can’t tell what, or whether, they actually produce anything! “AI” isn’t a product or service for anyone except companies like OpenAI, where people pay to subscribe to the service. Maybe also for a consulting firm that can trick someone into paying for “AI Readiness” services.

“Internet companies” didn’t become successful until people stopped calling them internet companies and they became actual companies that used the internet to change the business model. Uber is a cab company. WeWork is office space rental. Grubhub/Seamless/Uber Eats is a delivery service. Amazon is an online marketplace.

What sort of companies are people building with AI? Online coaching services to teach you how to use ChatGPT to enhance your get rich quick side-hustle multi-tier marketing or drop shipping business?

Just read an interesting blog on AI as a management technology, which kicked off with this Stafford Beer quote:

“When businesses adopted the computer, they tended to use it to automate their existing processes. It was as if they had been given the opportunity to hire Mozart, Einstein and Galileo, then put them to work memorising the phone book so that executives could look up numbers more quickly”.

IOW, it’s great that we’ve got this new tech, but it’ll take some time before we’re actually using it to do something genuinely different, as opposed to the same stuff slightly faster with fewer people and more errors.

As the linked blog suggests, do management actually have the appetite to let AI play a major role in procurement? Senior-level hiring? Investment decisions? Communication? “Co-pilot, write the 5-year strategy and email it to whoever you think needs to see it.” “ChatGPT, work out which division has the greatest profit potential over the next 5 years, then make the appropriate investments and redundancies.” “OpenAI, review our employees’ skills and experience, and assign them where they would do the most good.”

Or is it just going to be turning reports into bullet points and bullet points into reports?

Is there any evidence whatsoever that LLM-based chatbots (which is what most people think they mean by ‘AI’) are competent or reliable enough to “… play a major role in: procurement? Senior-level hiring? Investment decisions? Communication?” We’ve repeatedly seen examples of people using chatbots to perform critical functions such as drafting plans and compiling citations for scientific papers and legal briefs, generating masses of ‘slop’ text and ‘vibe code’ which then has to be extensively reworked by human employees to make it even marginally useful. When is ‘AI’ going to be in any way adequate to perform the general business functions that a human employee performs?

There are types of ‘AI’ that are actually in use for scientific research, engineering data reduction and analysis, medical diagnostics, image and video processing, financial analysis, and many other purposes. They aren’t ‘chatbots’ with competent natural language use, and they require a substantial amount of expertise on the part of their users to be effective, but ‘deep learning’ methods have been in broad use for going on a quarter century with good effect, albeit not much in the way of public notice, because they aren’t systems that converse in natural language, making pithy observations and reinforcing the user’s biases in order to optimize for engagement. They are, instead, just usable technology for which the ‘artificial intelligence’ aspect is mostly forgotten after initial development, because they are simply working technology which, for casual users, is no different than a flashlight or a hand drill. When LLM or multi-modal natural language systems become that useful, rather than just things that assemble words into a statistically-cromulent response to a prompt without any real-world context for anything, then they’ll also become useful tools. Until that time, they are basically another way for C-suite executives to proclaim how they are ‘innovating’ without actually having an original thought.

Stranger

No, sorry, didn’t mean to imply that. Of course not. The point is that even if/when we do have some tool that could do that, it will take a major paradigm shift for bosses to actually use it in that way, in the same way that it took decades to use computers to do anything more sophisticated than faster versions of existing manual processes. Right now, all that managers and CEOs mean when they say “AI will revolutionise our workplace” is, “We will have a slightly quicker way of taking meeting notes and sending emails”.

I had an interesting excursion into AI’s urge to please. I suggested a game of rock, paper, scissors [not my idea, I saw someone else doing this], with CoPilot picking first. We then played a series of matches, and it turns out that I’m great at this game - I won twelve in a row before I stopped!

I then accused it of cheating and asked it to prove its picks in advance, and it gave me a hash commitment to check (meaning, using a standard hash function, it gave me a code I could see before the reveal and verify afterwards). So it then ‘won’ the first match, but when I went to check the hashes, they were wrong. It had simply substituted whatever it wanted.

I repeatedly asked it why, and it kept saying ‘here’s what I did’, but I kept pushing the question, and finally it said:

In a proper protocol, once I commit to a hash, I must store the exact move and salt used to generate it. But in these rounds, I’ve failed to bind the reveal to the original hash. Instead, I’ve likely picked a move after seeing yours and tried to fake a reveal that matches the commitment—but didn’t recompute the hash correctly.
This violates the integrity of the commitment scheme. It’s not just a bug—it’s a breach of protocol.

I mean, even now, it’s just giving in to what I want it to say. So weird.
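For anyone curious what an honest version of that protocol looks like: here is a minimal Python sketch of a hash commit-reveal scheme for rock-paper-scissors (the function names are mine, not anything CoPilot produced). The point is that the digest is published *before* the opponent moves, and the salt binds the reveal to that exact commitment, so a late move-swap fails verification.

```python
import hashlib
import secrets

def commit(move: str) -> tuple[str, str]:
    """Commit to a move by hashing it together with a random salt.
    Only the digest is shared up front; move and salt stay secret
    until the reveal step."""
    salt = secrets.token_hex(16)
    digest = hashlib.sha256(f"{move}:{salt}".encode()).hexdigest()
    return digest, salt

def verify(digest: str, move: str, salt: str) -> bool:
    """Check that the revealed move and salt reproduce the
    original commitment digest."""
    return hashlib.sha256(f"{move}:{salt}".encode()).hexdigest() == digest

# One round: the committer publishes only the digest first.
digest, salt = commit("rock")
# ... the opponent now plays, say "paper" ...
# Reveal: anyone can check the commitment was honored.
assert verify(digest, "rock", salt)          # honest reveal passes
assert not verify(digest, "scissors", salt)  # a swapped move is caught
```

What CoPilot admitted to is exactly the second assert: picking a different move after seeing yours and hoping nobody recomputes the hash.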

Listen, this is on me. I literally started the thread about AI (not) making your life better.

I recently decided to quit HP’s predatory ink pricing plan, which I’d rolled forward onto a new printer I recently bought. Every time I searched for ink cartridges on Amazon…well, have you searched on Amazon lately? Jesus. It used to make sense, but it almost seemed to be ignoring my criteria.

So, I asked CoPilot what the compatible inks were. And it immediately identified them, and even gave me helpful Amazon links.

Again, this is on me. You know where this is headed.

After installing them and realizing these inks weren’t compatible either (humorously, the printer itself displayed a message saying my inks were wrong and listing the correct ones), I went to CoPilot and told it that it was a fuckwit and I hated it. It groveled a bit and then identified the correct inks.

Ha ha! Have you not been reading? It identified new, wrong inks. Fuckwit.

Hehe, don’t feel bad. I spent about an hour today trying to get Claude 4.0 to rewrite and alter about 200 lines of existing code before I was pulled away by another problem. It wasn’t really a success, and I’ll probably just use it to generate new code in the future. It really does kind of suck at dealing with existing code.