This is true. It’s full of flaws, but it’s improving all the time.
In my view the biggest risk of AI is not in the tech but in its misuse. At my company it’s giving a turbo boost to the biggest bullshitters. It helps them craft even more plausible bullshit at a rate that saturates people’s ability to comprehend and consume it, so their own preferences and biases become cemented by the halo effect of the plausibility engine.
Moreover, software coding is tipping into a crisis point. Manager-types look at the $500K compensation of senior engineers and conclude they can save a shit-ton of money by getting juniors to code for a fifth of the cost. They proclaim on LinkedIn things like “If your worth is based on typing source code, your days are numbered.” But the problem is that code entry isn’t what these engineers were paid for. They were paid for their judgment: knowing what to build, ensuring it was built correctly, and understanding what happened when it broke. In fact one of the most valuable skills of a software engineer is deleting code, because code is a liability that adds risk and confusion.
As a result, some severely flawed code is getting released to production, causing breakage, outages, and security incidents. Companies are now bolting on review processes where senior engineers are supposed to catch the mistakes before release, but having a senior engineer inspect and understand AI slop is far more costly than having them write code they review and understand as they go.
Anyway, that’s a long-winded way to say that the biggest problems with AI are more about the flawed ways that humans are interacting with a plausibility engine that they believe is a truth engine, as well as misunderstanding the fundamentals of the work that they want to automate away.
So far, it’s more of an annoyance than anything else, as it mainly impacts me when searching using Google or Bing. I want SOURCES, not some (often inaccurately) summarized info gleaned from a bunch of places.
That said, I HAVE used it, and recently, to generate an image I specifically wanted. Not work-related, but a humorous picture that I may turn into mugs or T-shirts.
I thought it was going to go this way also, but it appears to be going the other way: keeping the senior engineers and not hiring the junior ones, who are less likely to be skilled enough at the start of their careers to beat an AI coder.
The Times had an article today about how AI is producing so much code that there is almost no time and not enough people to check it. The most subtle bugs are logic errors, not coding errors. An experienced programmer has learned how to anticipate all the cases. Can an AI do that? Will prompts always be complete enough to make sure the unanticipated inputs and cases get handled correctly?
Yeah, there are layoffs now, but those aren’t really from AI; they’re from companies unwinding over-hiring or dumping mistakes, like Meta cutting its failed VR program.
I wonder who is going to get skilled enough to do the analysis of a problem that senior people do today?
Different parties have different ideas about who’s going to get cut. Management is very certain it’s going to be the most expensive seniors; seniors see no more need for juniors; and everyone wonders why we have managers if anyone can generate a wall of bland, featureless business jargon.
I see some value in getting proofs-of-concept published faster; demos are always better than memos. But it’s also possible that the value will get washed out by all the noise being pumped into the ecosystem. It’s going to be in flux for a while, and it’s possible AI will help entrench some of the most useless practices and procedures that currently exist.
It’s actually pretty amazing. Yes, there is a lot of slop out there. But within a few minutes, I can take a photo and more or less animate it however I want (within certain constraints).
I also used ChatGPT to convert like a thousand file paths in seconds. Little stuff like that is pretty useful.
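For what it’s worth, that kind of bulk path rewrite is also a few lines of plain Python. A minimal sketch, assuming (hypothetically, since the post doesn’t say what kind of conversion it was) Windows-style paths being rewritten as POSIX-style:

```python
# Hypothetical example: rewrite a batch of Windows-style paths as
# forward-slash POSIX-style paths using the standard library.
from pathlib import PureWindowsPath

def to_posix(path: str) -> str:
    """Convert a single Windows-style path string to POSIX form."""
    return PureWindowsPath(path).as_posix()

paths = [r"C:\projects\app\main.py", r"C:\data\logs\run1.txt"]
converted = [to_posix(p) for p in paths]
print(converted)  # ['C:/projects/app/main.py', 'C:/data/logs/run1.txt']
```

The advantage of asking a chatbot instead is that you don’t have to remember `pathlib` exists; the disadvantage is you should still spot-check a thousand transformed paths either way.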
I’m talking about what we are seeing, not what is expected. Hiring of new CS grads is way down. I was at lunch the other day, and the husband of one of the attendees is a CS prof. It seems enrollment in CS programs has plummeted.
You’d think they would want to get rid of the expensive people, but I can tell you as a former manager that it is far easier, and better for morale, to stop hiring than to fire and rehire.
There’s been a lot of talk for 15-20 years now in my former industry about how to eliminate the junior “apprentice” workers and just keep the senior, highly experienced folks, perhaps assisted by AIs or by remote apprentices who serve a dozen seniors intermittently by datalink as needed, rather than one continuously by physical presence.
What no one has ever seemed to give the slightest thought to is where those senior experienced folks with all the insight and judgement came from. And how a system with no input from the bottom will have anyone left who’s qualified to step into the shoes of the retiring seniors as they inevitably age out.
Hearing this talk of the cratering demand for junior devs and CS folks gives me the same shivers.
That’s been going on for a while though. Part of it is the commoditization of coding that pre-dated AI, combined with the fact that the perception of coding as a “gold rush” skill caused a glut of interest in coding. Just as the biggest part of that pig is emerging from the python, we’re hitting the inevitable regression to the mean following the mad post-COVID hiring binge of 2022.
I can tell you from my experience, working for a company you know and hate, which produces a high-profile AI tool that you probably also know and hate, that almost all of the high-profile job cuts we’re making are announced as resulting from “AI efficiency.” We have to say this because the messaging is incredibly important to the profitability of our AI product line, but in reality we’re cutting tissue that was already dead or dying: products that were dead before AI, people who were marginal performers or otherwise had a target on their back before AI, and ill-advised dead-end projects taken on after the previously-mentioned 2022 hiring binge left a glut of surplus headcount that got assigned to the wrong places right before AI mania hit. That’s part of why our product sucks so much. We had pre-hired the wrong skill sets and abruptly threw them all into the AI tilt-a-whirl, with poor outcomes for all involved.
In reality AI is causing almost as much inefficiency as it’s reducing, at least for us. There are strong local optima for people who know how to use it well, but it’s mostly helping advance their personal success, and it’s not translating to end-to-end efficiencies. That could be more due to our historical & cultural corporate quirks, it might work differently in other companies, but in theory we’re supposed to be industry leaders.
So from where I sit, the tight coding/IT market right now has far more to do with regression to the mean, plus AI driving some healthy organizational shifts that shake out dead weight. It doesn’t seem like AI is outright replacing skill sets so much as causing macro changes in what we work on (in our case, AI itself), as well as simply providing an excuse to accelerate changes that were already coming due.
Sure, the gutting of Meta’s VR department isn’t the result of AI or the economy; it’s the result of Zuckerberg screwing up, and he can blame it on AI. Ditto layoffs to remedy over-hiring.
Things dried up for new grads when the Bubble collapsed, but this is different because the companies are all in good shape - for the moment. Back then, people thought they could get hired for software jobs as long as they could spell “C.” From the stats I’ve seen, this one looks different. Plus, new-grad hiring freezes are temporary, so in normal times I wouldn’t expect them to affect people entering the program so much. Sucks for the new grads, though.
My old company, another one everyone hates and for good reason, had two hiring pools, one for new grads and one for experienced. It was much easier to hire new grads, even expensive PhDs.
I’ve come to the conclusion that most corporate jobs are bullshit anyway and it’s all really a form of UBI for the highly educated posing as a sort of theater of productivity.
I guess “so what?” Zuckerberg had billions of dollars so he tried something new and it didn’t pan out. It happens. The people who went to work for him understood that and I’m sure most of them will land on their feet. I mean he can blame it on “AI” in the sense that maybe he should have pivoted Meta to AI instead of VR.
I would assume very much so.
I posted a link to the long “What were you thinking?” thread and asked ChatGPT why a specific poster was a jerk. It gave me a response with specific supporting quotes and posts. I opted to not post the results because some people seemed sensitive about AI content (particularly one poster who was being a jerk, according to ChatGPT).
Interesting. Instead of singling out one poster, I asked ChatGPT who the jerk in the thread was, and it described it as “a rotating cast.” ETA: I will admit when I asked who was ‘most obstreperous’, it did single someone out.
For context, a procedural / reporting issue was raised at our company. So for the past few weeks my team and I have been on this treasure hunt interviewing different stakeholders, building process maps, basically trying to figure out how the company actually works. Well, it got to a point where it mostly felt like trying to pull a knot out of a sweater and having the whole thing unravel, while in the meantime we can’t actually figure out who, if anyone, gives a shit enough (at least enough to allocate funds for us to keep doing this). So basically my boss just shut it down so we can work on more important stuff.
Anyhow, for fun I just tossed all the raw notes and emails into our AI tool and told it to create a McKinsey-style PowerPoint deck.
Then I asked it to help me write a prompt to create a McKinsey-style PowerPoint deck.
The results are actually pretty good. At least as a starting point.
I haven’t had much luck with the formatting though. Ah well.
Sometimes I like to fuck with the AI, just to see what it comes up with. Like “recreate this deck highlighting the key risk that our Data Privacy Officer was turned into a gorilla from eating a non-compliant banana.” It just said that was funny but not a likely regulatory risk.
Copilot (we have enterprise version for the company)
It was mostly a bunch of high level bullet points. Even after I told it to act like it didn’t want to be stuck working late on it while the rest of us went to happy hour.
Well, by telling it not to work late, the AI might have taken you literally and decided that “high-level bullet points” were the fastest way to clock out, lol.
I may work on it some more tomorrow to see if I can get it to make what I want. Really I want two things.
Take an existing PowerPoint deck and make it into a generic template I can use.
Go through all these pages of notes and scribbles and summarize it into an actual PowerPoint deck our COO can ignore.
Separately, as I mentioned, I’ve been playing around with video generation software (Kling 3.0 to be precise). Mostly fun stuff like taking an old picture of our Halloween jack-o-lantern and having it eat a table, my kids joyfully firing a Bofors 40mm antiaircraft gun on the USS Intrepid, or their Lego creations doing stuff.
Some outputs look better than others. A lot of it depends on how good the prompt is, whether I use reference images and how I use them, how complex the scene is, etc.
One thing I’ve learned in my research is that most AI models will try to put up some sort of guardrails around how violent or sexualized creations can be. Not like I’m trying to create hyper-violent porn or anything. But what I found in my conversations with ChatGPT is that they will let you create crime dramas and war movies, but not Quentin Tarantino or Saving Private Ryan levels of gore and violence.