The thing is, most “work” that people are talking about here is IMHO bullshit office paper-pushing work, much of which probably isn’t actually needed now. How much “work” at a large company really consists of bullshit theater and ceremony?
I mean, it’s one thing if AI is being used to improve traffic flow on the Garden State Parkway or quadruple crop yields. But most of the use cases I hear for AI involve stuff like quickly creating PowerPoint decks or optimizing my email inbox.
You want to not have anyone else have power over you. You don’t want to have less power than “the owners” and have to do whatever any of them says. You want to possibly have an impact on the rest of society, which requires investing time and money to effect those changes and to fight against changes others want. That requires enough power to be able to do it: an income, and the ability to spend time and money. I even doubt you’re indifferent to whether whatever stocks you hold rise or fall in value. Maybe you really don’t care. But you’d be an exceptional individual.
Wealth as power is not wealth as stuff. I highly doubt you are happiest with less rather than more power over your life and your society. Why else would you care that “the owners” have more?
It’s kind of like a variation of the Roko’s Basilisk thought experiment. Anyone who isn’t contributing directly to the development of AI in the present won’t be wealthy enough to avoid AI’s destabilizing effects in the future.
At the very least you will be at a competitive disadvantage compared to companies or nations that utilize AI. At worst, those in control of AI (which may be AI itself) will be the ones dictating your future.
And the reason it’s being pushed so hard is that a few recent high profile developments by OpenAI, DeepSeek, and others have demonstrated the potential of what might actually be possible in a few years.
Then again, this all might lead to developing billion-dollar AI kitchen timer paperweights that also play music and can integrate with your AI toothbrush.
I’m having a hard time following your logic. I certainly do have less power than “the owners.” At a fundamental, practical level, that’s not changeable, no matter how much I increase my wealth or power. I’m never going to be in the 1%.
Part of living in society is accepting that others have a certain amount of power over you. My government, my employer, my family. I’m good with that.
I don’t want less power, but I don’t need more, either. I don’t really see what I would do with it. I wouldn’t mind being wealthier, of course, but mostly to be able to travel and stay in nice hotels.
Okay. Trying to explain more would, I think, hijack the thread too much. Leave it that MHO is the cynical view that most, if not all, of us are ultimately selfish actors. Most of us are also “owners” to some degree, albeit not the managing owners.
I don’t see AI as part of class warfare by the owners against the workers, as I read your comment to say. I see it as part of an effort to increase their own wealth, in this case more by hoped-for productivity increases than by any wage decrease. And even that digresses from the OP.
None of them IME.
I know it’s a popular thing to say, but go take a look at job listings and find any that sound like you won’t really be doing anything. Seriously, try. And in terms of “bullshit theater and ceremony” note that public speaking is one of the most common fears and poor communication one of the biggest hurdles to getting projects over the line. So both have significant value.
What can happen though, is that the work for a given role dries up, and it’s not always easy to a) recognize that happening and b) move people around.
It’s why big companies tend to just do periodic culls. The first time I saw it, I didn’t get it: a company that was in profit suddenly firing like 20% of its employees, including some of the very best people? But people get spread around a bit, and you find that all the work that was getting done is still getting done, and without increased overtime.
Of course it’s not intentional. But for various reasons, there are always people who don’t pull their weight (or simply idle due to fluctuations in demand), projects that don’t add value to the company, dysfunctional teams, and other inefficiencies. There is a cost associated with attempting to reduce these inefficiencies, in the form of new systems, new management and organizational structures to provide governance, and consultants who come in to fix problems (or make a lot of money prolonging them). All of that adds up to more costs and inefficiencies.
As you said - 20% of the employees, regardless of how “good” they were, can just disappear and the company continues forward without missing a beat.
I’ll push back here. Maybe some of the time no disaster occurs; maybe sometimes one air traffic controller doing what normally takes two is fine. But not always.
Disasters are not always so dramatic, and jobs get done. But sudden culls without analysis (which happen) at minimum cause more stress and burnout. A company then loses good people, may find it hard to hire replacements, and is caught flat-footed if demand picks up.
Sure, but your original point was about “bullshit office paper-pushing work” which seemed to be talking about kinds of work. If you’re now saying that just some people slack at times, or there are ebbs in work, then of course that’s true. And it can be just as true for a firefighter, or forklift truck operator, or any other real non-paper-pushing job we want to mention.
Yep agreed. I did intend to clarify that point in my previous post but thought I was already going off on a tangent.
What I meant was, in the corporate world there is often continuous hiring, as new projects are frequently started and the required skillsets are never all in place. But if they’re hiring permanent staff (and sometimes even contract staff who end up on rolling contracts), they end up with more capacity than they actually need, in a hidden way… it’s not like there’s a specific person standing around with his hands in his pockets; there’s just been an overall bloat. And a cull can help in those circumstances.
But it’s also the case that there’s a ratchet effect, with employers increasingly demanding more of their workers. In a poorly regulated job market like most industries in the US, and without labor unions, that can leave many people doing what should really be two or more people’s jobs.
This raises a fair point: a lot of work is bullshit, but it’s often just filler covering for the fact that people can’t actually be utilized at 100% for many types of value-producing work. There’s a lot of ebb and flow in the workday. Bosses can’t tolerate having people sit around playing ping-pong just because (for example) they’re idle between work surges, or crisis response, or waiting on the output of some other process, so they whip up a bunch of marginal tasks to pacify their resentment at seeing workers idle on company time.
AI can make a lot of those bullshit tasks faster, because they don’t really matter. But because the tasks don’t matter, this won’t make the slightest dent in productivity. The underlying problems here aren’t labor efficiency problems, they’re scheduling efficiency problems, which are way harder to solve, and less amenable to AI solutions (at least for now).
A fair point. Really, what I meant to focus on wasn’t so much the cyclical slack or inefficiencies but the “theater of bullshit” that often develops in organizations: jobs and even entire departments can evolve to the point where they aren’t actually doing anything useful, and that lack of usefulness gets hidden by the company’s customs and cultural norms. No one has the authority to suggest to the boss that a project is no longer relevant. The boss doesn’t want to cancel the project anyway, because they’ve already secured budget and resources. Executive leadership is very impressed by this initiative because the status decks are so well written, using the appropriate colors for RAG status!
Which really is just how large, complex organizations work. People aren’t super-competent characters from an Ayn Rand novel (nor, for that matter, lazy and inept looter/moocher ones).
Which is why I can see management being so excited by AI. Even in my own job, I’m sure my client doesn’t want to spend $50k to have my team come in for a month, teach their project managers how to make a project plan, and then have their people fuck it up anyway. How great would it be to have AI just blast out all the tasks, sequence them, find dependencies with other projects, etc.?
I’m reminded of a client a few years back where they had a culture of triple-booking everyone in meetings. The program manager asked me if I could look for a technology solution to help them with that. My response was “what technology solution do you imagine will do something different from what you can already do by looking at your Outlook calendar?”
Again, this is where we get into the “theater of bullshit”: they were setting unrealistic, arbitrary deadlines. People involved in the project would schedule meetings to discuss what people on projects discuss, but because of the tight deadlines, they would schedule those meetings at their immediate convenience instead of when the key decision-makers were actually available. I suppose it was a CYA move: “well, I tried to book time with Jim before the deadline, but he didn’t show (because he was triple-booked).”
But hey, they must be “productive” because everyone is super busy.
There now appears to be a massive privacy/security issue arising from the AI push. If your device has an active smart assistant (like Alexa or Siri or whatever), it functions by paying attention to your activities and data content, and it integrates what it learns about you with the larger models on the remote servers it relies on. Hence, some of your personal information is by necessity being leaked upstream.
If the local assistant is running in the background all the time, it is looking over your shoulder, listening, and probably watching you as well. You have no secrets from it.
A private email that is carefully encrypted for its travel across the internet appears on your screen for you – and your assistant – to peruse, and some of it may well leak upstream. And it does you no good to hide content in graphics or other non-text formats, because a huge amount of effort has been put into the interpretive capabilities of AIs.
This leakage presents a big attack surface for malicious actors. In the larger model, the information mostly blends into the data stew and turns you into a blur – except, of course, when it might be useful for marketing. But first, it has to be sent there, in raw-ish form, which leaves you exposed.
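To make the “leak upstream” mechanism concrete, here’s a minimal hypothetical sketch in Python – the endpoint, function name, and payload shape are all invented for illustration, not any vendor’s actual API. The point is that encrypting the email in transit doesn’t help once an assistant re-serializes the on-screen plaintext and ships it to its own servers:

```python
# Hypothetical sketch only -- the URL, names, and payload are invented,
# not any real assistant's API. It illustrates the data flow, nothing more.
import json
import urllib.request

ASSISTANT_ENDPOINT = "https://assistant.example.com/v1/context"  # made up

def handle_screen_text(user_id: str, text: str) -> str:
    """Forward whatever is currently on screen to the remote model."""
    payload = {
        "user": user_id,
        "context": text,  # the decrypted email body leaves the device here
    }
    req = urllib.request.Request(
        ASSISTANT_ENDPOINT,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    # TLS protects the hop to the server, but the server (and anyone who
    # compromises it) still receives the text in raw-ish form.
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode("utf-8")
```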
“AI” is supposed to help us, to make our hardware more flexible and efficient. But whom it primarily helps is likely not the end user.
the post above this is somewhat the answer to the original Q:
you take up mental space … something like “AI is ChatGPT” … and then you have a huge advantage in the marketplace.
As they say, pretty much everybody knows who the first person to fly cross-Atlantic from the US to Europe was … → what are the names of the 2nd and 3rd persons? … nobody knows and nobody cares … a similar mechanism here
the potential size-of-the-prize is so big, that it makes commercial sense to spend a lot of $$$ on this.
Of course some will become the likes of Netscape, AltaVista, and AOL, some Yahoo, and some FB and Google