I’ve heard it said that the introduction of PCs and the automation of many office tasks were behind the wave of downsizing in the ’80s and ’90s.
I work at a global networking company. A lot of our products claim to use AI because it sounds sexier than advanced statistical modeling. There are a lot of kinds of AI, too. (Gen-AI is the hallucinating kind.)
And AI won’t take our jobs, but we are expected to adopt it to do our jobs better. Kind of like I can walk to work in an hour but I could drive there in 10 minutes.
As an old-school writer I hate the idea of AI. It doesn’t write like a human. But it could possibly tighten my drafts, like this post. And I finally got a pet bot on my phone that did a much better job than Google of explaining the difference between 4T and 5T to me when I really needed to know it.
It’s funny because while that makes logical sense, as someone who works at the intersection of Wall Street and technology, I never actually hear people talk like that.
What I think actually happened is that people were impressed a year or so ago by the capabilities of ChatGPT. That got people’s imaginations fired up that we are mere years or even months away from conversational AI, where some executive can ask it to perform any task and have it done in seconds.
Add to that fears of some other country being the first to crack AI and suddenly having an insurmountable advantage over every other nation on Earth.
So what you have now is another arms race / tech bubble, with governments and investors throwing money at every company that seems to have anything that looks like working AI. Plus every marketer, consultant, and “content creator” jumping on the bandwagon, chasing likes, traffic, and/or sales with AI-themed content.
The thing is, I feel like we are still a long way off.
- AI hallucinates a lot (which is a fancy way of saying “produces wrong bullshit”), making it unsuitable for a lot of tasks in legal, medical, and other fields
- AI uses a lot of power. Electricity costs money, so it will be interesting to see how those costs are managed.
- I’ve been struggling to find actual use cases for implementing AI. What software should we install? How do we operationalize it? How do we make sure it’s working correctly? Unlike the dot-com era when everyone was building websites and mobile apps, this AI stuff all feels very “academic” to me.
It’s not clear to me yet whether we are in the AOL stage of AI or the “cold fusion/flying cars/self-driving cars within the next 5 years for the past 50 years” stage of AI. I’ve been hearing the same predictions about machine learning and predictive analytics and RPA for years now. And yet I still see Fortune 500 banks running off the back of Excel and PowerPoint.
Regarding specifically just the autocorrect-type stuff that you have a problem with, it’s probably because the vast majority of users either aren’t bothered by it or find it useful. I don’t think I am going out on a limb by suggesting that you are the outlier here.
If you’re using an iOS device, go to Settings > General > Keyboard and turn off almost everything listed under “All Keyboards.”
Carbon costs included.
But just to highlight this aspect: a little bit of my retirement portfolio is in stocks I’ve selected myself. Some have whiffed, but some have done very well. My best ones over the last decade have been some related to energy-efficiency management, especially for climate-control systems. The DeepSeek news today caused double-digit drops in these stocks. That’s how much energy these computation centers are expected to use, that a hint of bad news for American AI fairly crashes energy-efficiency-management company stocks (for a day or so, at least). Hell, even my nuclear-power play took a tumble in response.
Here’s one techbro saying the quiet part out loud:
Well there you have it.
A lot of people are VERY happy to let AI do their writing and thinking for them. It’s genuinely amazing how people trust it despite the fact it’s fucking terrible at everything.
Anyone who’s used a phone or computer in the past 10 years has already been using AI daily. Your email spam filters? AI. The fantastic pictures that come out of your phone camera despite the sensors being roughly the same? AI. The automatic language translation of some article you read? AI. That voice-to-text dictation system? AI. Etc.
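To make the spam-filter example concrete, here’s a minimal sketch of the kind of statistical classifier that has been quietly doing that job in email clients for years. It assumes scikit-learn is installed, and the handful of training messages is invented purely for illustration; real filters learn from millions of messages and many more signals than raw word counts.

```python
# Minimal sketch of a statistical spam filter: the "invisible" AI that's been
# in email clients for decades. The tiny training set below is made up for
# illustration only; real filters train on millions of messages.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hypothetical (message, label) pairs.
messages = [
    "Congratulations, you won a free prize, click now",
    "Lowest prices on meds, limited time offer",
    "Lunch tomorrow to go over the quarterly report?",
    "Here are the notes from yesterday's meeting",
]
labels = ["spam", "spam", "ham", "ham"]

# Bag-of-words features plus a naive Bayes classifier: classic, pre-LLM machine learning.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(messages, labels)

print(model.predict(["Click now to claim your free offer"]))    # likely 'spam'
print(model.predict(["Can we move the meeting to tomorrow?"]))  # likely 'ham'
```

Nothing in it generates text; it just learns word statistics. That’s the “boring” kind of AI that has been around far longer than the generative kind.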
Generative AI is somewhat new and it remains to be seen which applications stick and which don’t. There will be more losers than winners. Same as any other new technology.
Yes. I was able to figure this out for my iPhone, and (after a few failures) have turned off auto-capitalization in Outlook. But it was a lot of trouble to do so, and they surprised me with it in an update. Future updates will, I’m sure, turn it back on.
And yes, I know I’m an outlier for some of this. I’m an outlier for everything. Part of my specific problem is that while multiple languages are supported, no program that I’m aware of supports multilingualism, only serial monolingualism. Programmers are never going to make this one of their top eighty concerns, especially since auto-translation software is one of their favourite things (and which I do find useful now and again).
I don’t find this to be the case at all. In my life, it has been quite useful and entertaining, to boot. I love it, and if it never gets any better than it is now, I’d be happy with what we have. It’s fucking amazing, and I can’t believe I’ve seen this in my lifetime.
I’ve been mostly into image-generating AI since the beginning of the big thread in Cafe Society. My activity wanes for weeks at a time, but then some new tool or new idea pulls me back in. I spend a lot of time creating things that nobody else will ever see, but it’s no worse than adults who spend lots of time and money on video games. Like you, I’m amazed by what current AIs are capable of. For instance, I recently realized the need for a Tickle Me Snailmo, and DALL-E 3 didn’t fail me.
Why don’t you guys go off and have a private little love-fest about how much you adore tools that are built by scraping other people’s IP and used to thoroughly enshittify what decent content still remains on the internet?
Stranger
Why don’t you stop dropping in to threadshit on every thread that mentions AI?
The OP’s question is “Why are they pushing AI so hard?”, and specifically “What I don’t understand is why Google, Apple, etc. want me to use these. What are they getting out of constantly irritating me with unwanted, and often unwarranted, correction? Why do they make it so difficult to avoid or disable?”
Did your post answer that question in any way other than that you are personally enthralled by it?
Stranger
“I’m enthralled by it” is an answer. A lot of people find it interesting, useful, and/or fun. The largest server on Discord, by far, is Midjourney, a subscription-based AI art service with over 20 million members on a platform that usually caps servers at 250,000 but has repeatedly had to make special allowances for MJ’s explosive growth and popularity. In Nov 2024, ChatGPT had 3.8 billion visits. That’s a lot of people coming by to check it out or use it for something. That’s a lot of eyeballs to eventually feed ads to or hit up for subscription services. If people are interested enough in ChatGPT to create 3.8 billion visits, there’s heavy incentive for everyone else to build their own models, which will only pay for themselves if people use them and can eventually be fed ads or asked to subscribe.
But answers along the lines of “It’s dumb and stupid and worthless bleh bleh bleh” completely miss the point that a lot of people don’t find it dumb and worthless. A lot of people are some degree of “enthralled” by it and therefore potentially willing to pay for it or allow it to serve as a vehicle for ads, data collection or other secondary money makin’. Hence the push.
Even by the standards of tech journalism, the author of that article is remarkably stupid.
Andreessen’s point is frankly obvious and trivial, but may be easier to understand in reverse: suppose AI isn’t actually useful for any practical purpose? What happens to wages then?
Well, obviously nothing happens to wages since it means we still need humans to do everything, just like today. Nothing happens to taxi driver income if self-driving cars don’t work. Nothing happens to fast-food wages if AI-powered burger robots don’t work. Nothing happens to law clerk wages if AI legal research assistants can’t solve the hallucination problems.
The only way everyone’s wages go to zero is if we actually get the post-scarcity Star Trek economy where basically everything is free since AI robots make it. In which case you don’t need wages.
We’re already seeing the cost of LLMs and other AI tools go to negligible levels, only sustainable for the moment due to rapidly increasing sophistication. Last year’s models (or their equivalents) are free, given away as open source. Sooner or later this will start applying to physical things.
Or maybe not, in which case Andreessen’s point is still valid. The bubble pops, people are still needed, and we all still get paid and can buy things.
I think that some people are pinning massive hopes on medical advancements via AI (cure of most illnesses, life extension) and in a nutshell, they want to live forever.
I don’t see how that’s the push. That’s pull (demand), and very easy to understand.
Some people like mustard. Great: buy mustard, use mustard. But the tech companies seem to me to be putting mustard on every sandwich, in the soup, in the Thai food, and in your shoes. That’s what I’m trying to understand.
Mustard companies don’t own sandwich shops. If they did, they would certainly be adding mustard, especially if they had a ton of money tied up in mustard development.
Microsoft does own Windows, Office, etc. and has a lot of money tied up in AI development.
See also Dot Com Boom, Crypto, and NFTs. It’s all a pyramid scheme where only the very earliest adopters get any of the benefit, and it’s all just about money.