Because they hate paying employees so much that they’re willing to put up with inferior performance in order to replace them. It’s the same way they’ll fire experienced employees and skimp on training so they can pay lower wages, even though it hurts their business. Corporations hate their employees.
An awful lot of corporate white collar work is pretty repetitive in a narrow domain. But not so repetitive and narrow that it can simply be reduced to a conventional algorithmic computer program. That seems to be the type of thing that might be well-suited to an AI trained on the specifics of tracking down lost packages or reconciling expense account submissions to corporate credit cards or whatever.
You were in IT for decades. You certainly worked through the big craze to fire most of the local software development staff and outsource all the work to India and later other countries where an ace software developer was paid a couple dollars per day, not a couple dollars per minute.
CEOs, CIOs, and VPs loved that shit. How well did it really work out? Notso hotso, IMO/IME. What was your experience?
I think the current AI craze is totally in the peak of the hype cycle and everybody has to publicly get on the bandwagon or seem like they’re dooming their company to irrelevance. Some spectacular good results and some spectacular bad results will ensue. And at some time in the future, AI will be past the hype hump and will be delivering real value to the shareholders. Sux about the freshly unemployed, but they don’t matter in a capitalist society.
More than half of the books listed were fake, according to the piece’s author, Marco Buscaglia, who admitted to using AI for help in his research but didn’t double-check what it produced. “A really stupid error on my part,” Buscaglia wrote on his Facebook page.
…
It’s the latest instance of an AI shortcut backfiring and embarrassing news organizations. Sports Illustrated was caught in 2023 listing nonexistent authors for product reviews carried on its website. The Gannett news service had to pause an experiment using AI for sports stories after errors were discovered.
The dirty secret about using LLM-based generative AI to create work products is that there is really no way to ensure their reliability in a global sense. Various post hoc filtering and verification methods have been applied to correct basic factual errors, and retrieval-augmented generation can pull factual information from a verified source. But in a general-purpose algorithm for producing interpreted content from a large corpus of training text, there isn’t any way to assure that the machine can distinguish fact from fabulation, because it doesn’t actually ‘understand’ anything at all in the cognitive sense of building conceptual models of the world. It just builds statistical models of the most probable series of tokens to generate a syntactically-cromulent and authoritative-seeming response, even if that response is total nonsense, as your post I linked to above was.
As it happens, I’ve been dealing with a contractor which has apparently been directing engineers to use generative AI to produce written analysis reports, and as with the calculation linked to above, those reports make repeated, obvious, and self-contradictory errors which would be easily caught if the author of record did some bare minimum of due diligence to sanity check the result. (And I do mean “the bare minimum”, like checking whether the answer is on the same order of magnitude as the expected result or whether the units make sense.) But they don’t, because the point of using these systems is ‘efficiency’, and if the answer is off by a few orders of magnitude, what does it matter, amirite? It’s a fucking stupid waste of time being driven by C-suite execs who literally have no idea what ‘AI’ is, how it works, whether it can be made reliable, or what they are losing when they cut their workforce of people with experience and deep technical knowledge in order to replace them with chatbots. All they know is that they saw Sam Altman or Jeff Dean or Mustafa Suleyman on CNBC saying that any company that isn’t adopting AI as fast and as extensively as possible will be ‘left behind’, as if they are going to miss the Rapture. Indeed, they speak of ‘AI’ with the fervor of religious fundamentalists, because it is literally a fucking cult, and they are trying to terrify everyone else with a ‘fear of missing out’ because they know they can’t actually entice them with a real business case that would justify the massive investment, the loss of real productivity, and the degree of reliability necessary for any kind of mission-critical task.
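For what it’s worth, the “bare minimum” sanity check described above — is the answer on the right order of magnitude? — takes only a few lines to automate. This is just an illustrative sketch (the function name and the one-order-of-magnitude tolerance are my own choices, not anything from the contractor’s reports):

```python
import math

def sanity_check(computed: float, expected: float, max_magnitude_gap: float = 1.0) -> bool:
    """Return True if `computed` is within ~max_magnitude_gap orders of
    magnitude of a ballpark `expected` value; False means 'go look again'."""
    if computed <= 0 or expected <= 0:
        # Sign/zero mismatches are their own red flag; handle trivially here.
        return computed == expected
    gap = abs(math.log10(computed) - math.log10(expected))
    return gap <= max_magnitude_gap

# A result of 9,400 against a ballpark estimate of 10,000 passes:
print(sanity_check(9_400, 10_000))       # True
# A result off by three orders of magnitude gets flagged:
print(sanity_check(9_400_000, 10_000))   # False
```

Unit consistency is the other half of that due diligence, and it’s the same idea: a one-line conversion check before the number goes into a report.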
There’s a problem with absolutist positions like “LLMs just parrot gibberish” (your statement from another thread) because such observations focus on some narrow fault(s) that may or may not even exist any longer, are not necessarily indicative of any intrinsic problems with the technology, and miss the big picture of AI’s broad utility. Of course there’s also the danger of being an overly optimistic evangelist, like many AI researchers were in the 60s. But we’re in an entirely new era now, and I think it’s the naysayers who will be proven wrong.
Here’s an example where I’d welcome your criticism of an LLM response and how this comports with your position of generative AI being useless. A knowledgeable poster on this board, who I believe is a professor of computer science, in a discussion about AI and specifically about its level of confidence, wrote a fairly detailed analysis about the inherent weaknesses of LLMs. Given the subject matter, I thought there would be a whimsical irony in having ChatGPT itself provide a response.
That response is in the next post. I have some comments I could make about it but I’ll ask you first if you consider this a useful critique, or “gibberish”.
The Dope skews markedly older than the general population and way, way older than the generation who have had the use of AI all their teen years or at least college years. New York ran an article about college students and AI. It basically estimated the use of AI to write papers and even exams at near 100%.
Questions about the trustworthiness of AI are for old fogies like us. The next generation will completely rely upon it. Much the same way that we the modern public completely rely on computerized systems for absolutely everything even while bitching about errors, glitches, outages, and lack of customer support. This is equivalent to saying that AI’s future is not at all a computer issue but a human issue.
The mere fact that the OP talked about General Questions when it’s been Factual Questions for many years is an example of how deeply so many of us are embedded in a remembered past rather than the present that outsiders encounter.
AI will take over. It will improve. It will displace jobs. It won’t end the Dope. We’ll do that ourselves by dying off. What’s the future of buggy whip factories?
I understand your point, but I don’t think it’s applicable. CEOs etc will of course do all kinds of stuff to save money and improve profits. Not all of it is necessarily bad, and some may be good or bad depending on the implementation.
But to answer your specific question, the big programming outsourcing craze was not something that ever directly affected my work. Some of my work was either as an employee or consultant for computer or software companies for whom software was their lifeblood, and they weren’t about to outsource anything that central to their core business. Other work was for organizations for which security was the top consideration, and they weren’t about to outsource anything, either.
Again, I get your point, but AI isn’t analogous to programmers in India any more than it’s analogous to call centers outsourced to India. And similarly, the fact that CEOs often do stupid things doesn’t mean that everything they do is stupid.
FWIW I asked my current AI to answer the exact same question you linked to and it had zero problem answering and took maybe 5 seconds to do it with the correct answer as verified by you and other posters there. Showed its work and everything (including converting units and using the kinematic equation for constant acceleration).
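Since the thread doesn’t reproduce the actual problem, here’s a generic illustration of the kind of computation being described — constant-acceleration kinematics with a unit conversion. The numbers are made up for the example, not taken from the linked question:

```python
# d = v0*t + (1/2)*a*t^2, with the initial speed given in km/h.
def distance_m(v0_kmh: float, a_ms2: float, t_s: float) -> float:
    v0_ms = v0_kmh * 1000 / 3600  # convert km/h to m/s
    return v0_ms * t_s + 0.5 * a_ms2 * t_s ** 2

# E.g., starting at 72 km/h (= 20 m/s), accelerating at 2 m/s^2 for 10 s:
print(distance_m(72, 2, 10))  # 300.0
```

The point is that “showing its work” here amounts to exactly these two steps: convert the units, then apply the standard kinematic equation.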
The problem is that you don’t know enough about the subject to know if it’s a reasonable answer or not. So you are very likely to propagate nonsense because it agrees with your ignorance.
@Stranger_On_A_Train, since you’ve declined to respond to my challenge (namely, what is significantly in error in the cited ChatGPT response critiquing a very technical post), let me give you my comments on it. I think it’s quite accurate, and the poster who wrote the original analysis doesn’t disagree, but does offer excuses in terms of reducing it to the corpus from which ChatGPT drew in order to produce that response. But I think the most remarkable aspect of that response comes from a seemingly passing comment in the summary.
ChatGPT states that the poster is obviously knowledgeable (as indeed he is) but criticizes him as someone whose “language seems shaped by a classical computer science or symbolic AI lens, which may lead to undervaluing the emergent capabilities seen in practice”. I find this kind of statement just stunning. It essentially identifies the poster as someone who may be undervaluing the power of new technologies like LLMs precisely because his deep knowledge of traditional computational methods may prejudice him against fully appreciating the power of novel emergent properties in leading-edge tech like very large artificial neural networks operating at absolutely unprecedented scales.
I’ve steered clear of AI all through the present boom. Prompted (heh) by the passionate discussion here and elsewhere, I took a stab at it today.
I went to ChatGPT (what version it is now, they didn’t tell me) and gave a dozen prompts over subjects I have extensive knowledge of.
Invariably, ChatGPT gave me a 2-3 page answer which addressed my question in seconds - way better than googling stuff, it would seem.
Reading through the answers, however, it felt like I had asked a guy who has listened in on discussions about every subject but remembers only bits of them, misremembers just as much, and has the facts messed up, even within a single sentence. There would be a true statement, followed by a ridiculous statement, or a completely messed-up sentence (in content, not in form), where you can see shreds of facts cut up and mixed into a bogus answer, again and again.
Further, there was zero emphasis on what’s actually (most) important, with all of the 2-3 page answers leaving out essential info, or emphasizing a minor issue just as much as a fundamental point. Like a guy who has heard things but doesn’t understand one bit of them (of course, LLMs don’t understand anything), and now tries to regurgitate what he heard.
Some answers contained views that would have been state-of-the-art in about 1985 but have been generally known to be incorrect for a generation-plus of innovators, which is quite a remarkable feat for an AI platform in 2025. Where does it get this stuff?
It didn’t take long for an actual hallucination to occur. I had heard of those, but it was still startling to see purely made-up stuff being confidently presented as fact: for a paradigm-shifting innovation in one field, where a patent was applied for and granted and the field forever changed in the 1960s, ChatGPT named and extolled a completely bogus inventor, with no mention of the actual, well-known one!
It seems to me ChatGPT in mid-summer 2025 is all but useless, even if every answer given contains plenty of snippets of truth.
I have not “declined to respond to [your] challenge”; you might have noticed that there are a few things going on in the world outside of this thread, as well as normal personal responsibilities that have commanded my time and attention vice drafting a comprehensive response to your post. I will respond in due course as time allows.
Thank you for taking the time to do this and provide your feedback. That is, however, the most negative take on ChatGPT I’ve ever seen. If I understand you correctly, you asked around a dozen questions, and every single one came back so messed up that it was pretty much useless. I’m willing to have an open mind on this matter and stand ready to be convinced that GPT isn’t as good as I think it is, but what you describe is just completely contrary to my own experience spanning hundreds of interactions with it.
But it’s very hard to comment without knowing any specifics. I freely acknowledge that GPT makes mistakes and is far from perfect, but as I said before, I generally find it right far more often than wrong, and if it’s a subject I’m not closely familiar with, I’ve verified the information with credible sources on the internet such as academic websites. Sometimes I ask GPT for cites for the claims in its response, and it provides cites which check out as legitimate sources, such as researchers with a respected publication record in relevant fields.
I’m curious – did you challenge GPT on the information you believed to be inaccurate, or ask it for cites? What did it say?
Didn’t challenge the AI, per se. Neither did I think to ask for cites - remember, this was the first time ever that I used ChatGPT or any other AI source.
One thing that struck me: when I asked about a legendary figure in a niche endeavour, ChatGPT struggled to know what to say about that name, not recognizing the connection between the name and the niche, even though both have been in the Guinness Book of Records for decades. It still tried to suggest that maybe, somewhere, sometime, there was a war hero or some such by this name (!)
When I then modified my prompt to include the niche endeavour with the name, ChatGPT replied: “Ah, now we’re getting into something interesting!” As if being relieved that now it had something to offer.
This was when ChatGPT went into hallucinations, making things up as far as what the name in the niche actually did, then making up a person altogether, as per my previous post.
The overall quality of the replies was dreadful, like lowest-quality hearsay by a layperson. Having to read pages of flowery mishmash of truths, half-truths, misunderstandings and made up stuff, then prompting the AI for multiple cites and see where it goes, would be just too much work for too little gain.
Imagine if it was a field I don’t have insider knowledge of?
Might as well google stuff, even in the current shitty googling environment.
Exactly. It reminds me of discussions on other forums I’ve seen (or it may have been the Dope), where someone expresses concern about the future for illustrator jobs, say, and there’s a wave of posts dismissing the threat, with hyperbole about 15 fingers and nonsense text. It misses the point, because what AI image generation does is already incredible, it’s being extensively utilized, and with the rate of progress, it’s highly unlikely that glitches like this are going to remain a problem for very long.
To emphasize further: I’m aware that AI is flawed today. But IME it’s already pretty great at answering open questions, hence why hundreds of millions of people have shifted voluntarily on to such platforms, and it’s unlikely that we’ve already hit the plateau.
I think that’s the worst dissing I’ve ever received on the basis of a misspeak
I don’t know if there even is any plateau in sight. GPT 4o is so much better than 3.5 that the pace of improvement seems to be increasing.
The reason I asked if you challenged it is because ChatGPT will readily admit when it’s wrong, which would be objective evidence that it had screwed up. As opposed, say, to some disconnect between what you thought you were asking and GPT’s interpretation of it.
I looked up and saw this softball over the middle of the plate that capsulized what I was saying. It’s you, me, and all of us, though. Even ChatGPT recognizes it.
Just like five-year-olds can help their grandparents use smartphones because their lives have always swum in that ocean and the grandparents need to both relearn and unlearn everything they think they know. That’s why youth will always win out.
That’s fair, and I’m genuinely interested in your feedback if and when you have the time. My apologies if my comment might have come across as arrogant, it’s just that you’re usually quite quick to respond.