I’ve been castigated as a cheerleader for systems like ChatGPT, and while I can’t address the most important ways that advanced AI systems like this will change the world, or whether it will be for better or worse, there’s one thing I can say with some confidence. GPT is an information aggregator, which places it a significant step above the internet itself as a source of knowledge, even if its results are currently imperfect. And as I indicated, probably in some other thread, GPT-5 Pro with deep thinking enabled has a hallucination rate on difficult medical questions of around 1.6%, which is an order of magnitude lower than previous versions.
I’m in Canada, but I shared exactly that experience in exactly that area, because I worked for many years at DEC, which pretty much dominated that area for decades. Hard not to be optimistic about the future when you’re zooming around from one corporate location to another in a Bell Jet Ranger! The spirit at DEC was that there was just no limit to the future, until things started to turn sour in the 90s.
ChatGPT isn’t “outperforming doctorates” any more than a WebMD search outperforms your local physician. It provides a statistically likely response to doctorate-level questions based on the doctorate-level data it was trained on.
When ChatGPT can come up with new doctorate-level inventions on its own, then it is outperforming doctorates.
I guess that’s what I think is the major threat: this almost magical belief that in 5-10 years, AI is going to be at a level where it can replace doctors, lawyers, computer programmers, engineers, and every other knowledge profession. Colleges and degrees will be useless. AI will do everything for everyone and the world will be a utopia!
So how do we prepare for such a future? Don’t bother educating our children for jobs that won’t exist? Just wait around for utopia to happen?
What if (likely) the benefits of AI never materialize?
What I see is that AI hype has tapped into social media, which at its core is a business based on monetizing attention through ad revenue. And if there is one thing AI has proven good at, it is quickly generating content. So in a sense it has become a giant hype machine fueled by the very thing it is hyping.
The difference I see between now and the 90s internet bubble is that at least in the 90s I, as a consumer, could see these new products and services being created. I could bank online. Buy and sell products online. Find a job online. Things I couldn’t do before. Plus I could learn the skills and technology to build that stuff myself.
In contrast, these days ChatGPT helps me revise and optimize my resume so I can submit it, along with 1,000 others, to be analyzed and screened by another AI for a job I don’t really understand because it’s all described in a bunch of meaningless buzzwords.
And are people so naive as to assume that this technology is being created for OUR benefit? Because politicians and Silicon Valley billionaires and venture capitalists have such a great track record of altruism? Oh, I have no doubt many of them actually believe they’re acting for our benefit. Which almost makes it worse: they will approach any conflicting view with all the fervor of a religious fanatic being prevented from creating their utopia.
I must admit that as a retired pup I’m evaluating GPT from the standpoint of being a useful information resource, not from the standpoint of how it will affect job markets which I think is largely an unknown.
In technology ventures, there are always two key questions:
The technical question – can it be done, and should we be throwing money at it? In the case of AI, if “it” means increasingly functional, self-learning, self-improving AI systems that will be extremely useful for specific functions, I think the answer is absolutely yes. That’s as opposed to AGI in particular, which I think is something of an abstraction on the distant horizon and of no obvious value.
The moral question – should it be done? That one is impossible to answer with confidence, but two things can be said. One is that the answer is usually “yes” because historically, advances in technology have almost always improved our lives, despite fear-mongering to the contrary by Luddites and their ilk. The rise of social media, and the abuses thereof, may be a rare exception. But the other thing that can be said is that the moral question is largely irrelevant because if something can be done, then sooner or later it will be done. This was the case with nuclear weapons, which, as much as we devoutly wish they didn’t exist, are a far greater threat to our future than anything behind all the pearl-clutching about the risks of AI.
Now look at them yo-yos, that’s the way you do it
You make bizarre animations on the GPT
That ain’t workin’, that’s the way you do it
Money for nothin’ and your chicks for free
Yes, it sifts through pre-existing knowledge to find helpful answers, which is what a lot of people with advanced educations do.
And who knows how long before this starts to happen. But whether it’s 5 years or 50 years, it will happen eventually.
Which is fine. But for me, I’ve had medical problems for years and years I couldn’t figure out. I used AI; it helped me develop potential hypotheses and find the right kind of medical professional to do testing to see if I had the problems I thought I had. It helps me understand radiology notes. It has helped me solve some IT issues. It helps me at work when I use NotebookLM to search through technical documents for that one piece of information buried in 3,000 pages. It helps me understand the mental health issues of people I know. GPT-4 was only released in 2023; we’re just beginning to see what we can do with LLMs.
Of course not. Musk and Thiel are evil. There is going to be a massive fight to redistribute the wealth and power created by AI, just like there was a massive fight to redistribute the wealth and power created by the industrial revolution. Sadly, the industrial revolution started in the late 18th century, and in the US at least we didn’t see meaningful redistribution of wealth and power until FDR in the 1930s and 1940s. I am hoping it doesn’t take 160 years before the benefits of AI are redistributed to the masses, but the sad reality is plutocrats and dictators are going to try to monopolize AI, which is going to make the 2030s and 2040s extremely exciting and terrifying at the same time.
I have no idea what effects it will have on the economy or political system. I have no idea what effect it’ll have on jobs. But humans are not getting better by and large. We are designed by biology and our brains and bodies are pretty much the same as they were 100 years ago. But AI and robotics are advancing rapidly, and we could be looking at a world in ~20 years where most tasks (both cognitive and physical) can be done better and cheaper by AI and robots. I do not know what effects that’ll have on the global economic system.
But maybe in 10-20 years things are only slightly more advanced than they are now. That’s entirely possible, but eventually machine cognition will vastly exceed our own, which will blow open a lot of bottlenecks that hold back progress.
Cal Newport, professor of computer science at Georgetown, calls this “vibes reporting.” AI is the second coming or the Apocalypse depending on what article you read, but the narratives exist fundamentally for the investors. This whole drama actually has very little to do with the rest of us. Whichever way it shakes out they’ll find a way to monetize us.
I’m really very angry about having shitty AI products pushed on me by every service and corporation under the sun. Not a single one has been useful, but enshittification happens so quickly now I don’t actually think they even sit down to consider customer experience anymore. They don’t really have to, do they?
I feel I should also point out that from the 1800s to the 1930s and 1940s, that massive fight to redistribute the wealth and power created by the industrial revolution nearly tore the world apart, as various nations harnessed their industrial might against each other in new and interesting ways.
All of these tech bubbles (internet, web 2.0, mobile, social media, and now AI) have created an investment industry with a psychotic case of FOMO. It permeates the products, because companies are afraid someone will choose a competitor because “this one has AI.”
I agree about the crappy customer experiences. The scary thing is we are not at the enshittification phase yet. Companies are still trying to give their stuff away at a loss. Wait until they want us to pay for the actual costs of the power and hardware. I get a lot of use from ChatGPT, but there is no way I’m only costing them $20/month.
This is true. We really have no idea what will happen when machine cognition vastly surpasses biological cognition. Civilization ending technologies will become much easier to create.
One would ‘hope’ that because Western nations have much stronger democratic traditions than they did in the 19th and early 20th centuries, we could redistribute the benefits of AI more evenly without as much of a fight. One would hope, at least. Who knows what can happen. The US is moving towards authoritarianism, and China (which is #2 in AI) is a totalitarian state.
It’s estimated that by the end of the year ChatGPT will have about a billion users. Even if only one-tenth of them are paying the monthly fee, that’s $2 billion a month. That pays for a lot of power and hardware. OpenAI is probably losing money right now because they’re still investing heavily in R&D and training, which is very compute-intensive, but if they switch to a mandatory subscription model I don’t think it will need to be very costly. They’ll probably also get substantial revenues from the sale of customized systems to large corporations. IBM is already selling Watson, based on the DeepQA engine, to companies for high-level business analysis, but I don’t know how successful it’s been.
Or not even that; it could cause a disaster just by following orders that the order-giver didn’t think out clearly enough. Especially since the desire to create a perfectly obedient AI makes that possibility larger, not smaller. That’s a good way to end up with a “literal genie” situation where the AI does exactly what you told it to do, in a way you really wouldn’t want it to.
It’s the Skynet brainbug; Terminator was culturally influential enough that people who worry about AI have focused mostly on the “genocidal AI rebellion” scenario, and not on all the other ways that things could go wrong without rebelling.
OpenAI is setting its sights on a future marked by unprecedented financial commitment towards artificial intelligence development. According to recent forecasts, the company is expecting AI development expenditures to soar to a staggering $115 billion by 2029, significantly surpassing its previous estimates of $35 billion. This surge in projected costs underscores the vast scale of ambition at OpenAI, driven by the need for increased compute power, expanded data centers, and the intricate training of their AI models.
This all seems quite manageable. According to the article, they’re projected to have revenues of $12.7 billion for 2025, which appears to be greater than their expenses this year. The rather staggering $115 billion in expenditures by 2029 would, by my reading, be the cumulative total of all their expenditures on development and infrastructure through that year. With an expected big increase in R&D and infrastructure costs, they may have to go to a broader-based mandatory subscription model. Even if they lose half their current users, $20 a month from 500 million subscribers can easily cover their costs. And that’s not counting other sources of revenue, like individual sales to big businesses.
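For what it’s worth, here’s a quick back-of-envelope check of that claim. The 500 million figure is the hypothetical from the post above and the $20/month is the current Plus price; this is just a sketch of the arithmetic, not a projection:

```python
# Back-of-envelope check of the claim above (hypothetical figures from the post).
subscribers = 500_000_000          # the post's hypothetical paid-subscriber count
monthly_fee = 20                   # US$/month, current ChatGPT Plus price

annual_revenue = subscribers * monthly_fee * 12
print(f"${annual_revenue / 1e9:.0f}B per year")   # -> $120B per year
# ...versus the ~$115B in cumulative development/infrastructure spend projected through 2029.
```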
Quite so - this is part of the core of the alignment problem: not that we cannot create an obedient, efficient machine, but rather that we can’t task it perfectly, because our own goals are not perfectly coherent or understood. In today’s world, the effects of such imperfect goals are attenuated by the limits of our competence and effort, but we’re trying to build something massively more competent and tireless than ourselves.
The linked Tech Central article doesn’t specify any revenues. Bloomberg reported a ‘revenue’ of US$12.7B (reiterated in this CNBC article, which was “confirmed to CNBC by a source familiar with the matter who asked not to be named because the number is private”); however, a similar ‘leak’ to The Information indicated that this was “annualized revenue”, also known as annualized recurring revenue, which is a way of estimating annual revenue by taking a single month (or other period short of an actual year) and extrapolating it out over 12 months. Even if the underlying revenues are real (rather than moving cash around on a balance sheet), it is still not an honest way of estimating annual revenue, and it is inconsistent with reported subscriptions of about 10M.
Even covering the $115B (capex for building out data centers plus the costs of running them for model training) would require about 200M paying subscribers, a 20-fold increase in paid subscriptions for a company that doesn’t actually produce a saleable product and whose service doesn’t have a clear use case to justify such a subscription beyond ‘your virtual pal who tells you sweet lies’. This doesn’t even account for operating costs, which would rise steeply in scaling up from 700 million mostly casual users to users engaged enough to justify a paid subscription. I don’t know where “500 million [paid] subscribers” would even come from; Netflix, the world’s largest streaming service, only has about 300M subscribers, and they shed users every time they crank up their subscription fee by a couple of bucks.
But even if we assume an effectively infinite pool of subscribers willing to pay out US$240/year for a chatbot service, OpenAI (and frankly the other chatbot providers who are depending upon indefinite scaling to keep up) have an even bigger problem: while the money may be speculatively created out of hot air and bombast, the ‘compute’ depends upon physical resources and electrical power that have very definite scaling limits. Between the ‘Stargate’ initiative and its own operational resource needs, OpenAI has committed to increasing capacity by something like 15 GW (it’s a little confusing because Altman keeps mixing and matching data centers and functions like a street peddler playing three-card monte, but the claims are all in that range), which, at the roughly 2.5 years and US$12.5B per GW it is taking Oracle and Crusoe.ai to build out data center capacity, doesn’t come anywhere close to the claimed schedule or investment costs. Building out ‘compute’ on this scale on demand is pure fantasyland, to say nothing of the impact it would have on electricity prices and scarce water resources (especially for computing facilities in Doña Ana County, NM).
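Just to put rough numbers on that build-out, using only the figures quoted above (the ~15 GW of committed capacity, ~US$12.5B per GW, and ~2.5 years per GW from the Oracle/Crusoe build-out); a back-of-envelope sketch, not a forecast:

```python
# Rough scaling of the committed data-center build-out, using only figures cited above.
added_capacity_gw = 15         # approximate capacity OpenAI has committed to adding
capex_per_gw = 12.5e9          # ~US$12.5B per GW at the current Oracle/Crusoe build-out
years_per_gw = 2.5             # ~2.5 years per GW at that same build-out pace

print(f"~${added_capacity_gw * capex_per_gw / 1e9:.0f}B of capex")   # -> ~$188B
# Even heavily parallelized, 15 GW at ~2.5 years per GW is a multi-year,
# multi-hundred-billion-dollar construction program, not something built on demand.
```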
But rather than go into this in further detail, I’ll just refer readers to Ed Zitron’s thorough and detailed assessment of OpenAI and Oracle’s claims:
They don’t expect to be cash flow positive until 2029.
OpenAI doesn’t share how much they are spending on GPUs or data centers, but with Meta and Microsoft claiming to spend $60B or $80B per year, it’s likely to be greater. The partnership announcement with NVIDIA has NVIDIA investing up to $100B as OpenAI deploys 10 GW of AI data centers. $20/month doesn’t cover it.
I don’t understand how we would control something that is smarter and faster and better connected than us any more than a dog “controls” its master. We do the human equivalent of barking and scratching at the door and hope the AI is benevolent and compassionate enough to feed us and take us out?
And who’s to say any of humanity’s problems are “solvable” simply by throwing more intellectual horsepower at them? AI won’t create more land or raw materials, or bend the laws of physics.
That’s sci-fi shit a long way off anyway.
I honestly find the near-term use cases for AI totally depressing.
Displacing lawyers, accountants, programmers, consultants, and other highly paid knowledge workers.
Generating low-quality social media posts, ads, images, and other “AI slop”
Deepfake videos and images
Replacing human interactions for activities like job interviews, customer service, dating, education, etc.
Cheating on schoolwork
To me it seems like AI is just creating a lot of disruption in society without really offering anything better in return, and ultimately only benefits the people in charge and their shareholders.
By making it compulsively obedient and/or loyal: a mind-controlled slave, in other words. However, besides the hopefully obvious ethical issues with that, such an AI is effectively insane, something that proponents of the idea like to ignore. You’re deliberately creating an AI with compulsive behavior and/or distorted judgement, and hoping you’ve guessed right that the results will favor you, instead of it deciding that the obvious way to “help” you is to, say, kill your spouse because they are bad for you, or stick wires in your pleasure center to make you permanently happy.
Or some other insane thing, because you’ve deliberately crippled the judgement that would allow it to decide “Hey, maybe that’s a bad idea.” There’s a reason why perfect obedience is a form of malicious compliance.
So while I understand the fear of AI rebellion, I don’t think the solution is to create obsessive-compulsive AI fanatics instead.
You left off generating ‘dank memes’ and spewing disinformation on an industrial scale.
The practical use case for LLMs is (and always has been) primarily being a natural language interface to computer systems, so that the user does not have to learn a specific interface. All of the ‘emergent’ capabilities that seem impressive at first glance turn out to be unreliable and prone to error and intentional misuse, and there is no real path to their ever being adequate for ‘production’ use without significant ‘augmentation’ by more reliable knowledge or functional systems (or, more properly, using an LLM to augment the interface of those systems). They also come with substantial societal and possibly personal harms to users who are not aware of their limitations.
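To illustrate what that “LLM as interface” pattern looks like in practice, here’s a minimal, hypothetical sketch: the model’s only job is to map free text onto a structured request, and a conventional, deterministic function does the reliable work. The parse_intent stub below is a trivial placeholder standing in for an actual model call, not any particular vendor’s API:

```python
# Minimal sketch of the pattern described above: the language model only turns a
# free-form request into a structured intent; a conventional, deterministic
# function then does the reliable part. parse_intent() is a keyword-matching
# placeholder standing in for a real model call (illustrative names throughout).

from dataclasses import dataclass, field

@dataclass
class Intent:
    action: str
    params: dict = field(default_factory=dict)

def parse_intent(user_text: str) -> Intent:
    """Placeholder for the LLM step: map free text to a structured request."""
    text = user_text.lower()
    if "balance" in text:
        return Intent("get_balance", {"account": "checking"})
    return Intent("unknown")

def execute(intent: Intent, accounts: dict[str, float]) -> str:
    """The conventional, reliable back end -- no model involved."""
    if intent.action == "get_balance":
        acct = intent.params["account"]
        return f"{acct} balance: ${accounts[acct]:.2f}"
    return "Sorry, I didn't understand that request."

accounts = {"checking": 1234.56}
print(execute(parse_intent("What's my checking balance?"), accounts))
```

The point of the split is that the part users actually rely on never depends on the model getting facts right; the model only routes the request.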
And to be frank, I don’t think it really ultimately benefits “the people in charge and their shareholders”; there is the fleeting advantage to speculators of driving market valuations for these companies to the stratosphere based upon a bunch of hype and an inability to critically read a balance sheet or prospectus, but when the bubble deflates, the people who thought they were on top are going to be just as fucked as everyone else. Sam Altman is clearly in a mode of just hoping that this all works out somehow, even as he pimps ChatGPT for all he can to keep that capex funding rolling in.