Because that’s about what my company is currently paying per developer per month for what I wouldn’t consider heavy usage. And that’s still giving the product away at a big loss for these companies. Heavy users are currently spending a lot more on tokens.
900 million users when it’s free? What happens when the “free” tier moves to $100 a month? Why would these massive companies not milk as much money out of people dependent on their product as they can? What’s the motivation for keeping prices low? We’ve watched literally every product we use get enshittified by private equity. Why is AI going to be different? What are you going to do when it gets enshittified and you’ve forgotten how to do things?
eta: To be clear, my devs are getting dumber. They know they’re getting dumber; they will admit it openly. I’ve watched people fail at simple tasks that they wouldn’t have had a problem with a year ago. I’m not familiar with the essay-writing study, but other studies have shown cognitive decline with increased AI usage. Studies have failed to show any productivity gains in software development that justify the current spend, and the spend would certainly not be justifiable if we had to pay the real cost of the product. But once my devs are sufficiently dependent, what are we going to do?
It’s not clear here just what they’re paying for, or that the AI provider is necessarily losing money. I do believe that a major market for LLMs in the future is customized and specially trained task-specific versions for major corporations, and those definitely won’t be cheap.
Not all those users are on the free tier. Many (I don’t know what percentage) are paying $20/month.
Even GPT-5 is still an experimental product. If OpenAI turns into a profit-making enterprise some years from now and charges $100/mo for access, frankly I’d pay it and consider it well worth it for the kinds of capabilities that GPT will likely have by then. I’m paying almost as much for a goddam telephone landline that I hardly ever use.
If they’re losing $14b a year and charged their users $20 a month (assuming all 900m are freeloaders), they could lose 841,000,000 of their current user base and still break even.
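A quick sanity check on that arithmetic, using only the figures from the comment (a $14B annual loss, $20/month, 900M current users):

```python
# Back-of-the-envelope check of the break-even claim above.
# All inputs are the figures quoted in the comment, not audited numbers.
annual_loss = 14_000_000_000   # $14B/year
monthly_fee = 20               # $20/month per paying user
users = 900_000_000            # current user base

paying_needed = annual_loss / (monthly_fee * 12)  # users needed to cover the loss
expendable = users - paying_needed                # users they could shed

print(round(paying_needed))  # ~58,333,333 paying users
print(round(expendable))     # ~841,666,667 -- matching the 841M figure above
```

This treats costs as fixed, of course; in reality shedding users would also shrink compute costs, so the real break-even point is murkier.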
You do not understand the scale of operating costs that these companies are targeting. Their annual losses are going to balloon if they don’t start charging more soon. And that deficit exists even with lots of power users already paying thousands of dollars a month to get themselves addicted.
OpenAI’s predicted compute costs of $1.4 trillion in seven-to-eight years’ time are 70 times its current revenues. And that does not include any of its other costs, such as staffing, R&D, energy, water, and property. A money pit by any measure, therefore. And bear in mind, those compute costs may be an underestimate if user numbers explode; I have seen figures as high as $3 trillion suggested by some analysts.
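To make the multiple concrete, here is the implied current revenue under the comment’s own figures (a sanity check, not independent data):

```python
# The $1.4T compute commitment divided by the stated 70x multiple
# implies roughly $20B in current annual revenue.
committed_compute = 1_400_000_000_000  # $1.4T, per the comment
multiple = 70
implied_current_revenue = committed_compute / multiple
print(f"${implied_current_revenue / 1e9:.0f}B")  # ~$20B current annual revenue
```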
There is a finite amount of compute that we can produce as a species at the moment. There are powerful monopolies involved. This is not a case of “You’d better learn how to use a home PC because that’s the future,” where you could buy a PC and any software with a one-time cost and use it indefinitely. The rental model is baked in from the start.
eta: Think of it this way. What if every prompt you sent it cost you somewhere between $0.50 and $10? And you were never sure where it would land in that range.
That’s what software developers deal with. The actual cost depends on how many tokens get spent, which in turn depends on how much compute the LLM decides to use.
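A minimal sketch of why the bill is so unpredictable. The per-token rates and token counts below are hypothetical, chosen only to land in the $0.50–$10 range mentioned above; they are not any provider’s real pricing:

```python
# Hypothetical metered pricing: you pay per input token and (more) per
# output token, and the model decides how many of each get spent.
PRICE_PER_1K_INPUT = 0.01   # assumed $/1K input tokens (illustrative)
PRICE_PER_1K_OUTPUT = 0.03  # assumed $/1K output tokens (illustrative)

def prompt_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost of one request under the assumed rates."""
    return (input_tokens / 1000) * PRICE_PER_1K_INPUT \
         + (output_tokens / 1000) * PRICE_PER_1K_OUTPUT

# The same task can land anywhere in a wide range, because the model
# (not the user) decides how much context and output to burn:
cheap = prompt_cost(20_000, 10_000)      # short, focused answer
expensive = prompt_cost(400_000, 200_000)  # long agentic session
print(f"${cheap:.2f} to ${expensive:.2f} for one task")
```

The point isn’t the specific rates; it’s that cost scales with token counts the user doesn’t control and can’t predict up front.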
This is Facebook circa 2009. You’re discovering new things, reconnecting with old friends, the feed is just your friends’ posts, nobody’s getting targeted engagement bait, they’re using your data to give you ads you actually want instead of bombarding you with scams.
How hard is it for people to quit Facebook now? I quit in 2020, and it was hard; I’d outsourced socialization to a company that was operating at a loss and was now intent on enshittifying the product as quickly as possible to fix that. Quitting felt like a divorce. I grieved.
You say $100 a month isn’t bad, but again, what about $2000? That’s not an exaggeration.
I would be very surprised if that becomes the costing model for the general population. I remember the days when accounts on timesharing computers were charged based on compute cycles, but I think it’s far more likely and far more practical that general user accounts would be charged a fixed monthly fee, and that when distributed over a very large number of users that fee wouldn’t need to be very high.
Enterprise users would be a whole different story, especially those with task-specific customizations. But I imagine many of those would host the system in their own data centers.
I agree that civilian users will get fixed, monthly pricing tiers. I disagree that it’s going to be a low price once distributed. The numbers just don’t work.
eta: And enterprise users can’t host their own data centers, nobody’s going to sell them chips. Frankly, enterprise users have already fucked themselves by moving everything off-prem, AWS and GCP (and Azure, to a dumber extent) are already realizing they can jack up prices now that everyone’s locked in. “Multi-cloud” doesn’t seem to be a reality for anyone.
They already have 50m paying users. It’s not clear if the $14b loss projected for 2026 keeps that flat, or (more likely) includes some assumptions about subscriber growth.
Of course they can! You can’t buy a mainframe computer any more? I think you’re mostly describing medium-size businesses. Back when I did IT consulting, one of my major customers ran a massive data center, and a second one for redundancy. Both had military-grade security, with defenses against things like army tanks. They are not going to outsource their data to AWS! Their AI is already – and always will be – in-house.
It’s already happened. Foundry output for at least the next 5 years has already been purchased by a handful of companies. Now we just get to watch it play out.
I think that AI could be a fairly good management tool if used to monitor employees’ work: making sure tasks are completed and any systemic errors are noted and fixed. Producing summaries of their reports would be part of that.
To do that, I think the manager’s AI would need to work directly with the employee’s AI. And there would need to be good security to make sure the activities of these AIs could not escape beyond the company.
I was told about one facility about a hundred miles away from me that had a grass fire a few years ago. Apparently, the guards could not allow the fire department through the gates to fight the fire. They could shoot water through the surrounding fence but could not enter the premises under any circumstances.
Not disagreeing, but your assumption makes a much rosier picture.
OpenAI says they have 50M consumer accounts and 9M enterprise accounts, which account for about 75% of their revenue. They project $25B revenue and $14B in losses this year.
Although losses might be higher (I’ve seen external estimates of $20B), and user numbers should be higher than today, too.
That means they would need ~100–125M paying users to break even. This is my own calculation, based on $317/yr/user against roughly $39B in total costs. They are targeting 220M users by 2030, so they have a ways to go before breaking even.
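The break-even arithmetic above can be checked in a few lines (all figures are the ones quoted in the comment, not audited numbers):

```python
# Break-even user count implied by the comment's figures.
revenue = 25_000_000_000     # projected revenue this year
loss = 14_000_000_000        # projected loss (could be $20B per outside estimates)
rev_per_user_yr = 317        # stated average revenue per user per year

costs = revenue + loss                     # implied total costs: $39B
breakeven_users = costs / rev_per_user_yr  # users needed at that average revenue
print(f"{breakeven_users / 1e6:.0f}M users")  # ~123M, inside the 100-125M range
```

If the loss is really $20B, the same calculation pushes break-even to roughly 142M users, which is why the range in the comment is stated so loosely.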