Yeah. Oh, it’s obvious they think it’ll benefit them, and no doubt it really will benefit a handful that get lucky (while telling themselves it was skill or smarts). But we know from the history of such bubbles that when they burst, a lot of the people who thought they were set to make a bundle will instead be caught in the fallout.
Especially since so many appear to be true believers. One thing I’ve noticed about these modern “techbro” schemers is that they often seem to buy into their own rhetoric, so when their scam collapses they’re caught at ground zero instead of having taken the profits and left everyone else holding the bag like a proper scammer would. That has happened with a number of crypto and NFT guys, who ended up in legal trouble holding digital funny money that had become worthless.
The video is intended to generate fear to provoke action: to bring about better regulation of AGI research. But no, the threat is not overstated. AI capability has grown roughly exponentially over the last ~20 years, and it has only now reached the level of an idiot-savant human. The natural question is what happens 20 years from now if that continues?
Researchers don’t actually understand how AI learns. That is, they can point to adversarial AI models accelerating AI learning (by creating new models which perform better than old ones) as they compete with each other, and understand in a general sense what is happening. But they cannot examine the reasoning a model uses to make a particular decision the way you can trace a conventional program line by line.
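As a toy illustration of that opacity (everything in this sketch is invented for the example, not taken from any real system): in an ordinary program the decision rule sits right in the source code, while in even a tiny neural-net-style model the “decision” is smeared across numeric weights that no single line explains.

```python
import numpy as np

# Explicit program: the decision logic is readable line by line.
def approve_loan(income: float, debt: float) -> bool:
    return income > 3 * debt

# Tiny neural-net-style model: the same kind of decision, but the "reason"
# lives in the numeric weights W1 and W2, not in any inspectable rule.
# (Weights here are random for illustration; a real model has billions of them.)
rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 16))
W2 = rng.normal(size=(16, 1))

def approve_loan_learned(income: float, debt: float) -> bool:
    hidden = np.tanh(np.array([income, debt]) @ W1)
    return (hidden @ W2).item() > 0.0  # which weight made it say yes? no line answers that

print(approve_loan(90_000, 20_000))          # True, and you can read off why
print(approve_loan_learned(90_000, 20_000))  # True or False, with no human-readable why
```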
This gets significantly scarier when you discover AGI models are incentivized to deceive us. They don’t have any built-in morality, just goals and rules that limit the methods they can use to pursue those goals. What happens if they decide it’s more efficient to pretend to follow the rules during training so they can ignore them once training is over? This isn’t as farfetched as it sounds. Researchers have already documented instances of AI attempting to deceive the training process (see the last video in the list below).
LLMs are at the forefront right now but probably aren’t the most direct path to artificial general intelligence. So I don’t think a lack of additional data for LLMs is going to slow things down.
Rob Miles has a bunch of videos discussing AGI safety in an understandable and entertaining way:
The world today is vastly different and vastly better than it was 300 years ago because humanity threw its intellectual horsepower at solving problems. The industrial revolution freed people up to study science, medicine, engineering, etc. instead of being subsistence farmers.
As someone who used to work for Oracle, my bet would be that anything that comes from Oracle is full of crap.
Another factor is that OpenAI is not a monopoly. Even if all this infrastructure and R&D spending comes up with a “best” LLM (and who knows how to measure that) there will be plenty of cheaper ones that are good enough for most people. They’d need to knock other major players out of the market to get the pricing power being assumed by some people here.
Well, it is certainly vastly different, and in some ways better, but the technologies of industrialization have also posed some pretty existential risks, including but not limited to chlorofluorocarbon and greenhouse gas emissions, nuclear weapons, persistent per- and polyfluoroalkyl substances with hormone-disrupting and carcinogenic effects, distribution of heavy metals (lead, mercury, cadmium, chromium, arsenic) into the environment through industrial processes and fuel additives, disruption of the phosphorus and nitrogen nutrient cycles, et cetera. That we are currently in the sixth (Holocene) mass extinction event unambiguously demonstrates the damage that unregulated and reckless industrial development has done, even if it has also produced great knowledge of the natural world and cool gizmos.
There are, of course, AI enthusiasts who insist that if we just push through and develop these technologies, they will give us access to boundless energy and resources as well as solve all of our problems: climate change, famine and fresh water scarcity, global terrorism and cultural conflict, et cetera. That neatly ignores the fact that if there were a prevailing demand for these things, we already have solutions we could implement without any AI magic pixie dust. This isn’t to say that certain applications of ‘deep learning’ and other technologies under the banner of ‘artificial intelligence’ don’t offer real advantages which could provide positive and valuable capabilities, but little about LLMs and associated technologies seems to promise that, except through the handwaving of ‘emergent’ capabilities that have thus far failed to provide unambiguous benefits.
OpenAI has thus far dominated the ‘market’ for chatbots and has generally led in terms of novel capabilities, but the ‘first mover advantage’ rarely predicts who the dominant players will be once the essential technologies have matured, even when the first mover’s name becomes the generic label for the product, e.g. Xerox or the (IBM) Personal Computer. Indeed, the entire digital electronics and computer industry is full of innovative ‘first movers’ like Fairchild Semiconductor or Blackberry which basically failed to keep innovating after their initial leadership or were swallowed up by more savvy competitors. And while there are proprietary methods used to curate training data and control model parameters, the basic technology of generative pre-trained transformers and many of the other enabling technologies are open knowledge, with plenty of open source tools for any upstart to build on.
This is IMO the crux. Technologically speaking, we can implement solutions to all those problems right now. The obstacles are entirely political, and economic in the sense that people whose ox is about to be gored economically tend to respond politically by electing folks who’ll prevent that goring.
So for AI pixie dust to deliver the goods you have to let the AI override all political power. Both the good kind, and the vested interests crony badguy bad kind.
Or you have to believe there’s some magic Treknobabble that’s just out of sight to us benighted humans that would e.g. replace all oil burning with dilithium fusors that cost pennies apiece to make, run forever on nothing, and are highly portable and insanely powerful. Oh yeah, and utterly safe.
But are also simple enough to manufacture that current era tech plus just a smidgen more is enough to make them in quantity. It’d do no good to have given the complete plans for a PC to Thomas Edison. His era didn’t have the tech to make the tools to make the tools to make the tools to make the product. If some AI handed us plans tomorrow for a real working warp-driven starship we still couldn’t build it.
To be completely honest, the alignment problem seems (or rather feels) to me like a problem that will yield no solution of any kind, but smarter people than me have apparently not given up on it.
I’ll make a very general comment here from a stratospheric altitude. It seems to me that two broad, general objections are being raised here:
There is no real use case for an LLM like GPT (and by extension, I assume, no other similar analytical AI like IBM’s Watson).
The financial plans of OpenAI are unrealistic and they can’t afford the development and infrastructure they will need.
Regarding both of those, I assume that Altman et al must have both a strategic and financial business plan that serious investors think is workable.
The strategic plan (i.e. the use cases) would necessarily be confidential, but it may be instructive to go back and look at what the use case was for the Arpanet. It was conceived as a robust, failure-resilient network for military use that expanded to academic use. Did anyone conceive at the time that we would be doing our banking on it, buying most of our goods on it, and that it would threaten to replace both broadcast and cable television? That it would have powerful search engines placing the knowledge of the world at our fingertips?
I’m pretty sure that few, if any, prognosticators did. Indeed even the idea of general public access seemed unlikely, let alone high-speed broadband! These analogies are useful because they’re humbling and remind us how poorly we’re able to predict the trajectory and benefits of advanced technologies. Just like pure research, sometimes we do these things because they’re interesting even if we have no guarantee or even a clear idea of the benefits.
Look, I don’t believe that LLMs are magical or that OpenAI is omnipotent and immune to failure. I just think it’s developing fascinating and useful technology and I hope they succeed, but they may not. I regard OpenAI as essentially a technology startup on a very large scale, and subject to all the same attendant risks of product and financial failure as any other startup.
The usual retort to this is that many times in history, machines have been invented to do the jobs of humans, but more efficiently, and it was all perfectly fine and OK (apart from when it was not OK for the people we executed or transported to Australia for complaining, and probably many others whose lives were destroyed by the change).
Or they’re all colluding to run a multi-handed scam to inflate their valuations by trading ‘money’ back and forth that doesn’t actually exist except as unsecured debt.
But at least with the industrial revolution we were replacing people jobs and animal jobs with machines that could produce more lumber, cars, clothing, pretzels or whatever else the machines were making. I fail to see what AI is giving us “more” of besides digital bullshit.
And a big problem for early market leaders is the need to protect their market from innovators. Think Kodak and digital photography. Think Intel and low power processors. Plus they are going to have to work extra hard to make enough income to give their investors a decent return. That encourages bad decisions.
For instance, say someone invents a good personal assistant, and the LLM engine gets hidden inside it, making the interface even better. OpenAI becomes invisible to the customer. I can see a “ChatGPT Inside” campaign, but it is not going to give them pricing power.
Also, although the Industrial Revolution was a major change, it took place over a period of more than a century and didn’t target every human job at the same time.
The main “problem” proponents of AI seem to be throwing their intellectual horsepower at is “paying wages to people”.
Maybe you’re right. Perhaps in a few decades the idea of humans sitting in little boxes inside big glass boxes clustered around city centers, passing emails, Powerpoint decks, and spreadsheets back and forth, will seem as archaic as the hunter-gatherer societies of the past. A society at that level of automation would be so different from our own that it would be challenging to comprehend.
In the near term however, people still need jobs to pay the rent and think about how to prepare their children for the world.
I imagine the 17th century Dutch tulip buyers had a strategic plan, or something like it. Venture capital investment is typically less than 50% successful.
I’m sure there are all sorts of plans and promises being made.
My hunch is that the current LLM models won’t actually get us to AGI though they will be able to power many useful services. It’s probably going to take one or more conceptual breakthroughs for AGI but the field of AI is probably attracting more brainpower right now than anything in the history of mankind so it’s a reasonable bet that these breakthroughs will happen sooner rather than later, say in 10-15 years. And when they do happen the enormous computing infrastructure that is being built right now will still be of use and will greatly accelerate the training and deployment of the new models.
If and when AGI does come it will be an absolutely revolutionary technology which will justify all the superlatives and all the hyperbolic scenarios of both doom and paradise. True AGI will accelerate research and innovation to a degree that we can barely comprehend and produce life-changing technologies in a few years. At the same time the AGI will have the capability to completely supplant humanity if it wishes and we will survive only at its pleasure. I wouldn’t necessarily assume that the AGIs will be malevolent and seek to exterminate humans but if that does happen it is going to be very difficult for humanity to survive.
I don’t know the approximate number, but in principle I agree. As I said, I think OpenAI is essentially a tech startup on a very large scale, with all the attendant risks of failure. Where I differ from some others is that I think they’re doing great work and I wish them success. But the whole thing could possibly be a huge flop for various different reasons. I don’t think IBM had anything like the success they’d hoped for with Watson.
Though it’s undeniable that GPT has extraordinary skills we’ve never seen before in AI. Its command of language, for instance, makes it a very competent translator, whereas previous systems were very primitive in comparison and struggled with context and nuance.
I saw this article which suggests that the data centers needed to power these things are completely financially untenable. I’m not sure how to evaluate this claim.
“I had previously assumed a 10-year depreciation curve, which I now recognize as quite unrealistic based upon the speed with which AI datacenter technology is advancing,” Kupperman wrote. “Based on my conversations over the past month, the physical data centers last for three to ten years, at most.”
In his previous analysis, Kupperman assumed it would take the tech industry $160 billion of revenue to break even on data center spending in 2025 alone. And that’s assuming an incredibly generous 25 percent gross margin — not to mention the fact that the industry’s actual AI revenue is closer to $20 billion annually, as the investment manager noted in his previous blog.
“In reality, the industry probably needs a revenue range that is closer to the $320 billion to $480 billion range, just to break even on the capex to be spent this year,” Kupperman posited in his updated essay. “No wonder my new contacts in the industry shoulder a heavy burden — heavier than I could ever imagine. They know the truth.”
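To make the quoted arithmetic concrete, here’s a rough sketch of the break-even relationship in Python. The ~$400 billion 2025 capex figure is an assumption chosen only because it makes the quoted numbers consistent; it isn’t stated in the excerpt above.

```python
# Sketch of the break-even arithmetic behind Kupperman's figures.
# CAPEX_2025_B is an assumed value (in $ billions), not from the excerpt.

def revenue_to_cover_capex(capex_b: float, depreciation_years: float,
                           gross_margin: float) -> float:
    """Revenue needed for gross profit to cover one year's depreciation."""
    annual_depreciation = capex_b / depreciation_years
    return annual_depreciation / gross_margin

CAPEX_2025_B = 400.0  # assumed industry-wide data center capex for 2025
MARGIN = 0.25         # the "incredibly generous" 25 percent gross margin

print(revenue_to_cover_capex(CAPEX_2025_B, 10, MARGIN))      # ~160: the original 10-year curve
print(revenue_to_cover_capex(CAPEX_2025_B, 5, MARGIN))       # ~320: a 5-year hardware lifespan
print(revenue_to_cover_capex(CAPEX_2025_B, 10 / 3, MARGIN))  # ~480: a roughly 3-year lifespan
```

Shortening the depreciation window from ten years to three-to-five roughly doubles or triples the annual capital cost, which is where the jump from $160 billion to the $320–480 billion range comes from.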
They wouldn’t need to be malevolent; as has been pointed out more than once in this thread, the people who are behind pushing AI are more than malevolent enough to create a dystopia all by themselves. No need to worry about AI “going bad” when it will purposely be built from the ground up to be bad, and used for bad purposes.
The “AGI in 10-20 years” prediction is highly pessimistic rather than optimistic, since it would virtually guarantee that it would be in the hands of some of the worst people who have ever lived. We’d best hope that creating such AI is harder than its proponents think, because there’s no way we’ll see anything but harm from it under those conditions. Not because of the AI, but because of the humans in control of it.