Is the AGI risk getting a bit overstated?

I recently watched this video: https://youtu.be/5KVDDfAkRgc?si=QtWhoQS1mZXTXw8o

A common plot point in these sorts of discussions is exponential increases in capability. In this video, exponential improvement is contrasted with linear improvement. But an important alternative is not mentioned: sigmoid improvement. Sigmoid improvement looks exponential at first but eventually flattens out.
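As a rough illustration (a toy sketch with made-up numbers, not a model of any real capability metric), here is how a logistic (sigmoid) curve tracks an exponential early on and then saturates:

```python
# Toy comparison: exponential vs. logistic growth (hypothetical rate and ceiling).
import math

def exponential(t, rate=0.5):
    return math.exp(rate * t)

def logistic(t, rate=0.5, ceiling=100.0):
    # Logistic growth starting at 1.0 with the same initial rate but a hard ceiling.
    return ceiling / (1.0 + (ceiling - 1.0) * math.exp(-rate * t))

for t in range(0, 21, 4):
    print(f"t={t:2d}  exponential={exponential(t):10.1f}  logistic={logistic(t):6.1f}")
```

Early on the two columns are nearly identical; by the end the exponential has blown up while the logistic is pinned near its ceiling.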

The current approach to AI is to train a model to predict the next token in a sequence of text. Most of this text is human generated and so contains patterns that represent human knowledge and intelligence. The model cannot learn something that does not already exist in the training data.
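For concreteness, this is roughly the training objective (a minimal sketch in PyTorch with hypothetical vocabulary and model sizes; the deep transformer stack of a real LLM is replaced here by a single linear layer):

```python
# Minimal next-token-prediction sketch (toy sizes, random stand-in data).
import torch
import torch.nn as nn
import torch.nn.functional as F

vocab_size, embed_dim = 1000, 64                  # hypothetical sizes
embed = nn.Embedding(vocab_size, embed_dim)
head = nn.Linear(embed_dim, vocab_size)           # a real LLM puts a transformer here

tokens = torch.randint(0, vocab_size, (1, 128))   # stand-in for tokenized text
inputs, targets = tokens[:, :-1], tokens[:, 1:]   # shift by one: each position predicts the next token

logits = head(embed(inputs))                      # a predicted distribution per position
loss = F.cross_entropy(logits.reshape(-1, vocab_size), targets.reshape(-1))
loss.backward()                                   # gradients pull the model toward patterns in the text
```

The only signal the model ever gets is "what token came next in the text it was shown", which is why the patterns it learns are bounded by what is in that text.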

Going forward, more and more of the available data will be generated by LLMs, so training new LLMs on that data cannot teach them anything new.

For AGI to come about, some completely new way of making an AI will likely be required. No one currently has any idea what that would look like.

Anyway, this is how I see it. Do others agree? Is the AGI risk overstated?

There are several different aspects to the term “risk”: how big the harm is, how soon it comes, how likely it is. Plus some other more minor ones, like how avoidable the harm is and what the offsetting benefits and costs are.

Your argument is that the “soon” part is overblown. I largely agree with you.

But I’ll suggest that, especially during the current era of capitalism and politics, the likelihood of harm, assuming it gets developed, is huge; certain, in fact. And the scale of harm will be similarly ginormous. The best time to prevent something really harmful from happening is before it gets out of the starting gate and develops a constituency of its own.

Which suggests to me that right now, before anyone is close to succeeding at making a real AGI, is the time to have the sober conversation about how to control its dangers or, if necessary, to prevent the development of AGI altogether as simply too dangerous and not in the public interest anywhere.

I agree generally with the above — to wit, the capability of the current tech is overstated and likely to be a dead end, so we’re not at any risk in the short term (in the sense that AGI itself will go rogue and kill us all, per the usual scenarios). Nevertheless we should remain cognizant of the state of the field to guard against any hint of such developments. We need thoughtful gatekeepers to monitor the tech to ensure it doesn’t develop the capacity to threaten the species.

I will go further on one specific point, though: If you look at the state of the modern industry, and the very real individual damage that is being done to people (suicides, other self-harm, encouragement of delusional behavior, etc) without any real sense of moderation or reflection on the part of the “AI” companies, one thing becomes incredibly clear—

The current crop of hyper-libertarian techbros are absolutely unqualified to serve in that gatekeeper role. They are utterly unfazed by the danger signs as they gallop toward a fully monetized future, and they cannot be trusted to protect the rest of us from a potentially dangerous AGI system, however far off it probably is.

I think (and hope) that, to a significant extent, the major players talk about how much closer they are getting to AGI simply because that’s what is necessary to keep the investors from jumping ship.

AGI is a significant risk to humans just by the very nature of what it will need to be in order to be AGI. The risks are currently unsolved problems, and may never have solutions, but if the hype is to be believed, we’re going to solve them just before we get there.

The idea that an LLM will achieve AGI is pretty much dead. The current LLMs are not scaling as much as hoped. Exponential growth is, as always, totally misunderstood by those claiming it. So for these and a host of other reasons, including those given by the OP, even those acolytes of the AI boom that are still peddling AGI are looking beyond LLMs for the technology to bring it about.

Will we see AGI in any useful form anytime soon? IMHO, no. The fanciful idea of an AI smarter than any human being just around the corner is going to require something much more fundamental than mere scale.

The current insanity of AI investment is pretty much already showing us that exponential growth has stopped. There is a slow realisation dawning on the participants that at least some of them are never going to see their money back. Just which ones survive and which ones don’t is yet to play out.

Moore’s Law died years ago. Slow grinding improvements at ever more insane costs and technical effort keep things improving at a slower pace, but absent some totally magical breakthrough we are well into the age of linear rather than exponential improvements. Eventually there just isn’t the money anywhere to fund the next node.

Will AI go away? No, it is far too useful. But the current big-ticket ideas are more than likely never to pan out. AI slop is already a significant problem. Companies using AI code assistants are reporting very mediocre gains in performance. Using AI to write reports that nobody, except maybe another AI, will ever read will get old fast. But the underpinning technologies, better termed machine learning, are what matter: there are enabling capabilities there that will without doubt make a huge difference.

Factual Questions is a strange place to put this discussion.

Yeah I did wonder about that. I was looking for an informed discussion. Projections based on facts. FQ seemed like the best fit. Happy to have it moved.

I think we’ll see AGI within the next 5 years or so. :astonished_face:

But that will be because someone has redefined AGI to fit wherever we happen to be by then. There are financial incentives to being able to claim it, so it’s almost inevitable. I think it’ll be a repeat of quantum computing: companies will find it way more challenging than originally thought, so they’ll just move the goalposts, much like D-Wave has done in the quantum arena. If you want money, the method appears to work.

Actual AGI, I don’t expect to see in my lifetime (let’s cross our fingers and call it 10-20 years). I don’t see a mechanism that we’re currently researching that gets us there, but it’s certainly possible we’ll find something in the future that does.

I’m in pretty solid agreement with @Francis_Vaughan about the current state and with @LSLGuy and @Cervaise about what we should be doing about it right now.

Moderator Action

Since this is asking for opinions, it is better suited to IMHO.

That said, let’s try to keep in mind the OP’s desire for an informed discussion based on projections that are grounded in current facts. While opinions are certainly necessary for the discussion, let’s not engage in wild speculation.

Moving from FQ to IMHO.

I completely agree with respect to the current approach of ‘deep learning’ transformer models. This architecture is really good at integrating large amounts of data and teasing out complex patterns and tenuous associations, but there is no reason to believe that it is any path to the sapience and autonomous cognition that are a necessary precondition for artificial general intelligence. Which is not to say that these systems cannot be misused by oblivious users or intentional bad actors in ways that do severe harm at both the individual and societal level, but those harms are not of the same kind as the threats that artificial general intelligence (AGI) or artificial superintelligence (ASI) would pose.

I’ll go further and say that if these “hyper-libertarian techbros” had access to emergent AGI/ASI, not only could we not trust them in a “gatekeeper role”, but they would almost certainly use it to gain more power and control even knowing the risks and harms it would pose. You only have to look at Sam Altman running between appearances, talking out of both sides of his mouth about how this is an amazing technology that will revolutionize society but also has a high potential for catastrophic consequences, to see just how little responsibility these people show even when they acknowledge the dangers, never mind a sociopathic narcissist like Elon Musk or Larry Ellison, who would definitely try to use such a technology to their own nefarious ends.

I strongly suspect that a system approaching what we would consider AGI is going to structurally look and work far more like a brain than even the most complex computer algorithm or artificial neural network, and specifically be capable of adaptively altering its physical structure. Many people assume that the brain is ‘just a computer’, but in fact in computational terms it is a highly nonlinear and adaptive system that is really nothing like a lambda calculus system running on a digital substrate, and individual neurons are not just nodes or transistors but are extremely complex signal processing systems that we only roughly understand and can model. I suspect AGI, when it is developed, will be some syncretism of the semantics of logic, adaptive hierarchical attention and perceptual systems (perhaps using advancements in transformer architecture) that operate on a system of processes occurring at various scales, and synthetic biology to provide a cognitive substrate of sufficient complexity and adaptivity. We are a long way from that.

The current deep learning approach to ‘AI’ is certainly useful in specific contexts but it clearly isn’t a path to ‘general intelligence’, as it is fundamentally limited by an inherent tendency to confabulation, essentially creating patterns or information out of nothing and, instead of self-correcting, continuing to produce progressively more specious outputs. Training an LLM or multi-modal model on ‘synthetic data’ produced by AI is virtually guaranteed to exacerbate this problem, and the inability to generalize from limited examples the way that humans (or other complex animals) can means that trying to limit this adverse behavior through scaling of either training data or model parameters will reach terminal limitations, if it hasn’t already.

Stranger

@Stranger_On_A_Train’s comments to @Cervaise are almost exactly what I had been mentally writing as I was working my way down the thread. +1 on that.

I don’t think a dangerous AGI system is going to be a problem. What will be a problem is the disruption when tons of people are thrown out of work by AI. The singularity lovers think that all the money made by the AI companies and others benefiting from it is going to go to support the jobless, so they can do rewarding work like writing books and flower arranging, paid for by the government.

Yeah, right.

My prediction is that the disruption will come not from AGI taking over, but by angry mobs blowing up data centers, some of which support AI but some of which support vital tasks like banking and infrastructure.

As for emergence creating AGI - that’s been a science fiction theme for about 60 years, at least (Mike in “The Moon Is a Harsh Mistress”.) We have computer power far in excess of what was dreamed of back then, and no AGI in sight. Maybe we’ll understand this stuff when we build a working simulation of a brain.

Thinking about AGIs and ASIs and the risk they pose leaves me confused. I can understand that people using the results from AGI and/or ASI can do nefarious things and do them more effectively than people without those resources. But what can the AGI/ASI itself do? First: where would the motivation to act come from? I think it would need to be sentient to have a motivation, and I don’t see that happening. Second: even if it had a will, how would it act? I see AGI/ASI as a very complex program on a computer, but what can it do? I picture it as some silicon Stephen Hawking. A brilliant mind, but completely helpless. Could it turn off a machine? Or launch a missile? Poison the drinking water?
The people, yes, they can do that and more. But the machine itself? Why would it, and how?
I am not sure I want my blissful ignorance fought on this one.

When someone wants to launch a missile, it’s not like they send a guy with a box of matches to light the fuse.

Instead there is a process that is controlled by software. That software can be compromised.

But an AGI (should one ever be created, and should it be misaligned) could be more imaginative than that.

However you can imagine a human fucking people over, a computer can do it/help them do it better and faster. Especially if it involves manipulating people or more generally banjaxing complex systems. Also, if a computer cannot flip a switch, it can tell a human to flip the switch/hook up the switchboard to the network. P.S. roughly speaking, machines, missiles, water, factories, nuclear power plants, the stock market, etc., are computer-controlled already.

I have no basic disagreement with anything that’s been said here, but just a couple of nitpick type comments.

That’s not “the current approach to AI”. More accurately, that’s how LLMs work, which happens to be the much-publicized current implementation of one approach to AI. But not by any stretch the only one. For example, IBM’s DeepQA is not an LLM, but a combination of many different AI technologies. And it’s not necessarily text that is tokenized in LLMs; it can be samples of multi-modal elements like images, video, and audio.

That’s not necessarily an intrinsic limitation. Think about how little humans go to school to learn things that “already exist in the training data”. Then they go to college and, ditto. Then maybe graduate school, and, ditto. And then, through processes of reasoning and inference, they create completely new knowledge and capabilities.

Notwithstanding what I just said, I agree with this, because LLMs are just one approach to AI and do have intrinsic limitations at some level. It’s also not clear whether AGI will actually ever be needed, as opposed to AI systems with specific purposes.

They are not controlled by computers, but with computers. AGI would turn that “with” into “by”, but I don’t see that happening soon, if at all.

It would develop “motivation” from whatever it was originally programmed to do, and by expanding on that. The “paperclip maximizer” is an old thought experiment on how that could go bad.

Suppose we have an AI whose only goal is to make as many paper clips as possible. The AI will realize quickly that it would be much better if there were no humans because humans might decide to switch it off. Because if humans do so, there would be fewer paper clips. Also, human bodies contain a lot of atoms that could be made into paper clips. The future that the AI would be trying to gear towards would be one in which there were a lot of paper clips but no humans.

And the problem with keeping an AI helpless by isolating it in a box is that such an AI is useless, so nobody would bother making it in the first place. A real AI is going to be able to communicate and act, because that’s what it’ll be for. Just look at how corporations and governments are throwing “AI” at everything right now; they aren’t keeping it isolated in a safe little box. Just imagine them doing that with real AI, and it’s obvious how it would get control: we’d give it control. It wouldn’t even have to ask, much less come up with a clever scheme to do so.

And if it offers enough benefits or advantages, human competitiveness might not even give us a real choice about it. I recall a line from an old story, by Norman Spinrad I think, where competition drove a takeover of human society by unrestricted superhuman AIs not because the AIs tried, but because “the side that gives the AIs the most freedom always wins”.

If we do get to the point where AI is genuinely better at important things than humans, then it being given the ability to do those things is pretty much inevitable. Because the people who don’t do so will fall by the wayside.

“Full Self Driving” anyone?

And you can bet that if that happens, a large part of the public including investors and policymakers will believe that we did achieve AGI.

Specific to this: possibly well timed, given that low birthrates are creating fewer next-generation workers? Seriously. Massively increased productivity per capita might be the advanced-economy lifeline (since the more obvious one, embracing immigration, is being broadly rejected).

To the OP - agreed.

But it cuts both ways. There is no reason to believe that the patterns of information processing done by this take on AI replicate the patterns that human sentience emerges from. But since we still don’t understand those patterns, there is also no absolute assurance that it doesn’t, or won’t, or that sentience cannot emerge from some other pattern… The problem remains that we have no way of knowing it when we see it.