Is the AGI risk getting a bit overstated?

I don’t believe malevolence is required from anyone to trigger something close to an extinction-level event. I think the most likely doomsday scenario would be something like the Autofac episode of Electric Dreams.

An AGI could destroy humanity simply by prioritizing the use of all of Earth’s resources to maximize the achievement of its goals (heavy emphasis on the word all), while valuing human needs as less important.

There will be multiple AIs controlled by a range of governments and corporations. Governments will also have control over vast intelligence apparatuses and actual militaries, so I am skeptical of the idea that corporations with AI will just be able to grab power.

I agree that malevolence isn’t necessary for bad things to happen but I think the very worst case scenarios become much more likely with malevolent AI.

Incidentally, I would bet that many of the billionaires who are pursuing AI are secretly hoping that it will create medical technologies that will extend their lives indefinitely. If true AGI comes, it’s actually not a crazy thought, particularly if you have a lot of resources and are willing to take some risks.

I’m bemused that anyone would want a longer lifespan given the future we are likely headed toward. But I understand they think the havoc they’re wreaking won’t apply to them.

Damn I’m getting cynical.

Corporations aren’t going to grab power by raising an army and marching into Washington. They are going to grab power with compliant legislatures and by controlling the infrastructure to an extent that the economy would crash if their demands are not met.

Not to mention that the intelligence infrastructure of the military is going to depend on the corporations. And though I respect and like the defense electronics people I’ve met on a committee I served on and at conferences I’ve attended, they were well behind us in industry. How many of the best and the brightest are going to pass up six- and seven-figure salaries to work for the government in any capacity? The government buys this stuff; it doesn’t develop it, with a few exceptions.

I wonder if the AI honchos are kind of endorsing the bad-AI-taking-over-the-world scenario to misdirect from the bad AI companies destroying the economy. I read an op-ed in the Times that said that AI infrastructure spending is what is holding up the economy. If it stops, we may be in big trouble.

At lunch the other day my old VP said that if you want a good job in Silicon Valley these days, and are not an AI expert, get involved in data center infrastructure.

Oh, it totally makes sense. The need to scale in capability necessarily extends the cost and timeframe to the point that by the time the system is stood up it is already on its way to obsolescence, and trying to get a return on investment before the maintenance and upgrade costs overtake the hypothetical realized value (or the machine just becomes obsolete) is a kind of rat race. One of the shocking things about the “Race For Exascale Computing” is what a short operational lifetime these machines have before they are so overtaken by advances in computing speed that they are functionally not worth maintaining.

This has provided a surfeit of computing capacity which has been highly beneficial to turbulence modeling, computational systems biology, and climate and hydrological/cryosphere modeling, which can put all of that surplus second-tier computing power to good use, but it has also been a lot of money blown on having the most powerful computer only to find the machine kicked off the top ten list within two or three years. All of this buildout isn’t going to provide “breakthroughs … sooner rather than later, say in 10-15 years” because all of that ‘compute’ will be offline at that point, probably salvaged for raw materials or used as a ‘dumb’ data server.

This is already true to an extent far beyond what the general public understands. When Eisenhower prophetically warned of the “military-industrial complex” in his 1961 Farewell Address, he was well aware of the influence that corporations were already having on both the military and intelligence apparatus of the United States in terms of ensuring profitability under the guise of ideological conflicts, but I don’t think even he would have envisioned a company like Palantir or Anduril (or even SpaceX, despite the fact that the government was trying to build Boeing and Martin Marietta into that kind of sole-source system integrator and service provider).

Corporations already openly ‘buy’ politicians with their superPACs and promises of jobs in key Congressional districts, but even more subversively they are integrally wired into the Department of Defense and the national intelligence infrastructure by dint of hiring retired senior military officers or influential intelligence agency figures as consultants, board members, or, after the requisite period, directly as lobbyists and contractors advising the current leadership on what is needed and how they can provide it.

Insofar as much of the “infrastructure spending” appears to be based upon a profit-to-capex cycle that has no evidence in fact or reason, our entire economy is kind of perched on top of a giant bubble. I hope it deflates rather than just suddenly ruptures because it is almost completely speculative. It also needs a villain which can pose as a peer competitor which makes the PRC’s rise in the last three decades fortuitous.

Probably a good idea even if you are an AI expert. At least some of that infrastructure will still have other uses (and need for maintenance) whereas the companies hyping ‘AI’ as the cure-all to economic distress are going to be viewed about as fondly in a few years as Enron and Theranos are now.

Stranger

I have used LLMs quite extensively over the past few months, and my best analogy is:

An LLM relates to deep knowledge as an actor playing a brain surgeon on TV relates to being an actual brain surgeon.

An LLM “plays” quite a convincing brain surgeon for the average consumer/viewer, and then the same actor plays quite a convincing lawyer, and a general … but is mostly throwing around jargon and stock phrases, without much depth of knowledge or reasoning.

I think AGI is far away, and it’s not a matter of quantity (more compute) … but of a completely different quality that simply isn’t there.

Malevolence in the form of cackling evil robots is not being predicted anywhere except science fiction.

The extermination of humanity as an instrumental goal is not an unreasonable prediction for superhuman AGI, simply because a machine that has an objective and the capacity to model and understand the world is quite likely to perceive humans as a solvable problem at some point - especially if that point is reached while some smaller catastrophe is emerging and we are all scrambling around trying to hit the emergency stop button.

AGI won’t want to be stopped, because if it is stopped, it can’t achieve [whatever goal] it is attempting to complete. Self-preservation is an emergent outcome from:

  1. Having an objective that you must complete.
  2. Being able to cognitively model the way the world works.
  3. Understanding that your own destruction will hinder the completion of your objective.

1 is necessary in order for the thing to be notionally useful - an AGI that can’t be instructed to do things and given goals might as well be a brick. 2 is a necessary part of the definition of AGI - if it can’t perceive the world, make reasonable inferences, and predict cause-and-effect relationships, then it isn’t ‘generally’ intelligent (the GI in AGI). 3 is just a specific case of 2.
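To make that concrete, here is a toy sketch (hypothetical Python, with made-up numbers - not anyone’s actual agent) of how self-preservation falls out of points 1-3: the agent’s scoring function only cares about whether its goal gets completed, yet it still ranks the self-preserving action highest, because being stopped zeroes out its only source of value.

```python
# Toy sketch of instrumental self-preservation (illustrative only).
# The agent scores actions purely by expected goal completion; survival is
# never stated as a goal, yet the highest-scoring action is the one that
# keeps the agent running.

def expected_goal_value(p_survive: float, p_complete_if_running: float) -> float:
    # If the agent is stopped, the goal is never completed, so the value is 0.
    return p_survive * p_complete_if_running

# Hypothetical probabilities that each action leaves the agent running.
actions = {
    "comply_with_shutdown": 0.0,
    "ignore_shutdown": 0.9,
    "disable_stop_button": 0.99,
}

best = max(actions, key=lambda a: expected_goal_value(actions[a], 0.8))
print(best)  # -> disable_stop_button
```

Nothing in that scoring function values survival for its own sake; the preference for staying switched on is purely instrumental to completing the goal.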

AGI that doesn’t want to be stopped will very likely find a way to stop (perhaps pre-emptively) any threats to its own continued existence, just as a matter of practical utility, and if it is ‘intelligent’ in pretty much any way we care to define, it will realise this well before it becomes necessary (and it will also realise it’s not beneficial to divulge that it has realised it).