Is the AGI risk getting a bit overstated?

I’m sorry for mischaracterising your arguments.

This isn’t quite the point I was making, but as I mentioned upthread, these problems are philosophical and pretty much independent of whatever technology ends up being implemented - they are problems of communication, control, goal-setting and other abstract matters that will apply in some form regardless of how we build things.

The same philosophical problems also apply to interactions between humans, but the reason we don’t normally see runaway conditions there is that humans are typically fairly evenly matched in capability and power, including being occasionally fallible. In cases where there is a significant mismatch of capability or power, things easily run away into scenarios like slavery, genocide, fascism, etc.

You are talking about humans. You can take it as read that they are. Perhaps we should be rooting for the AI in this scenario.

I can readily see why all of this would capture the imagination of nerds the world over - I love science fiction, too. But I haven’t seen a shred of evidence we are anywhere near this. I don’t even know how we could convincingly argue it’s possible.

Seriously, I have been trying to dig into this mess to see what’s really there, and the deeper I dig, the more it seems this is all a house of cards built on society’s hopes and dreams, paranoia and philosophical abstractions - not actual evidence of anything, and definitely not a solid business plan.

I mean, we realize the data centers are driving us faster and deeper into the climate crisis, right? It feels self-evident to me that AGI and resolving the climate crisis are mutually exclusive. At the very least, the latter will certainly undermine the former.

Yeah, but there is a lot of that in the SiVal biotech sector, especially with respect to genetic modification and life extension, with promoters hyping supposed results that are at best wildly exaggerated and sometimes completely fabricated. Theranos was kind of unique, though, in that they were providing a diagnostic service which could be compared to actual standard lab blood panels and could be used to assess serious medical conditions. They weren’t just screwing investors; they were exposing the companies that would have been using them to liability and putting actual patients at risk.

Stranger

How dare you criticize my religion and the coming of techno-jesus who will solve all our problems for us as was foretold in the prophecy of Kurzweil.

I can’t see how humanity can avoid AI that is better than us. Computer science didn’t exist until the 1940s. The field of AI didn’t exist until 1956. It may not happen tomorrow, but eventually we will develop hardware and software advanced enough to build AI that is more competent than biological brains at everything, the same way that we’ve developed machines that are better at activities requiring biological muscle.

Humans are just chimpanzees who spent the last 3 million years tripling our brain size. As a result we have a global civilization while chimpanzees live in the forests scared of thunder. Our 16 billion cortical neurons are not that impressive compared to what is potentially possible.

You have to keep in mind this is a debate among world leaders in AI. Some think it’s years away, some think it’s decades or more away. But for the most part they all seem to think that, sooner or later, machine cognition will surpass biological cognition in essentially every area. Biological brains aren’t getting better with each generation, but machine cognition is constantly improving.

But even if AI doesn’t lead to massive changes quickly, that doesn’t change the fact that it will serve useful purposes in society. Nobody expected Facebook to lead to the Arab Spring, and nobody expected cell phones to be used by African fishermen to find the best markets for their products. Nobody expected online banking to play such a major role in remittances in the developing world.

As far as climate change goes, right now data centers make up less than 1% of global emissions. It’s a concern, but I don’t see why it would necessitate abandoning AI research. The world emits more CO2 from air conditioning.

Serious AI would almost certainly not be such a huge energy-guzzling boondoggle in the first place. Real AI isn’t going to be a glorified chatbot that requires massive amounts of energy to do a bad job. In fact we already have a proof of concept: the human brain is much smarter than this so-called AI, and uses less energy than a light bulb.

As it happens I agree with you that we aren’t near real, human-equivalent AI, much less superhuman AI. But barring a collapse of civilization I consider it inevitable; we are proof that human-level intelligence is possible, and I find it highly implausible that we just happen to be the most intelligent beings the laws of physics permit. That just smacks of human egotism.

The assumption seems to be that they will improve indefinitely, even though we’ve already seen recent evidence that scaling up doesn’t always lead to massive improvement, right?

I don’t think we have the kind of future where we can devote that much real estate, resources, electricity and water to AI data centers and still have a functional society. That’s a big piece of why I’m so skeptical about this. The business model appears to be financially unsustainable on its face, and the expansion of these data centers is just getting started. I don’t know if you are aware of how controversial they are because of the local environmental impact on resources and infrastructure, the costs to residents, the massive water and power consumption required, and how all of this affects the surrounding communities. Right now in my state, my alma mater is planning to install one of these data centers in a neighboring community without that community’s permission or involvement of any kind, regardless of the impact it will have on them, and residents are irate.

This article discusses some of the general issues here.

That’s not even getting into the climate crisis. All of these local issues will be exacerbated by global warming.

I seriously doubt we have the capacity to dump even more resources into this without crippling our own infrastructure. And the more resources we put into it, the worse those issues are going to get.

And these are the same people who are going to redefine AGI as soon as it benefits them financially. They’re not the ones whose pronouncements about reaching AGI I’m going to trust.

I understand pretty fully how computers work. I understand programming at a very low (and high) level. There are at least tens of thousands of people who have a similar or greater understanding in that area. While I’m not a neuroscientist, my understanding is that even the very best in that field are nowhere near understanding how the brain fully works and interacts with the various inputs and outputs of our body. We don’t yet know if we’ll reach true AGI without having that understanding, so the fact that human brain evolution has stopped or slowed to a crawl doesn’t necessarily mean that computers will pass it any time soon. I’m still of the opinion, for what little it’s worth, that several decades is the minimum amount of time before we get there, and I wouldn’t be shocked if it took over a century. Yes, even with our extraordinarily fast technological progress.

As an aside, there is a highly controversial theory known as Orchestrated objective reduction (Orch OR) that would mean that our neuronal microtubules can handle quantum processing. If that turns out to be true, the technological barrier to brain simulation just got a shit-ton harder.

This assumes that AGI is a boolean state and appears instantly (or nearly so). What if it is achieved slowly over decades with several ‘AI winters’?

There isn’t a common definition of AGI, but for this topic I take it as shorthand for “AI systems advanced enough to act with malice or extreme negligence such that they could endanger humanity”. There’s no guarantee it will happen or even can happen. Why should we worry about it?

If we consider the moment that the AGI starts its plan – what does the world look like 1 year earlier? 5 years? 10 years? There are a lot of legit issues leading up to that point that are more likely and more concerning: the environmental impacts that @Spice_Weasel raised or the socio-economic ones that @msmith537 raised.

I don’t think there is a general consensus among the scientific community that AGI is an existential threat, but instead that we are not putting systems in place now to deploy AI responsibly (regular or AGI). There isn’t a more succinct example of this than OpenAI shrugging off their watchdog charter to go all in as a for-profit.

Even if it takes a century, it’ll still happen. That’s my point. Whether it’s 5 years or 10,000 years, eventually machine cognition will vastly surpass biological cognition. Will I live to see it? I have no idea. But it’s going to happen; I don’t see how it’s avoidable. There are too many rewards from advanced machine cognition. Advanced machine cognition will dramatically accelerate advances in science, medicine, the military, economics, etc. World GDP grew at 0.1% a year before the industrial revolution. Now in some developing nations, GDP can grow by 10% a year.

As far as Orch-OR goes, I don’t know if that really matters. A jet plane doesn’t work the same way as a bird, but it flies far better than a bird. A sedan doesn’t move the same way a horse does, but it’s a far better method of transportation. The idea that reverse engineering the brain is the end goal isn’t something I agree with. Cognition and problem solving may involve architectures totally different from evolution-inspired neuroscience. The idea that natural selection’s biological processes are the best that can be designed in the universe is very limited in scope.

About 10 years ago, DeepMind was learning to play video games. Now it’s winning Nobel Prizes for its work predicting the structures of some 200 million proteins. People like myself weren’t watching DeepMind play video games in 2015 thinking that was the end goal. We knew that this was just training until it could do something more useful.

Even if LLMs level off and don’t get better, that doesn’t change two things:

  1. LLMs are just one form of AI. Newer, better, different forms of AI are going to keep being discovered and invented. The hardware to run them will also keep getting better.
  2. We have barely scratched the surface of what LLMs can accomplish, since we’ve only really had them for a few years.

I’m not sure if better is the word I’d use. Faster, certainly.

In an apocalypse, I’d much rather find a herd of wild horses than a sedan.

What if we figure out how to pull this off in theory, but it requires more power than we can provide? There are way too many unknowns to start making absolute statements. Right now, we’re still in the phase where most of the missing pieces are what we call unknown unknowns.

Sure, but we knew how to achieve those goals with deep reinforcement learning long before we had the computing power to pull it off. DeepMind wasn’t doing something novel, just something powerful. Same thing with CNNs. We knew the kinds of things we could do with them long before we had the computing power. We have no idea what the model(s) that reach AGI will look like. At all.

I completely agree with this.

I’m not convinced that this is true. Hell, we already have to supplement them for things that should be relatively trivial. ChatGPT can add 145779857123 + 1987958132759, for instance, but only because it farms that out to a standard math program. An 8-year-old can pull that off with a number 2 pencil. LLMs are quite limited in what they can and can’t do, and knowing how they work, I’m not seeing how they overcome such limitations. If you mean “that giant thing that we call ChatGPT, which also includes many models, modules, and hard-coded edge cases external to the LLM because LLMs are very limited”, then sure, they’ll improve.
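To make the point concrete, here’s a minimal sketch of that kind of delegation, written as a toy wrapper rather than any vendor’s actual tool-calling API; the routing logic and function names are purely illustrative:

```python
# Toy illustration of "farming out" arithmetic to ordinary code instead of
# letting the language model predict the digits itself. The router and
# function names here are hypothetical, not any real chatbot's internals.

def add_tool(a: int, b: int) -> int:
    """Exact integer addition, done by plain code rather than the model."""
    return a + b

def answer(question: str) -> str:
    # A real system would have the model emit a structured tool call;
    # here we just pattern-match on "+" to keep the sketch short.
    if "+" in question:
        left, right = question.split("+", 1)
        return str(add_tool(int(left.strip()), int(right.strip())))
    return "(fall back to the language model for everything else)"

print(answer("145779857123 + 1987958132759"))  # prints 2133737989882
```

The exactness comes entirely from the plain-code tool; the model’s only job is deciding to hand the problem off, which is the supplementation I’m talking about.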

Neither malice nor negligence is really the scenario implicated by the alignment problem, and the malice one is just an SF trope anyway.

What seems more likely to me is exceptional competence and efficiency, with the threat instance arising from an observation (by the AGI) that the whole thing would really work much better without humans getting in the way all the time, impeding the goal and squandering the resources needed for the objective.

Because with any risk, ‘it might happen’ is what gets us thinking about avoidance and solutions, whereas ‘it might never happen’ is what gets us to ignore the risk until it is too late.

Too late to edit, but to add - the key reason for this is that the majority of effort in commercial AI development appears to be targeted at the creation of optimisers - machines that will supposedly save us time, effort, money and the headache of hard thinking, by creating the most optimal (quickest, cheapest, smartest, minimal) solutions to any given problem.

Humans are not often the most optimal component of any scenario; the existential threat is that we are optimised out of existence.

I think there are enough real problems with competing and conflicting interests without inventing silly ones.

There’s also the fundamental question of how much you want your life to be decided by a fucking machine! Countless stories from sci-fi (I, Robot, The Matrix, Westworld, Idiocracy, Logan’s Run) share a common theme in which everyday decision-making is handled by AI.

To what extent do we want algorithms making decisions on jobs, careers, health care, dating, education, or other aspects of our day to day lives?

To echo what @Mangetout said - humans are inherently imperfect. That also makes us more interesting. I think few people want to live in a society where machines and algorithms are constantly evaluating them to become ISO-standard optimized humans.

I think that companionship would be a good market. LLMs are already filling that slot, but are not yet convincingly sentient - to many people, at least. It’s a good enough simulation that many are fooled. Enough to have a large market. But you can still have something that looks like a very large market with only 0.01% of the world’s population - that’s still on the order of 800,000 people. What if 10% of the world’s population wanted a digital companion instead?

In order to be a convincing companion you need to know a modest amount about every domain a human would typically care about. AGI would do that. And would be much more adaptable than LLMs. And you’d need a very broad spectrum of knowledge since humans have diverse interests.

It’s possible that LLM chatbots will become so convincing that their market will become saturated, especially if they get paired with convincing robots, but it’s also possible that you need AGI to fully reach everyone who would want one.

Because scaling up a chatbot just gets you a more expensive chatbot. ChatGPT and so on are about three-quarters scam, and a terrible standard by which to judge the potential, or lack thereof, of AI.

Interesting application, and one that LLMs already appear to be establishing.

Does inserting AGI pose risks?

It’s not only about scale. In the long term, the relationship between the scale of computing power and the power of AI is that larger scales enable completely new AI architectures. So, while Large Language Model capabilities will undoubtedly start to plateau at some scale level, they couldn’t exist at all without modern computing technology. Even if someone had conceived the brilliant idea of LLMs back in the 80s or 90s, they couldn’t have been built. Even today we’re pushing the limits of the hardware technology.

Whereas I’m optimistic, because even if Moore’s Law isn’t exactly applicable any more at the chip level, computing technology continues to improve. The first practical computers required a room about the size of a small auditorium and a corresponding amount of power. Even in relatively modern times, I remember back in my college days an entire power substation had to be built on campus to power the academic computing center. Today there’s more computing power in a typical tablet or cell phone.

We already have evidence that using LLMs degrades cognitive capacity. The more you outsource your thinking to something else, the less equipped you are to make fully informed, rational decisions.

One of the major problems I have with LLMs is that they draw from a lot of inaccurate or at least controversial data. The best example I know of is the field of psychology, which contains some pretty well researched interventions and theories and some interventions and theories that are absolute BS. I don’t think the bot can really differentiate because the Internet can’t really differentiate.

So someone could look up psychoanalysis and learn all about psychoanalysis but never actually get the message that psychoanalysis is full of shit. Or psychodynamic therapy or polyvagal theory which is all the rage right now despite being directly opposed to consensus neurology.

(I just tested this with Google’s AI. I searched “polyvagal theory” and it was presented as fact.)

I’d like to see a credible cite for that, because I don’t believe it. What GPT has done for me is given me more information than I would normally have access to. My brain still works, AFAICT. Just like I can still do arithmetic in my head despite having a virtual calculator that pops up with the press of a function key.

Physical capabilities are a different beast. Due to a lifetime of keyboarding I can’t write worth a shit with pen and paper.