Is the AGI risk getting a bit overstated?

Maybe, but I kind of doubt it. A “working simulation of the brain” with sufficient fidelity to actually produce animal-like behavior is essentially going to be as complicated as the brain itself, and we are distressingly far from understanding how essential functions of the brain work at the level of organism behavior even in pretty simple creatures. I suspect we’re going to end up with something that may be a functioning brain but is still kind of a black box in terms of the operating principles that produce its emergent behavior. The ‘soft physics’ of neurophysics is something that makes even hardcore researchers scratch their heads as to where to start.

The desire for AGI is really one of people wanting to just eliminate the messiness of human employees (or, for some, human relationships) altogether. There is no other clear advantage to AGI; even the assumption of ‘efficiency’ from an agent which doesn’t need to sleep, eat, or take vacations is questionable if you are having to deal with an entity which may have nothing more motivating to do than outthink its supervisor. Purpose-specific ‘AI’ such as ‘deep learning’ tools already have very useful applications, and we can only assume that will increase in many domains, but for those purposes an LLM will probably serve as just an interface for natural language control and communication, not the core reasoning or governing capability.
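To make that split concrete, here is a toy sketch of an LLM acting purely as a natural-language front end to a purpose-specific model. Every function and name in it is a made-up placeholder, not any real API:

```python
import json

def call_llm(prompt: str) -> str:
    """Placeholder for a hosted language model; hypothetical, not a real API."""
    raise NotImplementedError

def predict_demand(region: str, week: int) -> float:
    """Placeholder for a purpose-specific model (say, a supply-chain forecaster)."""
    raise NotImplementedError

def handle_request(user_text: str) -> str:
    # The LLM's only job here: turn free-form text into a structured request.
    structured = call_llm(
        "Extract JSON with keys 'region' and 'week' from this request:\n" + user_text
    )
    args = json.loads(structured)

    # The actual reasoning/prediction lives in the domain-specific model.
    forecast = predict_demand(args["region"], args["week"])

    # And the LLM's other job: phrase the numeric result as a readable answer.
    return call_llm(f"Explain this demand forecast in one sentence: {forecast}")
```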

This raises the question: what does “increased productivity per capita” even mean in a society where there are few jobs? Will we distribute wealth more evenly, or have a more egalitarian society where people are encouraged to follow their passions and interests rather than grind away at corporate jobs or serve overpriced coffee? Somehow, I don’t think that is the vision of most of the techbros trying to hype their AI-based solutions, and they certainly don’t seem inclined to share their extraordinary wealth with society or even pay their fair share of taxes for all of the infrastructure and education it provides to support their industries.

Stranger

Oh, no idea. But at the societal level, fewer workers require higher productivity per capita just to keep total output from declining. Will that be a society with an adequate safety net or a dystopia with ever-increasing wealth and power inequality? That is up to the next generation to decide and fight over.

AGI will be “5 years around the corner” for the next 50+ years IMHO.

IMHO the greatest risk we are seeing with AI is the tremendous amount of money, brainpower, energy, and other resources being devoted to a technology of dubious real value in terms of actually improving people’s lives, one that in many ways is making people’s lives worse.

If displaced workers don’t have any income to purchase products, what does “higher productivity per capita” even mean? That Jeff Bezos can buy more megayachts so he can have a flotilla?

Based upon current trends we’re certainly going to need nuclear fusion to power it, so it is at least ten years out for the foreseeable future.

You say risk, Marc Andreessen says “Becoming Technological Supermen”.

Man, these people are fuck’n weird.

Stranger

This. I really don’t see the fantastic claims and predictions living up to the hype. The faith that some people have in the revolutionary power of AI really seems like magical thinking or religion more than realism. That won’t stop them from dramatically worsening the climate crisis to fuel their investors’ hopes, or selling their tech to the government so it can hasten the rise of authoritarianism. AI is like third down on my list of Things Seriously Freaking Me Out Right Now, but it’s not generative AI I’m worried about, it’s what AI is doing to people right now.

Of course they are fuck’n weird. They’re all fucking dorks and nerds. And I don’t mean in some eccentric-but-likable, smart-oddballs-with-a-bunch-of-eclectic-interests sort of way. I mean in a narcissistic, don’t-really-understand-like-or-care-about-other-humans, only-obsessed-with-their-own-interests-and-toys, evil-genius sort of way.

Marc Andreessen said “the myth of Prometheus – in various updated forms like Frankenstein, Oppenheimer, and Terminator…” Well, Oppenheimer was an actual person, and he himself said “now I am become Death, the destroyer of worlds” in recognition of the potential implications of his new invention.

While I am into new technology as much as the next guy and have more or less made a career of it, I also think it’s disingenuous for these people to not recognize the double-edged sword and unintended consequences of their grand achievements.

Ultimately Marc Andreessen is a venture capitalist and a billionaire, so his predisposition will be to support whatever course of action makes him more money, and he will largely be insulated from any negative consequences.

I suppose my question is: if all this technology is supposedly making our lives “better”, why do we need to keep being told how much better it’s making our lives? Because I don’t really get a sense of most people being like “oh look how much more stuff I can buy now thanks to my secure and interesting job to put in my much nicer home where I watch all this much better entertainment while having all these wonderful discussions with smart, thoughtful people!”

What I sense is that most people are lonely and isolated, can’t afford stuff in the real world, feel constantly watched and monitored and marketed to, and are inundated with a cacophony of noise and bullshit designed to influence them to act in the interest of the top 0.01%.

One clear advantage from the viewpoint of corporate leadership is that AGIs wouldn’t be legally human nor have any rights, so you can use them as slaves. No need to pay them, treat them well, or anything like that.

And wrong. From your link:

Techno-Optimists believe that societies, like sharks, grow or die.

That’s an ignorant and stupid thing to believe. And weird and revealing as a metaphor. All I see is a person you should not trust.

Thanks for the link! It goes straight into my favourites folder “AI, Bitcoin, Social Media & Amazon”, which is getting long, btw.

It’s telling to contrast 1994 with 2024. Two years with massive technical disruptions.

In 1994 there was unbridled enthusiasm and optimism about the Internet. Each day we were finding new ways it could do something useful. Reality was outpacing the hype.

In 2024 we’re worried what our kids are going to do for a living while finding out GenAI is mediocre at a lot of things. The real innovation is that we’ve trained an improved generation of hype mongers.


Regarding the OP:

AGI is zero threat to humankind. It’s a fear that a computing system could develop malice towards humans when we already know humans can and will do this. Long before we get to AGI, humans will use pre-AGI to consolidate power, information, and compute and then extract everything they can from humanity.

It’s like climbing Mount Everest and worrying about getting struck by lightning.

I don’t think true AGI is even required to realise this sort of risk - the current tranche of LLMs can be given what amounts to autonomy by prompting them inside a loop and giving them the ability to send outputs to real-world controls (a toy sketch of that kind of loop is below). There are multiple ways in which AI can do bad things to us - including out-thinking us (in the case of ASI that is genuinely way above our level of intelligence and cognition) or outpacing us (in the case of AGI that’s approximately our own level of intelligence but works faster or more in parallel than we can), but also in the case of regular old AI doing stuff where we can’t really see all of what is going on until it’s too late.
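Something like this minimal, purely hypothetical sketch (call_llm and run_tool are stand-ins, not any real vendor’s API):

```python
# Toy sketch of an LLM "agent" loop: the model is re-prompted with the result of
# each action it takes, so it effectively acts autonomously until it decides to stop.
# call_llm() and run_tool() are placeholders, not a real library or vendor API.

def call_llm(prompt: str) -> str:
    """Placeholder for a call to some hosted language model."""
    raise NotImplementedError

def run_tool(name: str, arg: str) -> str:
    """Placeholder for real-world effectors: shell commands, HTTP requests,
    actuator controls, etc. This is where outputs reach real-world controls."""
    raise NotImplementedError

def agent_loop(goal: str, max_steps: int = 20) -> None:
    history = f"Goal: {goal}\n"
    for _ in range(max_steps):
        # Ask the model what to do next, given everything that has happened so far.
        reply = call_llm(history + "\nRespond as 'TOOL <name> <arg>' or 'DONE <answer>'.")
        if reply.startswith("DONE"):
            print(reply)
            return
        _, name, arg = reply.split(" ", 2)
        result = run_tool(name, arg)                  # side effects happen here
        history += f"\n{reply}\nResult: {result}\n"   # feed the result back in
```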

Even regular old LLMs have been shown to demonstrate self-preservation and hidden goals.

This isn’t how I would describe the threat of AGI and I think it plays too much to the evil robot SF trope. AGI doesn’t need to turn evil in order to be a threat to humans; it merely needs to notice that humans are an obstruction to the continued pursuit of its goals.

I mean, I agree with your broader point that we might mess up in other ways first.

Leopold Aschenbrenner, who worked at OpenAI, feels that we may reach a point where AI can perform at the same level as a top-level AI researcher. When that happens and we have billions of AI agents as competent as world-class talent working on the software, we could see some pretty dramatic growth in AI capabilities in about a year or so. Even if the hardware remains the same, the advances in software alone could drive that kind of growth.

I was just thinking that. I started my career in the mid 90s working in and around the Rt 128 loop in Boston, which at the time was a tech hub rivaling Silicon Valley or New York. There was really a sense for how this new technology was creating new jobs and entire businesses.

In contrast, the main benefit to GenAI seems to be eliminating high paying jobs and generating low-quality hype slop.

Perform what though? Capabilities to do what exactly?

I get using AI or advanced machine learning for stuff like drug research or optimizing traffic patterns or supply chains or whatever. But unlike the internet, I’m not seeing a whole lot of entrepreneurial benefits to the average person. Even if I use AI to create my “virtual C-suite” what fucking product am I selling?

Probably why 90% of AI businesses are failing.

The point of course was that in the scenario there aren’t many workers to hire, let alone displace. Fewer and fewer every year, as older workers retire or die and fewer than replacement-level numbers of new folk reach working age.

In that scenario, societal-level productivity, including the ability to care for the greyer group, decreases unless productivity per capita massively increases.
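By way of toy arithmetic (the numbers are invented purely to illustrate the point):

```python
# Made-up numbers: if the workforce shrinks while the total population stays roughly
# the same, output per worker has to rise just to keep output per person flat.

population = 150                     # everyone, workers and retirees alike
workers_now, workers_later = 100, 80

output_per_worker_now = 1.0
output_per_capita_now = workers_now * output_per_worker_now / population

# Output per worker needed later just to hold output per capita constant:
needed = output_per_capita_now * population / workers_later
print(needed / output_per_worker_now)   # 1.25 -> a 25% productivity gain just to stand still
```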

Personally I am expecting that these tools will increase productivity but do not expect the massive increases that the hype promises. Therefore I see demand outpacing supply for workers as the more likely problem.

Why, you are selling AI services to other virtual C-suites so the owners can reap the profits and lounge around in a drift pool getting soused on fruity daiquiris while the unsophisticated masses toil away in the fields for their dole of processed mycoproteins and lentils.

Stranger

I agree, and I believe it has been the case for some time. But it puts equal significance on experience and education. The demand is for two years of college and four years of experience. Tough to get in our current society. There used to be hobbies and clubs that encouraged learning and opened opportunities. But today it is all buy and fly. I got experience by volunteering at the California Academy of Sciences’ Steinhart Aquarium. That led to my first job out of school. Today those opportunities are monetized. Canned, glossy summer ‘camps’ that you pay for and never get close to hands-on work.

A side effect of this is that my grandkids are focused on things like vaping and gender identity instead of building a hot rod. How do you get experience in AI?

I very much doubt our population is going to crash enough to make replacement of people by AIs essential. Now, there are some job segments where AIs don’t work very well - nursing, child care, maybe teaching. But our society does not put a premium on any of these. If salaries in these areas start to increase, we might have a chance.

Sure, this is not the only way, but it does allow us to examine what is going on in more detail. This is kind of the ultimate mind-body problem experiment. If we replicate the body, in this case the brain, and get a mind, we can be somewhat assured there is no external force acting. It might show us how we can redesign computers to have the characteristics that give us mind.

It would be more complicated than the brain, since we’d need to put instrumentation in. But we obviously are not going to build one big brain simulation program. We’d build hardware with digital representations of physical brain components instead of the standard cell libraries. Sure, neurons are basically analog, but I’ve seen enough papers about digital implementations of analog functions like radio to know this isn’t a problem. The multi-level logic the brain uses can be handled as well.
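Purely as an illustration of what a “digital representation of an analog component” might look like, here is a discretized leaky integrate-and-fire neuron; the model and parameter values are textbook simplifications, not anyone’s actual design:

```python
# Toy discretized leaky integrate-and-fire neuron: an analog membrane equation
# stepped in fixed time increments, the way a digital implementation would do it.
# Parameter values are arbitrary illustrative choices.

def simulate_lif(input_current, dt=0.001, tau=0.02, v_rest=-70.0,
                 v_thresh=-54.0, v_reset=-80.0, r_m=10.0):
    """Return spike times (seconds) for a list of input current samples."""
    v = v_rest
    spikes = []
    for step, i_in in enumerate(input_current):
        # Euler step of dV/dt = (-(V - V_rest) + R*I) / tau
        v += dt * (-(v - v_rest) + r_m * i_in) / tau
        if v >= v_thresh:              # threshold crossing -> emit a spike
            spikes.append(step * dt)
            v = v_reset                # reset the membrane potential
    return spikes

# Constant input current of 2.0 units for one simulated second.
print(simulate_lif([2.0] * 1000))
```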

It is going to take a lot of work as we find that our simplistic ideas of how the brain works don’t make a working brain when implemented.

It won’t be a black box, or at least no more than a modern microprocessor is a black box, because we can instrument it. It won’t work as fast as a real brain, for sure. And we’d of course do simpler animal brains first.

In the long run, any capability that requires cognition will be something AI is good at. I’ve already used AI programs to help me interpret medical documents, to help me understand a family member’s mental illness, to help me with tech issues, to help me with work projects, to help me understand my own medical issues, etc.

Even if there is a boom/bust cycle of AI like there was in the late 90s with the internet boom/bust, that doesn’t change the fact that over the long run AI is going to keep getting better and better. In 2019 AI could barely form coherent sentences, now it is competing with and outperforming people with doctorates in many areas.

I have no idea what role AI would play in entrepreneurship. I’ve never used it for that. But it’s possible that by the 2030s, AI will be to us what we are to chimpanzees. Even if that doesn’t happen in the 2030s, it’ll happen eventually. Even if LLMs hit a ceiling, that doesn’t mean we won’t eventually find some other architecture that is vastly superior to LLMs, transformers, and deep learning.

Couple of questions:

Some have pointed out that if AGI has 90% of the same training set as current AI, then all you get is a better search engine. Maybe great, but not earth-threatening. Can AGI be created with the same training set?

AGI requires experience. It needs sensors that reach into the grease and dirt. It needs the ability to influence and test the operation of physical systems. It needs to test components to destruction. Can this process be forced, or will it be the natural outgrowth of the application and development of current AI?