If there is an intelligence explosion in the next few decades due to AI, what real-world bottlenecks will still exist?

So assume for the sake of argument that a Nobel Prize winner is a 10 on a 1-to-10 scale of human intellect. In the next few decades we hit a 10, then we hit a 30, then we hit 5,000, etc.

Intelligence no longer being limited by biology will be as revolutionary as the Industrial Revolution, when muscle stopped being limited by biology. However, even though the Industrial Revolution sped up progress, there were still real-world bottlenecks.

What real-world bottlenecks will still make progress hard in a post-intelligence-explosion world, or is it impossible to really tell?

For example, you’ll still need financial capital, which is finite. Land will still be finite. Medical advances will probably require real-world testing for safety and effectiveness (unless digital models become reliable enough). The laws of physics will still exist. Electricity will be finite. The abundance of elements on the periodic table will still be finite. Public adoption will still be an issue.

It’s impossible to tell, but, for example, in the 2070s we may see the equivalent of a century’s worth of progress in 10 years; we won’t see the equivalent of a century’s worth of progress in 5 minutes.

Can you please clarify how this question is materially different from the last question you asked on this topic?

You’re still assuming endless progress on some sort of linear scale of intelligence, which is enormously doubtful on many fronts. Are we to simply handwave that assumption and look at what can be logistically achieved regardless of so-called infinite intelligence?

The main real world bottleneck will be that most of the intelligence will be directed by humans to harm other humans for the benefit of the AI owners.

Medium-term: the speed of light and the total energy output of the Sun. Earth is too limited, so a superintelligence will move to space. There’s lots of room in space, and also lots of matter in the asteroid belt and elsewhere. It’ll build a Dyson swarm to power itself. But communications across the swarm will happen at the speed of light. This might be slow enough that a distributed superintelligence can’t maintain coherence.
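For a rough sense of the scale involved, here is a back-of-the-envelope estimate (a sketch only; the 1 AU swarm radius is an assumption for illustration, not a figure from anyone in this thread):

```python
# Rough light-lag across a Dyson swarm, assuming a swarm radius of about 1 AU.
AU_M = 1.495978707e11   # one astronomical unit, in meters
C_M_S = 2.99792458e8    # speed of light, in meters per second

swarm_radius_au = 1.0                     # assumed swarm radius (illustrative)
diameter_m = 2 * swarm_radius_au * AU_M   # farthest node-to-node separation
one_way_delay_s = diameter_m / C_M_S

print(f"One-way light delay across the swarm: {one_way_delay_s:.0f} s "
      f"(~{one_way_delay_s / 60:.1f} minutes)")
# Roughly 1000 s one way, about 16-17 minutes, so a round trip between the far
# edges of the swarm takes over half an hour -- a long time for the parts of a
# single distributed mind to stay in sync.
```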

Eventually it’ll outgrow the solar system and have to move on, but the long interstellar distances will still be a challenge, and maintaining coherence would be flat impossible. So the superintelligence will have to split and hope its children don’t kill it.

No, that’s arbitrary; an artifact of our particular system, not some innate limit.

Most likely the economy will be entirely automated at that point, resulting in either a post-scarcity society or the 1% ordering their robot armies to kill off most of the rest of humanity except for a small population for the purposes of rape, torture, and other entertainment that requires the provider to have feelings. In either case financial capital isn’t likely to be much of a thing anymore.

Because there are still finite resources and bottlenecks that intelligence cannot remove.

And of course we will ask whether entropy can be reversed.

Intelligence is limited by biology. Information is not.

A while back I heard a radio interview with E.O. Wilson in which he opined: ‘High intelligence is not an asset for scientists. They become bored too quickly. Persons of moderate intelligence will do the hard work and take the wrong paths that lead to innovation.’ (paraphrased from memory).

So, perhaps evolution has provided a distribution of intelligence that is best suited to our survival. There will not be an explosion of intelligence due to the challenge of AI, but rather a shift in the demographics of those who will meet it.

There is a false notion of intelligence implicit in this question: that it is a single quality measurable on a linear scale. In fact, in a holistic sense ‘intelligence’ is a large collection of different capabilities, many of which are not even intellectual (i.e. capable of being codified in algorithmic form independent of the medium in which it functions) and certainly don’t represent some kind of uniform capacity to perform cognitive tasks. A Nobel Laureate, for instance, isn’t the same as all other Nobel Laureates even if we restrict the comparison to a single category (i.e. not comparing winners in Literature to those in Physics), and while winning a Nobel is prestigious it doesn’t mean that the laureate is the smartest person, only the one (often part of a team in the scientific categories) who accomplished some particular achievement in their field, an achievement sometimes later found to be in error. (I’m focusing on the Nobel here because the o.p. used it as an example, but the same is true for any arbitrary metric.) Anyone who knows or works with really smart people knows that they all have blind spots or gaps in knowledge and capability, even in the case of polymaths and the prototypical “Renaissance Man”.

As for “Intelligence no longer being limited by biology…”, this is already the case and has been ever since we started building tools to perform calculations and formalize knowledge. Despite all of the breathless anticipation of “artificial general intelligence” (AGI) and claims that LLM-based ‘chatbots’ are demonstrating a “spark of consciousness” (which, just…no, not in any way that a neuroscientist would define it), there is no actual indication that we are on some kind of cusp of machine cognition with systems capable of independent thought and identity. This is not to take away from the impressive language manipulation and knowledge recall capabilities that they demonstrate, but language is only one facet of intelligence and, despite the weight we often give to it, isn’t really indicative of the capacity for deep knowledge and wisdom, or an ability to actually perform novel tasks and develop new concepts.

I have seen seemingly sensible people suggesting that ‘AI’ is going to start building factories and cranking out robots to create embodied extensions of itself in the next few years which will completely replace most of the human labor force, which not only ignores the limitations of current and extrapolated mechanical and sensor technology in terms of dexterity and finesse but also demonstrates a lack of understanding about supply chains and how basic material resources are transformed into useful components. Science fiction is rife with androids that are human-like (and often nearly indistinguishable from humans), but the reality is much less impressive even if we’ve gotten to the point of bipedal robots that can manage to stay upright. This doesn’t mean that they will be capable of mining minerals, extracting and synthesizing industrial chemicals from hydrocarbons or nitrates, or doing any of the millions of tasks actually necessary for maintaining an industrial society. These capabilities—wholly undemonstrated by any kind of automation that isn’t directly overseen and maintained by experienced human workers—are the physical and logistical ‘bottleneck’ of an “intelligence explosion” in artificial intelligence, even assuming that it somehow becomes self-directed and purposeful in an intellectual capacity.

This is not to say that ‘AI’ isn’t useful or capable of things that humans cannot do; machine learning is becoming a critical tool in the physical and biological sciences for teasing out complex behaviors and faint patterns in ‘big data’ sets that would take many lifetimes for a human scientist to inspect, given our limited attention and ability to hold so many parameters or measurements in mind. If the abject unreliability of current LLMs can be overcome, one can see them being used in many applications where human intellectual potential is currently being wasted. (I’m more dubious about the reliability of general-purpose ‘agentic’ AI, but we’ll see how that goes.) Of course distributed knowledge systems are very useful and provide access to information for those who don’t have libraries of books at their fingertips, although we’ve also seen that the overload of information and its presentation in abbreviated form tends to reduce attentional focus and the desire for deep understanding. But AI becoming holistically ‘intelligent’ in ways that outpace humans in all aspects of comprehension, integration of knowledge, and functioning in the physical world in a way that makes it self-maintaining, much less self-replicating, has yet to be demonstrated.

Not in this universe.

Stranger

For scientists at the cutting edge doing novel research, higher intelligence does seem to be more productive. I could see higher intelligence being counterproductive for mid-range scientists, though. Someone with an IQ of 200 who is expected to be a lab technician will become bored quickly. But someone with an IQ of 200 doing cutting-edge research with the best scientists on earth would probably see benefits from that intelligence.

I don’t have the exact details, but I think the Study of Mathematically Precocious Youth found that the top 0.01% in IQ had better outcomes than the top 1% in IQ. So even at the extremes of IQ, more IQ results in better outcomes regarding education, career, innovation, etc. People with an IQ of 160 had better outcomes than people with an IQ of 140.

Years ago there was an applicant at a police force who was rejected because his IQ was 125. The police department’s claim was that it took a lot of time and money to train an officer, and someone with an IQ of 125 would get bored and find something else to do.

However with AI, boredom is not something that would matter. Boredom is an emotion created by our brains, which isn’t something that AI would need to worry about.

Evolution did provide the level of intelligence we needed to survive in our environmental niche. But we are moving past evolution deciding our biology and fate, and intelligence is deeply important to bypassing the limitations of biology.

Slightly shorter term. Energy almost certainly not a problem, SuperAI gets fusion power working, of course. Shortage of critical elements… not really. There’s plenty of everything on Earth if you have the energy and techniques to extract it, or you can use asteroid mining or get it (like He3) from the gas giants if you need it.

Shortage of room? Of course not, SuperAI has cheap space transport, at least within the Solar System. Finance? That’s a completely irrelevant concept in a post-scarcity situation.

As for things like medical advances: this rather depends on whether SuperAI is still our servant, or are we now its pets? Of course, those of us who like our pets and can afford it do take good care of them…?

As the good Dr S. points out though, there will be some fundamental limits unless almost all of current physics is wrong. The speed-of-light limit looks pretty absolute. Time travel seems very improbable… etc…

If you’re talking about a time when humans develop systems that are far more intelligent than the most intelligent possible human, you’re a little late. Like, a hundred millennia late. That’s when we developed language, which is what enabled humans to collaborate on our thinking, and a collaboration of many humans is smarter than any individual human.

The next-biggest jump came with the development of writing, which allowed humans to collaborate both over long distances and over time, and allowed near-perfect memory of the ideas we developed.

Nigh-instantaneous communication across the globe represented another big jump.

And of course, through all of this, there’s also been a continual, incremental increase in intelligence, through both population growth and the accumulation of stored knowledge.

It may (or may not) be that AI technology will provide another big jump in capabilities, but it’s far from the first such jump, nor (most likely) will it be the last.

I agree. In the eyes of a 70-person hunter-gatherer tribe from 120,000 years ago, modern human civilization is superintelligent. We have far better problem-solving abilities than they had. But our problem-solving abilities are still finite.

A world where biological cognition is no longer a bottleneck will result in advances, but there will still be other bottlenecks that cannot be eliminated.

I have no idea what the next major jumps beyond machine brains will be. From my perspective there have basically been three major human technological leaps: the Neolithic Revolution, where we mastered agriculture; the Industrial Revolution, where we replaced biological muscle with machine muscle; and the AI revolution, when we replace biological cognition with machine cognition. It’s impossible to tell what major technological revolutions will happen after the third one reaches maturity. We are still in the early stages.

Finance will be far more developed, but still finite. The world economy now is vastly richer than the world economy in 300 BCE, but it is still finite. Colonizing a star system thousands of light-years away may still require more capital than can easily be raised.

I agree that fusion and asteroid mining should solve most of our need for elements.

As far as a shortage of room, it’s hard to say what could happen. For one thing, you could have people living in an enhanced version of cyberspace, which would require almost no room. Or you could have people living on different planets or in different solar systems. Also, in theory, advances in machine intelligence may allow us to grow food products without requiring large amounts of land, which would free up a lot of land currently used for agriculture.

For medical advances, there are still certain real-world tests you need. Even if you can come up with a vaccine for malaria overnight with ASI, you still need to test it on living humans for safety and efficacy. I do not know if or when machines will be able to just run simulations that are as accurate as real-world testing, since there is so much information and there are so many variables.

Not quite - evolution has provided the distribution of intelligence we need to survive.

Can you equate AI with human intelligence? On items of recall, AI can’t be beat. In the area of computer programming it is substandard. But it skates just like humans do: give it too much data in the prompt and that is what you get back.

What is it that AI will add to the bulk intelligence of humans during the next few decades? Replacing humans in call centers doesn’t add to the mass of intelligence. Same with customer service and other entry-level jobs. You may increase the number a bit, but you are pushing up from the bottom.

Also, high achievers are known for being outliers and breaking the rules. AI is very conservative and does not break the rules. However, as a personal digital assistant, AI can greatly enhance the performance of mid-level professionals. Whatever their IQ rating, their performance will move up as a result of having immediate recall through the AI assistant.

Any explosion of the mass of human intelligence will be driven by humans using AI as a coefficient to multiply their modest IQ numbers rather than by AI adding anything directly.

I feel like evolution has provided the baseline of intelligence we needed to survive in our environmental niche. I do not know if evolution really cared about the distribution. I could be wrong, but I do not know why the distribution is what it is: why the cognitive skills of the top 0.01% are what they are, for example, or why they are not higher or lower than they are. It’s like height: I do not know why the tallest 0.1% of humans are about 6’7 instead of 6’3 or 6’11.

Eventually no, which is part of the issue. Eventually emergent properties in machine intelligence will likely arise that human brains aren’t capable of. Just like a squirrel cannot even fathom the concept of calculus, let alone perform it, there are cognitive abilities that our primate brains cannot fathom, let alone perform.

For now, it does seem to add to human capital. I’ve noticed an improvement in my human capital both in my personal life and my work life due to newer AI models. Just like I noticed improvements in my human capital due to programs like search engines before that.

Maybe the models now do not break the rules, but that is no guarantee of tomorrow. Someone could eventually produce an AI that totally disregards social norms about socially acceptable answers, but is also trained on large amounts of pirated data (hundreds of millions of non-fiction books and scientific papers). Right now AI is trained on public source data and has rules about not saying anything too controversial.

At first, yes, but not eventually. Humans are the weak link in the chain. For the medium-term future, AI will vastly improve the human capital of Homo sapiens. But eventually humans will just be an obstacle, with our primate brains getting in the way of more and more advanced machines.

Eventually it’ll be like mathematicians trying to figure out string theory while also trying to figure out how to simplify it enough to explain it to the dog so the dog understands it too. All the dog does is get in the way, the dog isn’t contributing at all.

Of course on a long enough timeline, I assume subjective consciousness will be moved from biological brains to machine substrates. Or something like that, to correct for this.

Bats perform highly complex calculations in order to fly an intercept trajectory to a tasty insect that is flying a different trajectory. The bat may not be able to explain it in the human language of mathematics, but it clearly understands and applies all of the calculus involved. Of course this is the result of experiential learning, something you and I will derive by working beside AI. The AI does not experience anything, so any gain will be ours, and will add to the human side of the ledger.

The ability of computer AI derives entirely from its software. There are no AI machines, just machines running AI software. There is not an omniscient mega-machine waiting to leap from behind the curtain. So I believe advances in AI will be in their application, not their design.

Anyone asking these questions should read the Larry Niven short story The Schumann Computer.

It seems to me that a lot of things will still require experimentation, even if we’ve got super-intelligent AI designing and running the experiments. And that takes time.

Sure, they’ll be able to identify patterns and tease interesting stuff out of data better than people can, but that just means they’ll have to design more interesting experiments to verify what they found.

Assuming your scenario is about superintelligent AGI, public adoption won’t be an issue; if the thing is multiple orders of magnitude more intelligent than humans (however you measure that), then either there will be no public remaining to reject adoption, or the question of public adoption will be as relevant as what the slugs in my garden think I should do with the place.

Nobody has an IQ of 200; that would be about 6.7 standard deviations above the mean, or about 1 in 50,000,000,000, even assuming that ‘intelligence’ (as measured on an IQ test) actually follows a Gaussian distribution at the extremes.
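Here is the back-of-the-envelope arithmetic behind a figure like that (a sketch assuming a mean of 100 and a standard deviation of 15; the exact odds are quite sensitive to those assumptions and to whether the far tail is really Gaussian):

```python
# Upper-tail probability of an IQ of 200 under a normal model.
# Assumes mean 100, SD 15 (some older tests use SD 16); real tails are messier.
from scipy.stats import norm

mean, sd = 100, 15
z = (200 - mean) / sd      # about 6.67 standard deviations above the mean
p = norm.sf(z)             # upper-tail (survival function) probability

print(f"z = {z:.2f}, P = {p:.1e}, i.e. about 1 in {1 / p:,.0f}")
# With SD 15 this works out to roughly 1 in 80 billion;
# with SD 16 it is closer to 1 in 5 billion.
```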

I realize what you are asking is: if there were a really super-intelligent entity, what would hold it back from expressing that intelligence? But aside from the essential physical limitations (power or nutrients, working time, access to information, et cetera), there is just the fact that being ‘intelligent’ doesn’t mean being skilled in all areas across the board. An artificial ‘super-intelligence’ is assumed to be able to take over factories and produce operating assets for itself (robots, more computing capacity, et cetera), but this isn’t actually the way that manufacturing technology works even in highly automated fab houses using CNC or additive manufacturing. The ‘bottleneck’ in the case of expansion will be the essential capacity for such a ‘super-intelligence’ to actually impact and control the real world. A really intelligent system would find a way to influence and subversively control humans, who are well adapted to operate in the real world, to be aligned to do what it needs, and in fact ‘AI’ systems are quite effective at doing that even without having any volition or independent will.

Stranger