Does intelligence stop mattering for problem solving beyond a certain point?

So with advances in AI, people like Kurzweil have long predicted that AI would reach AGI status around 2029. He has also predicted AI thousands of times smarter than humans by around 2045, and trillions of times smarter than humans by 2099.

However, the universe we live in is finite in its complexity. Eventually you reach a point where intelligence is enough to solve pretty much everything, and then other bottlenecks that can’t be overcome, like the laws of physics, come into play.

Take horsepower as a comparison. A horsepower is roughly the sustained power a horse can put out.

If you need a lawnmower, you can get by with a 10 HP lawn mower. For most residential homes, there is no real need for a lawn mower with more than 30-40 HP.

For a car, generally 200-300 HP is sufficient. Cars only need to be able to maintain sustained speeds of 80-90 mph and still have enough capacity to pass other cars.

I think the most horsepower any machine ever built has had is about 110,000, in the engines that move the largest cargo ships.

You can make a lawnmower with 200 HP, or a car with 2,000 HP, but when you consider that those machines exist to accomplish a goal, anything above a certain HP just becomes meaningless.

It would be like saying you need John von Neumann or Paul Erdős to understand pre-school level math. The extra intelligence they have isn’t necessary because the problems aren’t complex enough to require the extra intellect.

In engineering, I don’t think anything NASA has ever done has required more than 15 digits of pi. Calculating the size of the observable universe down to the Planck length would take only about 60 digits of pi. Yet we can calculate pi to 300 trillion digits, even though nothing in the observable universe could ever require more than those ~60.
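To put rough numbers on that claim, here is a quick sketch using Python’s mpmath, plugging in approximate values for the diameter of the observable universe (~8.8×10^26 m) and the Planck length (~1.6×10^-35 m); both figures are assumptions I’m supplying, not anything from this thread.

```python
# Rough sanity check of the digit counts above, using mpmath for arbitrary
# precision. The diameter of the observable universe and the Planck length
# are approximate assumed values.
from mpmath import mp, mpf, pi, floor, nstr

mp.dps = 100  # carry 100 decimal digits so only the truncation of pi matters

DIAMETER = mpf('8.8e26')   # observable universe diameter, metres (approx.)
PLANCK = mpf('1.6e-35')    # Planck length, metres (approx.)

exact = pi * DIAMETER      # "exact" circumference at full working precision

for digits in (15, 40, 62):
    truncated_pi = floor(pi * 10**digits) / 10**digits  # pi cut after `digits` decimals
    error = abs(exact - truncated_pi * DIAMETER)
    print(digits, "digits:", nstr(error, 3), "m error,",
          nstr(error / PLANCK, 3), "Planck lengths")
```

With 15 digits the universe-scale circumference comes out off by a couple of hundred million kilometres (the famous 15-digit figure is about spacecraft-scale work), and somewhere around 62 digits the error finally drops below a Planck length, which is roughly where the ~60-digit number comes from.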

I feel like intelligence is the same way. Maybe as a scale, a squirrel is a 2, a dog is a 3, a chimp is a 4 and a dumb human is a 6. A smart human is a 7 and a genius human is a 9. Well and good. But problems are finite in complexity.

With medicine, there are about 60,000 known conditions. I’m sure we will discover more, but human biology is finite in its complexity. Maybe an AI doctor that is a 25 on the scale I listed above is about all you need to understand biology. You could build an AI doctor that was a 4,000 on the scale, but the complexity of human biology only requires a 25, considering that a human genius is about a 9 and a dog is about a 3.

If you need to engage in galactic scale engineering, a 25 won’t cut it. Galactic scale engineering requires far more intellect, the same way moving a cargo ship requires more HP than mowing your lawn. But again, it’s finite. The rules of physics and chemistry that control known reality are finite in complexity, so wouldn’t that make the practical laws of engineering finite as well?

Like with pi, you can calculate it to 300 trillion digits, but for practical purposes you’ll never need more than 15 digits, and there is literally nothing in the universe that could possibly require more than 60 digits.

Granted, we could live in an infinite omniverse with infinite complexity, which would require infinite intelligence to comprehend and master.

But within the known universe we live in, the complexity is finite. The laws of physics and chemistry are finite for practical purposes. The matter and energy of the universe are finite.

When you manufacture bolts and screws, they don’t need to be accurate down to the nanometer. Even if we had the ability to manufacture them to that specification, pretty much anything more accurate than 0.1 mm becomes irrelevant for practical purposes. A screw may need to be 6 mm vs 5 mm, but it doesn’t matter if a screw is 6.005 mm vs 6.006 mm. Screws are only measured down to a tenth of a millimeter, I think.

I feel like with the laws of physics and chemistry being finite in complexity, there being no practical use for mathematical precision beyond a certain point (like pi, even though pi is infinite), and the universe having a finite amount of matter and energy, there is going to come a point where more intelligence doesn’t really solve anything. Even if we discover how to throw more FLOPs at training AI models, it won’t result in an AI that can do anything more than the equivalent of calculating pi beyond 60 digits.

Granted, all of this is a long ways away. But I can’t foresee any situation where intelligence, beyond a certain point, is relevant. When universe scale engineering projects become as difficult for an ASI as tying your shoes is for a normal person, what does extra intelligence accomplish?

Also, FWIW, just because a machine intelligence is capable of universe scale engineering doesn’t mean it can actually do it. If the speed of light is immutable, then it doesn’t matter. It’s more my point that there’s no real benefit to intelligence beyond the intellect necessary to do things like that.

There is at least one task for which more intelligence is always valuable. That task is competing against other intelligent beings. At any intelligence level, as long as there are at least two beings in the Universe at close to that intelligence level, it’s an asset.

The excess intelligence will be used to answer The Last Question.

It’s sort of difficult to imagine what ‘more intelligence’ really might mean, using only our current level of intelligence to try to imagine it. It might not be as simple as thinking faster, or doing the same kinds of calculations we can do mentally but at a more complex or difficult level. There could be entirely different ways of thinking. For example, there are some problems that require even the smartest of us to work through a set of sequential steps to get to the answer, whereas perhaps there is some mode of intelligence for which the whole thing is (or at least seems to the thinker like) a single, simple operation - either because the brain/mind of that thinker happens to incorporate something that instinctively and algorithmically solves what we have to do longhand, or because the structure of that mind or perceptual system is just different and better.

Whether or not that sort of thing is likely to emerge from our current trajectory in AI, I wouldn’t try to speculate, but the answer to the raw question of ‘does more intelligence matter’ is probably yes.

Good point. I was thinking of one universal intelligence vs the natural laws of physics. But if some kind of natural-selection rule applies, the more intelligent being will almost always win.

There are two views about this. You might call them the Niven view and the Pirsig view.

Niven view: something like a Protector is so intelligent that they immediately see the (only? optimal?) answer to a problem.

Pirsig view: as a mind becomes more intelligent, it can see infinitely more possible answers to a problem.

What comparable parameters of intelligence shall we limit?

Are the parameters of human intelligence and machine ‘intelligence’ comparable? A horse’s power and a steam engine’s power are not related. It’s just that they can both, in a narrow instance, do the same thing - move a mass a linear distance.

I suspect you are just being glib, but this is precisely the sort of question that the OP is talking about. Regardless of how highly intelligent the AI is, it will not be able to change the second law of thermodynamics. Nor is it going to allow us to travel faster than c. We don’t know everything about physics, but we know enough to say that those are almost certainly hard limits that new discoveries aren’t going to change.

There are other related hard limits. Since our current solar cells are generally 15-25% efficient, and efficiency can’t exceed 100%, we are never going to make them more than about 4-6 times more efficient no matter how smart we are.
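A quick back-of-the-envelope on that ceiling; the only assumption here is that conversion efficiency can’t exceed 100% (the real physical limits for photovoltaics are lower still):

```python
# If efficiency can never exceed 100%, the best possible improvement over
# today's cells is simply 1 / current_efficiency.
for eff in (0.15, 0.25):
    print(f"{eff:.0%} efficient today -> at most {1 / eff:.1f}x better, ever")
# 15% -> at most 6.7x, 25% -> at most 4.0x
```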

Are you saying the digits of pi are like intelligence? Are you saying that it takes more intelligence to calculate 300 trillion digits of pi than it does to calculate 15 or 60 digits of pi?

Maybe if some entity is smart enough to calculate 15 digits of pi it can calculate trillions. Or put another way, to be intelligent enough to understand the concept of pi one must be intelligent enough to calculate trillions of digits of pi.

I think we need to be more specific in our definition of intelligence.

What sorts of problems did you have in mind?

Simple problems would seem to have simple solutions, and more “information” is unlikely to help. At some point this probably applies to the vast majority of problems.

Howard Gardner, a Harvard prof who felt IQ was unhelpful, in 1983 divided intelligence into linguistic, logical-mathematical, musical, spatial, kinesthetic, emotional (intrapersonal) and social (interpersonal) spheres (plus a less defined general or naturalist intelligence). Gardner felt these were independent. Not everyone accepts these definitions, and many other softer and less rigorous terms have been used (often by the usual pundits… “executive managerial intelligence”, “SDMB intelligence”, yada yada).

The type of intelligence demonstrated thus far by AI is very limited. No doubt it will greatly improve. I think short shrift is often given to the human brain, however, which has mastered a million subtleties. Recently, a lousy cubic millimetre of brain was mapped - showing 150,000,000 synapses and 1.4 petabytes of data (see link). There are more than 3000 types of brain cells, and much of how the brain works is still mostly an undiscovered country. It is not clear to me that AI will be able to fully master every type of intelligence and so dwarf human function, despite hype and extravagant claims. No doubt for certain kinds of problems it will do better than for others, unless it goes sideways.

And if you don’t get started on applying something, all the theory is just that.

I think the problem in answering this question is that we do not have the intelligence to comprehend future intelligence capabilities.

“The laws of thermodynamics don’t change.” Well, OK. We understand the universe. But I think there is still a lot to be learned.

In other words, we are too stupid to understand/comprehend the universe.

Mostly glib and it’s a very apropos story, but really if entropy is solved then all (known) bets are off.

And then there’s Gödel’s incompleteness theorem, which posits that complete understanding of the universe is impossible anyway.

Thinking about it more, I’m not too sure.

Like I said, the universe is finite in complexity. Let’s say you had von Neumann and a McDonald’s worker competing to see who could tie their shoes, walk in a straight line, or read a sentence from a children’s book.

Von Neumann’s advanced intelligence won’t make a difference, because both characters have the minimum intelligence necessary to perform the task.

My point is there is probably a minimum intelligence necessary to understand physics, chemistry, math and engineering at a level that is, for practical purposes, perfect on a universal scale.

Such an intelligence would be vastly beyond us, but any intelligence beyond that would be like calculating pi past 60 digits. A machine that can calculate pi to 200 million digits has no practical advantage over one that can calculate it to 10,000 digits.

That’s because it’s still you, with your meager human intelligence, setting the tasks. Make two von Neumann brains, and they’ll be able to set tasks for each other that are wholly beyond human comprehension. The first of those tasks being, understanding each other. Is that other brain trying to deceive me? Or maybe that’s just what it wants me to think? Can I deceive it back?

I mean, granted, even this would eventually reach an ultimate upper bound… but that upper bound is precisely the level of intelligence that’s possible in the Universe.

“Finite” isn’t the same as “practically achievable”. Given how fast the number of possible ways of structuring matter grows, I think it’s quite possible that a computer capable of “solving” the universe would have to be bigger than the universe itself.

This represents a narrow view of intelligence, based on a limited concept of “understanding.” It’s one thing to calculate the solution to a problem; it’s another thing to recognize all the obstacles in the way of achieving that solution and how to deal with them. That’s because “intelligence” is vastly more complex than just being good at the hard sciences.

To illustrate, just consider some of the knotty, thorny problems that continue to confound humanity. Let’s take persistent criminality. I’m choosing this deliberately, because the AGI fantasists like to dream about eliminating politicians and turning social management over to objective, analytical computer systems. So let’s consider that.

Some societies are punitive: if someone gets out of line, drop the hammer on them, and hope their example disincentivizes other would-be criminals. Other societies are rehabilitative: first identify why the person did wrong, and primarily address that.

Most societies blend the two approaches to varying degrees. Decades of study support this: it’s very, very expensive to keep lots of people in prison, and it does very little to reduce recidivism or crime in general. Social support, safety nets, and early intervention programs are more successful and more cost effective.

But some societies resist this “obvious” solution because they have punitive thinking baked in. An advanced AI that presents a plan — “just do this” — will be utterly stymied if the society it serves says “yeah, no thanks” and the AI doesn’t have a deep reserve of further analytical tools to know how to manipulate people into accepting the “right” solution.

Which of course is based on a couple of assumptions. First, it assumes the AI does serve humanity, in the sense of being subordinate to it, and doesn’t have the authority or practical ability to simply enforce its identified solution. If the AI says, “tear down 75% of your prisons and terminate private for-profit incarceration and redirect the savings into social programs,” what’s to stop humans from pulling the plug?

Beyond that, this discussion presupposes that a gnarly problem like this has “a solution” in the first place. The reality probably is, mitigation of crime isn’t a problem so much as it is a system with all sorts of feedback loops that must be identified and managed and constantly tweaked. Pulling string A could have positive effects in strings B through K, but deleterious effects in strings L through Q. Are those acceptable? Do they require further intervention in other ways?

What if the AI decides the problem lies in how the laws are written? Do you allow the crime-mitigation AI to legislate, on top of managing trials and sentencing and imprisonment? And what additional political intelligence must the AI have to make that successful?

All I’m saying is, if you want to ask, how smart does the AI have to be to answer all the questions, you first need to consider which questions you’re asking. The framing in the OP seems to assume intelligence as a linear phenomenon, and it’s just not. True general intelligence is multidimensional and multidisciplinary to an extent I don’t think we’ve even really begun to grapple with.

Well written post, but I wanted to tag this part in particular for response.

This is why I consider predictions such as AGI in 2029 to be incredibly optimistic. I think AGI is four years away the same way that fusion energy is four years away. I can conceive it, but probably won’t live to see it.

Worse, I think. At least we understand exactly how fusion works; the difficulty is designing a machine that does it in a way that generates an economically useful excess of energy. Heck, we know exactly how to build a powerful fusion generator; it’s just that building a star is far beyond us and we’re trying to create a much tinier alternative method.

But intelligence? We don’t have nearly that good an understanding of it. We don’t even know the exact goal we are going for, so predicting when we’ll reach it is absurd.

A perhaps non-trivial example, poorly stated - when looking at a group of similar objects not organized in a pattern, at what point do you have to COUNT them, instead of ‘just knowing’ that there are five, or say six? [When I had six cats, I would just know they’d all shown up for dinner - now that there are seven, I need to mentally count them… every time.] Machine vision can almost certainly be programmed to count much faster than we can, but I believe neural networks can be devised to ‘just know’ for much higher quantities.