So with advances in AI, people are revisiting predictions like Ray Kurzweil's: he has long predicted that AI will reach AGI status around 2029, be thousands of times smarter than humans by around 2045, and be trillions of times smarter by 2099.
However, the universe we live in is finite in its complexity. Eventually you reach a point where intelligence is enough to solve pretty much everything, and then the bottlenecks that remain, like the laws of physics, can't be overcome by thinking harder.
Take horsepower as a comparison. A horsepower is roughly the sustained power a horse can put out (about 746 watts).
If you need a lawn mower, you can get by with 10 HP. For most residential lawns, there is no real need for a mower with more than 30-40 HP.
For a car, 200-300 HP is generally sufficient. Cars only need to maintain sustained speeds of 80-90 mph and still have enough power in reserve to pass other cars.
I think the most powerful machine ever built tops out at about 110,000 HP: a marine diesel engine that moves a large cargo ship.
You can make a lawn mower with 200 HP or a car with 2,000 HP, but when you consider that those machines exist to accomplish a goal, anything above a certain HP just becomes meaningless.
It would be like saying you need John von Neumann or Paul Erdős to understand preschool-level math. Their extra intelligence isn't necessary because the problems aren't complex enough to require it.
In engineering, I don't think anything NASA has ever done has required more than 15 digits of pi. Calculating the circumference of the observable universe down to the Planck length would take only about 60 digits. Yet we have calculated pi to 300 trillion digits, despite the fact that nothing in the observable universe could ever require more than those 60.
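As a back-of-the-envelope check on that ~60-digit figure, here's a quick sketch (the two length values are rough published estimates, and the variable names are just mine):

```python
import math

# Rough estimates (assumptions, not exact values):
# diameter of the observable universe: ~8.8e26 m
# Planck length: ~1.6e-35 m
universe_diameter_m = 8.8e26
planck_length_m = 1.616e-35

# Truncating pi to d decimal digits introduces an error of at most 10**-d,
# so the error in the circumference (pi * diameter) is at most
# diameter * 10**-d. For that error to be under one Planck length:
#   diameter * 10**-d < planck_length  =>  d > log10(diameter / planck_length)
digits_needed = math.ceil(math.log10(universe_diameter_m / planck_length_m))
print(digits_needed)  # 62
```

That works out to about 62 digits, which is where the ~60-digit figure comes from.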
I feel like intelligence is the same way. Maybe on a scale, a squirrel is a 2, a dog is a 3, a chimp is a 4, and a dumb human is a 6. A smart human is a 7 and a genius is a 9. All well and good. But problems are finite in complexity.
With medicine, there are something like 60,000 known conditions. I'm sure we will discover more, but human biology is finite in its complexity. Maybe an AI doctor that's a 25 on the scale above is all you'd ever need to fully understand biology. You could build an AI doctor that was a 4,000 on the scale, but the complexity of human biology only requires a 25, considering that a human genius is about a 9 and a dog is about a 3.
If you need to engage in galactic-scale engineering, a 25 won't cut it. Galactic-scale engineering requires far more intellect, the same way moving a cargo ship requires more HP than mowing your lawn. But again, it's finite. The rules of physics and chemistry that govern known reality are finite in complexity, so wouldn't that make the practical laws of engineering finite too?
Like with pi, you can calculate it to 300 trillion digits, but for practical purposes you’ll never need more than 15 digits, and there is literally nothing in the universe that could possibly require more than 60 digits.
Granted, we could live in an infinite omniverse, whose infinite complexity would require infinite intelligence to comprehend and master.
But within the known universe we live in, the complexity is finite. The laws of physics and chemistry are finite for practical purposes. The matter and energy of the universe are finite.
When you manufacture bolts and screws, they don't need to be accurate down to the nanometer. Even if we could manufacture them to that specification, anything more accurate than about 0.1 mm is irrelevant for practical purposes. A screw may need to be 6 mm rather than 5 mm, but it doesn't matter whether it's 6.005 mm or 6.006 mm. As far as I know, screw dimensions are only specified down to a tenth of a millimeter anyway.
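A toy sketch of that point (the nominal size and tolerance here are illustrative values I picked, not any particular standard):

```python
NOMINAL_MM = 6.0     # the size the screw is supposed to be
TOLERANCE_MM = 0.1   # anything within +/-0.1 mm counts as "6 mm"

def in_spec(measured_mm: float) -> bool:
    """True if the measured diameter is within tolerance of nominal."""
    return abs(measured_mm - NOMINAL_MM) <= TOLERANCE_MM

for measured in (5.0, 6.005, 6.006, 6.2):
    print(measured, in_spec(measured))
# 5.0 and 6.2 fail; 6.005 and 6.006 both pass, indistinguishably.
```

At that tolerance, 6.005 mm and 6.006 mm are literally the same screw.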
I feel like with the laws of physics and chemistry being finite in complexity, math having no practical use beyond a certain precision (like pi, even though its digits go on forever), and the universe containing a finite amount of matter and energy, there is going to come a point where more intelligence doesn't really solve anything. Even if we discover how to pour more FLOPs into training AI models, the result will be an AI that can't do anything more useful than the equivalent of calculating pi beyond 60 digits.
Granted, all of this is a long way off. But I can't foresee any situation where intelligence beyond a certain point remains relevant. When universe-scale engineering projects are as easy for an ASI as tying your shoes is for a normal person, what does extra intelligence accomplish?
Also, FWIW, just because a machine intelligence is capable of universe-scale engineering doesn't mean it can actually do it. If the speed of light is immutable, then it doesn't matter how smart you are. My point is more that there's no real benefit to intelligence beyond the intellect necessary to do things like that.