This is true. However, my point is that it ‘seems’ like the complexity of a finite system is itself finite. Even if it isn’t finite from a theoretical perspective, from a practical and engineering perspective it may as well be: it would be far beyond anything we could understand.
Some people, when they discuss life after advanced AI, say they would get bored. I respond that I don’t think that would be an issue, because boredom is a state within the human brain. The brain, or whatever substrate we replace it with, could easily be engineered by a more advanced intellect not to feel boredom at all. I’m sure there are virtually infinite sensations an engineered brain could be built to have; human brains supposedly only have about 5-7 core emotions, and every other emotion and sensation is just a combination of those. But there are infinite sensations we can’t even fathom.
At the end of the day the matter, energy, and laws of physics and chemistry are finite for practical purposes. To us, that is vastly beyond anything we can handle with our primate brains. But wouldn’t there come a point where matter and energy can only practically be arranged in a finite number of ways, and where predictive models could simulate pretty much anything instead of it having to be built for real?
Other way around. The complexity of a finite system absolutely is finite, but it can be, and very often is, so high that it might as well be infinite. Typically, unless there’s some factor keeping the complexity low (like deliberate engineering), the complexity of a system scales exponentially with the size of the system. Humans always underestimate exponentials.
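To put a rough number on that intuition, here is a toy sketch in Python (the two-state parts and the ~10^80 atom figure are just illustrative ballpark assumptions): a system of simple on/off parts has 2^n possible configurations, which passes the number of atoms in the observable universe somewhere around 300 parts.

```
import math

ATOM_EXPONENT = 80  # atoms in the observable universe, roughly 10**80 (ballpark)

for n in (10, 100, 300, 1000):
    # a system of n independent two-state parts has 2**n possible configurations
    exponent = n * math.log10(2)
    verdict = "more" if exponent > ATOM_EXPONENT else "fewer"
    print(f"{n:>5} parts -> about 10^{exponent:.0f} configurations "
          f"({verdict} than the ~10^80 atoms in the observable universe)")
```

So “finite” and “small enough to actually enumerate or understand” part company almost immediately.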
Size does not always determine complexity - often it stays manageable. Large chaotic fractals can be generated by a few simple rules, and such rules likely underlie things like the branching of blood vessels or neurons in the human body. Things can be very chaotic, but the chaos is constrained by strange attractors. Exponential increase may not be widely understood, but neither are the things that require difficult models or involve equations that are difficult or impossible to solve exactly.
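As a toy illustration of chaos from a few simple rules (my own example, not anything specific from the post: the logistic map with an arbitrary r and arbitrary starting values), one line of arithmetic iterated a few dozen times makes two nearly identical starting points diverge completely:

```
# Logistic map x -> r*x*(1-x): a one-line rule that behaves chaotically for r near 3.9.
# Two starting points differing by one part in a million end up fully decorrelated.
r = 3.9
x, y = 0.500000, 0.500001

for step in range(1, 51):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    if step % 10 == 0:
        print(f"step {step:2d}: x = {x:.6f}   y = {y:.6f}   gap = {abs(x - y):.6f}")
```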
Schlesinger, the comedian, said most people only know three things. If you define these broadly enough and define knowledge as knowing “way more than average”, I think she is right.
This is what I’m wondering. There is no way to know, but is it possible that universe-scale engineering projects do not require more intelligence than galactic-scale engineering projects? Humans can build a single building, and we can also build an entire country, and it doesn’t require a more advanced intelligence to build the country than to build the building.
Even if they do, a universe-scale engineering project accurate down to the picometer may require intelligence level Y; even so, what is Y*10,000 going to accomplish in that situation?
This is an answer to Wesley Clark. Sorry, can’t seem to fix it.
There’s no way anybody living on Earth today, including newborns, can answer that question, no matter how many times you ask it. Ask again on the Dope’s 10,000th cake day.
I think this is very likely so. To show how simple rules can result in incredible (but natural and highly applicable) examples of enormously complicated things, I would suggest reading Sapolsky’s excellent book Determined. The arguments are conceptually straightforward, but hard to duplicate here. It’s fascinating and although it minimizes the math, it presents its arguments well with much evidence and in a very accessible way. (Unusually, I disagree with only a few of his arguments.)
We can look at it from the opposite direction: what is it possible to know?
Until an intelligence is capable of figuring out what dark matter is, or what consciousness is, or how to reverse ageing (or at least what data would need to be obtained to answer such questions) in a single second, there is always room for improvement.
Of course it might be that some tough questions are not answerable. But at that point we’re making a speculative claim about the universe itself, not (directly) the limits of intelligence.
Ok, so I’ve had 3 sleeps since I posted this, and in my general experience 1 sleep = 1 day. So has a handful of primates brainstorming this question in their free time reached a conclusion yet about what a universal-scale superintelligence could accomplish that a galactic-scale superintelligence couldn’t?
This question has to factor in the speed of light. If you want to create a galactic scale superintelligence you need to accept that the time it takes for such an entity to reach a conclusion will be on the order of 100,000 years. A universal scale intelligence that filled the Hubble Volume would take 46 billion years to reach a conclusion on any question, and much, much longer than that if you factor in expansion in the future. Can’t be done.
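The back-of-envelope arithmetic behind those figures, assuming signals travel at light speed and (generously) ignoring expansion; the distances are round ballpark numbers:

```
# Signal-crossing times at light speed, with round-number distances.
# Cosmic expansion is ignored, which only makes the universal case worse.
GALAXY_DIAMETER_LY = 100_000            # Milky Way diameter, order of magnitude
OBSERVABLE_RADIUS_LY = 46_000_000_000   # comoving radius, ~46 billion light years

print(f"Galactic-scale mind, one crossing:  ~{GALAXY_DIAMETER_LY:,} years")
print(f"Universe-scale mind, one crossing:  ~{OBSERVABLE_RADIUS_LY:,} years")
print(f"Roughly {OBSERVABLE_RADIUS_LY // GALAXY_DIAMETER_LY:,} times slower")
```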
That depends on how it works. I’m reminded of the Old Mind from Niven’s Draco’s Tavern story The Convergence of the Old Mind: an intelligence billions of years old, distributed across the entire universe in the form of “smart dust”. It got around those problems of scale and lightspeed by giving up on centralization and real-time processing, operating instead as a massively parallel network that perceived and processed data relatively slowly. Periodically a region would collapse into a temporary “convergence”, during which its processing massively intensified as it analyzed the data that region had been absorbing over the millennia. This was also the only time it could be talked to.
The Convergence of the Old Mind does cover this; a universal-scale mind can do things that affect the whole universe. In fact, it is suspected that at some point the Old Mind greatly restricted itself, compared to its past activities, in order to give organic life the chance to evolve.
This sounds more like a community of different minds with no discrete boundaries than a single entity. An interesting idea (something like a colonial organism - a space jellyfish or a mass of fungal hyphae), but there would be no real consistency possible across millions, or billions, of light years.
Worse than that; if there really were a universe-wide Old Mind, it would constantly be ripped apart by the expansion of the universe into discrete and non-communicating portions that can never synchronise with each other, and never affect each other in any way whatsoever.
The vast majority of the galaxies we can now see in our sky are forever out of reach and can never receive any communication from our location. Any entity that acts on a universal scale must either resign itself to a continual process of disintegration, or utilise some form of FTL communication that ignores relativity. Any universe-wide Old Mind will find itself gradually torn apart until it reaches the state of one proton per observable universe, unless those protons decay first.
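To put a ballpark figure on “vast majority” (a sketch using standard round numbers: a comoving distance to the cosmic event horizon of roughly 16-17 billion light years and an observable-universe radius of roughly 46 billion; treat both as assumptions rather than precise cosmology):

```
# Fraction of the currently observable universe that a signal sent today
# can never reach, using ballpark comoving distances.
EVENT_HORIZON_GLY = 16.5      # comoving distance to the cosmic event horizon
OBSERVABLE_RADIUS_GLY = 46.0  # comoving radius of the observable universe

reachable = (EVENT_HORIZON_GLY / OBSERVABLE_RADIUS_GLY) ** 3
print(f"Volume we could still influence: ~{reachable:.1%}")
print(f"Forever beyond our reach:        ~{1 - reachable:.1%}")
```

By that estimate, something like 95% of the volume we can currently see is already beyond any possible influence.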