Going back to AGI, I’ve been thinking about this some more in the context of reading about cybernetics and viable systems, and I think one of the interesting things about AGI as an x-risk is that it not only involves making claims about the potential capabilities of computing technology - it also implicitly makes some pretty astonishing claims about the world, by which I mean the technologico-economic-politico-sociological teeming mass of information and relationships and materiality in which we all live.
For AGI to do what both proponents and opponents claim for it - to be able to understand the world at a hitherto unimaginable level, and also to manipulate it at a hitherto unimaginable level in order to transform it from one state into a specific other state - requires, AFAICS, that the world - again, literally everything about human society across the globe - have the following characteristics:
That it be legible - that is, that everything about it can in theory be reduced to what is effectively a big data table of inputs such that some system can grasp the whole matter in detail and in its entirety.
That it be predictable - that is, that in theory future states of the world can be correctly identified in advance based solely on its current state.
That it be directable - that is, that given a specific desired output state, an algorithm can identify which changes to which specific inputs will result in the desired state.
That it be manipulable - that is, that there exist sufficient, and sufficiently effective, “levers” by which such an algorithmic transformation can be actively applied to the world as it is, in a dependable and largely error-free fashion.
There is a principle in cybernetics called the law of requisite variety, which states, simply, that your control system must have at least as much variety - as many distinguishable states - as the system you are trying to control. When the system is the entire teeming mass of human society, that implies a ludicrously vast control system indeed. I think there is a genuine question about whether such a control system is even theoretically possible, as in mathematically; and if so, whether it’s theoretically possible within the constraints of the resources currently available to humanity; and if so, whether it’s practically possible to bring about.
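To make the intuition concrete, here is a minimal sketch of one simple combinatorial reading of Ashby’s law: if a regulator has fewer distinct responses than there are kinds of disturbance, it cannot hold the outcome down to a single goal state - the best achievable outcome variety is bounded below by the ratio of the two. The function name and the specific numbers are illustrative assumptions, not anything from the cybernetics literature.

```python
from math import ceil

def min_outcome_variety(n_disturbances: int, n_regulator_states: int) -> int:
    # Combinatorial form of the law of requisite variety: a regulator
    # with V_R distinct responses facing V_D distinct disturbances can,
    # at best, reduce the set of outcomes to ceil(V_D / V_R) states.
    # Only when V_R >= V_D can every disturbance be steered to a
    # single desired outcome.
    return ceil(n_disturbances / n_regulator_states)

# A regulator with 3 responses facing 12 kinds of disturbance cannot
# hold the outcome to fewer than 4 distinct states:
print(min_outcome_variety(12, 3))   # 4
# Matching variety (12 responses) permits a single goal outcome:
print(min_outcome_variety(12, 12))  # 1
```

The point of the toy numbers is the scaling: as the variety of the controlled system grows, the variety the controller must embody grows with it, one-for-one at minimum - which is what makes “control the whole of human society” such an extravagant implicit claim.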
I am extremely skeptical.
It could be said that this is simply a definitional problem - that is, that an AGI is of necessity capable of all the above, because that’s what an AGI is. I don’t think that’s right: I think you could have an Artificial General Intelligence of greater capabilities than the most intelligent humans that still a) didn’t have access to such an input table, b) couldn’t construct one, and c) couldn’t manipulate it if it did. But if that is our definition of an AGI, then for me that massively reduces the chances that one could ever exist.