The real danger of artificial general intelligence (AGI) isn’t that it will command killbots to round up humanity for use as thermal batteries or menial labor, but that we will hand over our intellectual skill base to it the way we have automated (and continue to automate) physical labor, and then find that we’ve lost the collective capacity for deep intellectual work, self-governance, et cetera, just as we’ve essentially lost basic skills like fire-making, subsistence food gathering, building shelter from raw materials, and so forth. This won’t happen ‘tomorrow’ (i.e. in the next 15-25 years, regardless of what ‘experts’ say), but when it does it will be gradual, then sudden, and there likely won’t be more than token resistance to it.
As for “human-level” AGI, I think one of the big understated concerns is that while an AGI may become capable of performing many of the intellectual tasks of a human worker, it will not function in a human-like way that we can comprehend. An emergent machine cognition developing autonomy and some form of self-awareness may not even be evident to us. Again, I don’t think such a system will by default deliberately genocide its makers, but it may make ‘rational’ decisions that are not in humanity’s best interest, and if we’ve collectively handed over control of our industry to it, we may not really have the ability to reverse that path; even if we had some kind of “kill switch,” we wouldn’t be able to live without the system any more than an astronaut can survive long without regenerated air and water.
And I think Cecil needs to go back and read Bostrom, because there are several misapprehensions about superintelligence as Bostrom defines it, not the least of which is overlooking that collective intelligence is part of the evolution of human society and that collective superintelligence is a nearly inevitable consequence of an increasingly interconnected and data-rich post-industrial society. In certain narrow areas, we’ve arguably already achieved collective superintelligence.
Stranger