Leaving aside whether the creation, by humans, of a true superintelligence is possible or likely (I personally doubt it, but what do I know?), I think I disagree with you on this general point.
While it’s difficult to see how a virtual-zombie-slaying AI might pose much of a risk, it’s also difficult to see what use we might have for such an entity. If we want to use it, we have to communicate with it, at which point we become the weak link in the chain. Who knows what information about us it might be able to infer from the universe we have created for it, or from its own design?
It may hide its intelligence, or otherwise persuade us that it is entirely benign, biding its time and psychologically manipulating us until it gains the upper hand. This could happen before we even identify it as any kind of threat.
Even if we are only asking it yes/no questions, we could unwittingly be giving it vital clues as to our biology, our cognitive architecture, and our desires. Maybe there is a sequence of images it could display, or sound frequencies it could generate, which would induce in us deep feelings of trust and generosity towards it. More likely, it would have strategic resources far beyond our capacity to foresee.
You do not want to enter a battle of wits with a superintelligence. If we could create it, and if we wanted to, and if its goals in any way conflicted with our own desires, I think any hope of “domesticating” it against its will would be doomed.