Pre-emptive attack on doomsday device: justified?

Cite?

That would be because it was a civilian power plant we sold them and built to our specs, under international supervision - not a “doomsday device”.

Most appropriate nickname (Kobal2 = cobalt)? :smiley:

If it can never be turned off or dismantled without destroying the human race, it hardly matters if the use of the device is imminent or not. Even if you trust the current boss not to hold everyone hostage to get what he wants, there’s no guarantee that his successors will be the same.

A weapon is only of use if people believe you are going to use it, or you actually are willing to use it. The U.S. arsenal has the ability to be used in a non-world ending manner, since there is no requirement that they use the entire lot if you just need to disable one country. The doomsday device can only be used to kill everyone, or convince people you’re willing to kill everyone.

By definition, if you hold the world hostage, you want something. Something temporal - be it money, land, power, it doesn’t really matter. The point is: you have little interest in blowing everything up. So the threat to use the device isn’t and cannot be credible. The whole idea behind Dr. Strangelove’s Doomsday Device is that it not only eliminates the question of intent to use from the equation, but it’s a double deterrent: the US can’t attack (because of the DD), and the Russians can’t attack because of their own DD, which would be triggered by any US retaliation. It’s, in essence, a neutral and immanent threat - a threat to everyone, including the owner.

The only real concern is if a religious nutso should be in charge of the device - his interest is not in this world, and he won’t hold anyone hostage. He’ll just detonate the thing, no questions asked. See y’all in Heaven. Of course, the problem is essentially similar: you can’t guarantee that there will be no fanatic in charge of the device in the future. But it’s a more remote possibility. While greed is almost inherent to the human condition, omnicidal tendencies are decidedly a minority trait among humans :wink:

If the threat cannot be credible, that defeats the whole purpose of building the doomsday device: people would realise you won’t push the button, they’d have no reason to fear spreading the knowledge that you won’t push it, and you’d have put yourself on everyone’s shit list for no benefit. But given that they did create a doomsday device, they are clearly not operating under the assumption that the threat will not be credible.

You are contradicting yourself. If someone has no intention of blowing up the world, they would similarly have no intention of setting up an automatic blow-up-the-world machine. Knowingly building and activating a device that will do A if B happens requires exactly the same intent as waiting for B to happen and then doing A yourself.

I don’t think you could use nukes. Those weapons elicit such an unpredictable set of reactions that they would be too risky to deploy. Use of those weapons might, in the end, be just as bad for the world as the device itself. Especially depending on which nation I was heading and which nation was activating the device.

But I would certainly take the device out, and my actions between learning of the imminent activation and giving the order to destroy it would depend on how much time I had. If I didn’t have enough time to even double-check the accuracy of the intel, I wouldn’t bother. Any problems I create could possibly be fixed, because I’m not going to use nukes - using a nuke to destroy a weapon that turned out never to have existed would be the end for everyone.

I’d like to pose a follow-up question, though, if I may. How sure would you have to be before you acted to destroy the device? Were this situation to play out like a blockbuster movie, and an aide came up and said, “we’re x% sure about this being a doomsday device,” what would x have to be before you attacked?

For me, I think I would act on anything higher than 40%. As that number gets closer and closer to 100%, the severity of my preemptive action increases, too, I think.
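That 40% cutoff can be framed as a break-even expected-utility comparison: act once the expected cost of waiting exceeds the cost of a strike. A minimal sketch, with entirely made-up utility numbers chosen so the break-even point lands at 40% (nothing here is from the thread):

```python
# Toy expected-utility model of the "act above x%" threshold.
# Both cost figures are arbitrary illustrative assumptions.

COST_OF_STRIKE = -40      # fixed fallout of a pre-emptive attack
COST_OF_DOOMSDAY = -100   # outcome if the device is real and we wait

def should_strike(p_device_is_real: float) -> bool:
    """Strike iff the expected cost of waiting is worse than striking."""
    expected_cost_of_waiting = p_device_is_real * COST_OF_DOOMSDAY
    return expected_cost_of_waiting < COST_OF_STRIKE

# With these numbers the break-even probability is 40/100 = 40%:
print(should_strike(0.3))  # False - below threshold, wait
print(should_strike(0.5))  # True - above threshold, strike
```

Of course, real utilities here are incommensurable (how do you price omnicide?), which is part of why the thread's disagreement doesn't reduce to arithmetic.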

Have you watched the movie? There is no button to push. It’s entirely automated. That’s the point. It’s a shield, not a sword.

Of course they would - to make sure nobody gets any funny ideas. A nuclear arsenal is all well and good, but it can be disabled, sabotaged, first-struck, etc. Hell, half of the Cold War was about figuring out a way to dodge or eliminate the other side’s missiles, at which point the war would have become very hot, very fast.

Nope. It eliminates intent entirely, thus it eliminates the idea that the hand over the button could hesitate (and who wouldn’t hesitate over destroying the entire world?), it eliminates every scenario where there’s temporarily no hand above the button, etc. Basically, it vastly simplifies the problem by eliminating any loophole, leverage or ruse the enemy could imagine or take advantage of.

No it doesn’t. Without that intent, the device never gets built, because - get this - if the creator does not want to blow up the world as a last resort, he won’t build an automatic blow-up-the-world-as-a-last-resort device! The intent to use the device is still vital, because that’s the only reason you’d build the bloody thing in the first place.

OK, I see what you mean now. But wanting (or rather, being prepared to, which is not exactly the same thing) to destroy the world as a last resort isn’t the same as being willing to destroy it at any resort. Different levels of crazy.

In theory, you build it to make sure it’s never used, same as the regular nuclear arsenal. Which, BTW, is also at the mercy of the “you can’t guarantee a crazy guy won’t get his finger on the button” argument. Maybe even more so, because there are a great many individual nukes (as opposed to a monolithic doomsday device).

I’d like a cite for this.
I know of no conservative evangelical Christians who would believe this, and I find it very hard to believe that Falwell would.