Person A: I’m for X!
Person B: But what if X is going to destroy the world?
If person A doesn’t care, then they can be dismissed as a dangerous fanatic. If they agree that X would need to be banned or limited in that case, then B gets to say: well, we agree then, our only difference is one of degree, and surely we can compromise on that.
It sounds like a version of the slippery slope argument.
I’m assuming A isn’t arguing for an end to the world and therefore they’re arguing for an amount of X that wouldn’t cause a disaster. B is countering by saying that a much larger amount of X than A is talking about would be a disaster. Getting from the amount A is talking about to the amount B is talking about requires a slippery slope.
I don’t see any special tactic here, just a normal debate.
You’re missing a third possibility, that they might disagree. I might feel that X is NOT going to destroy the world. And from that point, that’s what the debate will focus on: whether X will destroy the world or not. Sounds to me like a productive conversation, not an underhanded “tactic”.
It’s sort of the inverse of “don’t fight the hypothetical”. Person B is posing a hypothetical that, if true, would compromise Person A’s position. But the possibility is described as if it were not hypothetical, whereas (as Keeve just pointed out) the relevant debate would involve fighting the hypothetical, i.e. challenging B for presenting it as if it were known to be the case.
There’s no fallacy here. If you’re making a decision that results in catastrophic consequences, those consequences factor into the validity of the decision.
A: “I’m shutting off the containment grid because your permits are out of order.”
B: “Ghosts will flood Manhattan if you do.”
Assuming A and B are both correct, then it’s inarguably a bad decision. The only debate is whether B is accurate or not. If we agree A and B are both accurate, we might be getting toward an excluded middle fallacy, but we also might just be having a normal argument over value judgments (“will anyone really notice if ghosts flood Manhattan? Perhaps they’re friendly ghosts?”).
Here is a proper representation of appeal to consequences with extinction-level stakes:
A: “By my calculation, Manhattan is experiencing an epic surge in PKE energy.”
B: “But that would mean the gates of hell will open and unleash demons on the city!”
If B is just making an exclamation, that just means he’s very distressed.
If B thinks his statement invalidates the logic of A, then he has committed an appeal to consequences fallacy.
I missed the 5th edit window here. I meant to say that if A is a choice, and we assume A and B are both correct, and B is inarguably unacceptable, then A is automatically unacceptable. It’s a different conversation if B is demonstrably false, or if there exist lesser degrees of B that might be acceptable.
If A is not a choice, but a true/false statement, we can’t deny A by saying its truth would literally destroy the earth. We have to evaluate A based on the supporting evidence and be prepared to phone our moms one last time.