ETA: This was written off the top of my head, and I’m not sure I ended up where I thought I was going when I started. Tl;dr: It doesn’t matter whether there is continuity of consciousness in any particular sense; just make the decision leading to the best outcome.
LifeSucks, there are no convincing arguments here, because you’re dealing with unsolved and possibly unsolvable philosophical problems. To some extent, it can be reduced to semantics: what does it mean to be “you”? From that perspective, the answer just depends on your definitions. But there is in fact a real problem at the heart of the issue, namely the relationship between subjectivity and objectivity, or as philosophers usually term it, the “mind–body problem.”
Objectively, it can make no difference if you are (instantly and painlessly) killed and replaced with an identical copy. It seems obvious that subjectively it does matter, since if you were copied first and then someone wanted to kill you, you would have reason to object. Of course, objectively there is a sense in which it doesn’t “matter” if you are killed slowly and painfully, since the only way in which something can “matter” at all is subjectively. The only reason a universe in which you (or anyone) doesn’t suffer painfully and pointlessly can be said to be “better” than one in which you do is that you and I and others subjectively prefer it that way. Concepts like “better” and “matters” are human (and possibly animal) concepts that we developed; they don’t have objective meaning. This means that they break down as concepts when they get applied to situations they weren’t developed for. It only matters to you that you not be killed painfully because you have a preference about it. But what is your preference about being replaced painlessly by an identical copy, and what does it mean if the reasons you have for preferring or not preferring it conflict with other reasons you have for preferring things in everyday life?
Practically, the best argument is that your brain doesn’t shut down in a way that is philosophically significantly different from the way it does in sleep. If you don’t mind going to sleep, you shouldn’t mind getting anesthesia.
That should settle the matter for deciding on surgery, but it certainly doesn’t answer the philosophical problem: What if your brain did shut down in a philosophically significant but otherwise unnoticeable (to you) manner? Alternatively, what is it about sleep (besides familiarity) that makes it so obviously benign? If you lose some sense of identity with your previous self every time you sleep, does it matter? To whom?
But just as there is no possible test for preservation of subjective identity before and after anesthesia, and therefore no possible test to determine the same thing about sleep, there is likewise no possible test to show that you preserve your identity moment to moment.
Again, there is a semantic or definitional aspect to this that obscures the real issue. If you think about it even briefly, it’s obvious that there is some sense in which the “you” of five minutes from now is and isn’t the same as the you of now or the you of five minutes ago. The reason that you don’t worry about losing your identity moment to moment isn’t because you have some subjective experience that convinces you that you stay “you” but that can’t be communicated objectively. It’s that you have a rough understanding of what it is that changes about “you” and what it is that is the same, and you subjectively decide that the sense in which you can be described as “the same” matters more to you.
IOW, your preferences about what happens five minutes from now line up with the idea that some of the things that happen will be happening to “you,” and so you decide to think and speak about things that way. But there’s nothing objectively correct about that. You could decide differently. Have you seen the Seinfeld bit about “Night Guy” and “Morning Guy”? There is nothing objectively incorrect about deciding that you don’t care what happens to “you” in the future because that’s not really the same person as you now. People who think like that don’t act in ways that lead to happiness for their future selves (or others’ future selves), and so both evolution and social pressure select for humans who do identify with their future selves, but that doesn’t mean it’s scientifically correct.
Maybe in this case, though, the evolutionary and social pressures are not helpful. You normally want to identify with your future self because doing so leads to outcomes that almost everyone subjectively agrees are better and that will lead to survival of your genes (and at least some aspects of your current thoughts, memories, preferences, etc.: “you,” in other words). But here the opposite is the case.
From an objective POV, there are two possible futures: one in which a person called LifeSucks (whom you may or may not consider “you” for various purposes) has been through surgery and, hopefully, had some medical malady corrected, and one in which a very similar person also called LifeSucks has not been through surgery and hasn’t had that malady corrected. As a utilitarian, I know which one of those possible future universes fits my preferences. Issues of personal identity, which usually lead us to make decisions leading to preferable outcomes, are here a mere distraction.
Which possible future universe do YOU prefer? In neither of them do you have some metaphysical connection with the future you (assuming you are a materialist, or rather that materialism is correct). Nothing about continuity of identity seems relevant to deciding between them. From a practical perspective, the you that exists now is the one choosing which of those universes will exist, and which “LifeSucks” will exist in them. That seems to meet most of the criteria you use in everyday life for deciding whether a future being is “you” or not. Transporters and souls seem relevant, but they aren’t. Just pick a universe and cause it to happen.