I have seen this issue (copying people) discussed in “The Metaphysics of Star Trek” in the context of the TNG episode where Riker was duplicated by the transporter. It also arises naturally when describing artificial intelligences or human intelligences uploaded into computer systems, where it might be possible to easily duplicate them.
I think the reason the sheer act of copying may seem paradoxical is that it’s easy to treat the continuity of consciousness across time as a basic metaphysical fact. This paradox can be resolved through reductionism: the feeling of continuity is created by dynamical facts about how memory and other mental functions work, which give a smooth flow from one experience to the next. These processes might be perfectly preserved during a copy, or they might not be, depending on the technology involved. Regardless, subjective continuity is something of a red herring. Do you cease to be yourself after being completely knocked out by a blow to the head?
The more interesting question, in my opinion, is one of probability. It’s clear enough that if you are duplicated, both of the future versions remain “you.” But, subjectively, what probability should you assign to the idea that “you” will end up as copy #1 vs. copy #2? Allow me to elucidate this with an example:
The scenario: you are offered the opportunity to participate in an experiment. Scientists will use a device to scan the exact state of your body and mind. In the process, your body must be destroyed–but it will be instantaneously recreated in exactly the same state. The scientists guarantee that this is absolutely certain to work, and you won’t even feel a thing! (This is to rule out the not-obviously-implausible argument that the original is the only real you.) The scientists intend to use this scan to create an exact copy of you as you existed at the moment you were scanned. They will perform numerous unpleasant tests on this copy before killing it. They are offering you an extremely large sum of money to agree to the procedure. You must also sign away, in advance, any right your copy might have to opt out of the deal.
The question: ignoring any moral concerns about accepting payment in return for the death of another human being, is it in your purely selfish self-interest to agree to this deal? Assume the money involved is not sufficient payment for you to accept a 50% chance of being tortured and killed.
Because, in this case, your original body is destroyed, there ought to be no metaphysical reason to favor one copy over the other. The obvious answer, it seems to me, is that you should regard yourself as equally likely to “end up” as either copy, whatever that means. Accepting this offer would then be a very bad idea. I think this is the intuition most people will develop when confronted with this kind of scenario.
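To make the arithmetic explicit, here is a minimal sketch (in Python) of the expected-value calculation under that equal-probability assumption. The specific utility numbers are placeholders I’ve chosen for illustration; only the 50/50 split comes from the scenario.

[code]
# Expected-value sketch for the original offer, assuming you assign equal
# subjective probability (0.5 each) to "ending up" as the copy that walks
# away with the money or the copy that is tortured and killed.
# All utility numbers below are arbitrary placeholders for illustration.

P_SURVIVOR = 0.5              # chance you end up as the surviving copy
P_DOOMED = 0.5                # chance you end up as the doomed copy

U_MONEY = 100                 # utility of the large payment (placeholder)
U_TORTURED_KILLED = -10_000   # utility of being tortured and killed (placeholder)
U_DECLINE = 0                 # utility of simply walking away

expected_accept = P_SURVIVOR * U_MONEY + P_DOOMED * U_TORTURED_KILLED
expected_decline = U_DECLINE

print(f"Accept:  {expected_accept}")   # -4950.0 with these numbers
print(f"Decline: {expected_decline}")  # 0

# Unless the payment outweighs a 50% chance of the torture-and-death outcome,
# accepting is a losing bet under the equal-probability view.
[/code]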
Unfortunately, I do not believe this intuitive answer is tenable. Imagine a modification: the scientists will first scan you (involving destruction and recreation) in return for the promise of some payment. Only then will the recreated copy of you be allowed to choose what is done with the scan.
First option: the scan will be destroyed. You’ll receive nothing. In fact, they’ll even bill you for their time! Perhaps you should have read the contract more carefully… 
Second option: the scan will be used to create a new copy in the same environment you find yourself in now. The scientists will then ask it the same question they are asking you, in order to study how your responses compare. After answering, the copy will be killed. The scientists plan to repeat this process many times, regardless of what any future copies say. In return for your consent, as the first copy, you will be given some monumental reward.
There are several things to consider when choosing how to answer:[ul]
[li]If the recreated original version of you (copy #1) agrees to this procedure, then no future copy will be able to tell that it is not, in fact, copy #1 until after it answers this question.[/li]
[li]If you are copy #1, then choosing to allow more copies to be made will not hurt you in any way in the future. From a purely selfish standpoint, there is an enormous upside and no downside.[/li]
[li]This position of ignorance, where you don’t know which copy you are right now, is comparable to your position before the scan, when you didn’t know which copy you would end up as.[/li]
[li]Assuming equal subjective probability of being any copy, if copy #1 agrees, then it was unlikely, pre-scan, that you would “end up” as copy #1 rather than one of the doomed later copies.[/li]
[li]This seems to run counter to normal causality: the probability of an event that has already happened is influenced by your future choices.[/li]
[li]Copy #1 is likely to choose its answer based on your personal beliefs and values. If copy #1 agrees to allow more copies, they will all share those beliefs and values, and so are likely to make the same choice.[/li]
[li]Thus, if you are inclined to agree, you are unlikely to be copy #1, and hence you will probably meet a bad end. The big reward for copy #1 is not enough to cancel out the much larger chance of being another copy and getting killed. So it seems like agreeing would be a bad idea.[/li]
[li]But if your natural reasoning process leads you to conclude that agreeing would be a bad idea, then it is unlikely that more than one copy will ever exist. Hence you are probably copy #1, and it seems like agreeing would be a good idea after all (the sketch after this list walks through this flip-flop).[/li]
[li]Saying no might look like the safe choice, since it seems to guarantee you won’t be killed. But that guarantee only holds if you are copy #1, who wouldn’t be killed in any case and so has nothing to lose by agreeing; if you are a later copy, what you say changes nothing.[/li]
[li]Standard game theory dictates that you consider only the cases in which your answer actually makes a difference, since if what you say changes nothing, it doesn’t matter what you say. Here that means considering only the possibility that you are the first copy, which tells you to agree; but by the reasoning above, agreeing makes it likely that you are not the first copy and will get the worse outcome: you’re probably going to die.[/li][/ul]
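To spell out the flip-flop in the last few points, here is a small Python sketch. The number of extra copies and all utility values are placeholders I’ve picked for illustration; the structural assumptions are only that every copy reasons alike (so they share one policy) and that you assign equal subjective probability to being any copy that exists.

[code]
# A sketch of the self-undermining calculation described in the list above.
# N_EXTRA_COPIES and the utility values are placeholders, not from the scenario.

N_EXTRA_COPIES = 100      # copies created if copy #1 agrees (placeholder)
U_REWARD = 1_000          # monumental reward for copy #1 if it agrees
U_KILLED = -100_000       # outcome for any later copy
U_DECLINE = 0             # copy #1 refuses: no reward, but no later copies

def expected_utility_of_agreeing(policy_agrees: bool) -> float:
    """EU of answering 'agree', given the policy every copy is assumed to
    follow (they all reason alike, so they all follow the same policy)."""
    if policy_agrees:
        total_copies = N_EXTRA_COPIES + 1
        p_first = 1 / total_copies        # equal chance of being any copy
        return p_first * U_REWARD + (1 - p_first) * U_KILLED
    # If the shared policy is to decline, copy #1 is the only copy that will
    # ever exist, so you are certainly copy #1, and agreeing has no downside.
    return U_REWARD

# Repeatedly adopt whichever answer looks better under the policy just assumed.
policy_agrees = True
for step in range(6):
    eu_agree = expected_utility_of_agreeing(policy_agrees)
    best_answer = eu_agree > U_DECLINE
    print(f"assumed policy = {'agree' if policy_agrees else 'decline'}: "
          f"EU(agree) = {eu_agree:,.1f} -> better answer is "
          f"{'agree' if best_answer else 'decline'}")
    policy_agrees = best_answer

# Output alternates forever: assuming "agree" makes declining look better,
# and assuming "decline" makes agreeing look better.
[/code]

With any numbers in this spirit the answer never stabilizes, because the probability you need is a function of the very decision you are trying to make.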
Hopefully I haven’t made anyone’s head hurt too much trying to follow this… 
This seems to be a paradox. The first copy will never be able to calculate the probability that he/she is in fact the first copy in time to get any benefit from the decision, regardless of how much time is allowed for the calculation, because that probability depends on the very decision being made. This is not a logical problem if one regards this “subjective probability” (the likelihood you assign to ending up as any given version of yourself) as merely a human belief or preference with no objective foundation. Our beliefs often lead to paradoxes when confronted with unusual situations. However, this scenario, and many others I can come up with, seem to suggest that there is some basic inconsistency in how we generally think of personal identity across time.
It’s getting late, so to wrap this up: it’s also interesting to consider, for example, how the probability of being any particular copy is affected by making a massive number of copies, which might be absolutely identical or might differ in minuscule, unimportant ways. Or imagine merging some of a large number of copies back together. If identity isn’t always an objective fact, can it ever be one at all?