After seeing yet another round of abortion threads, I thought it would be interesting to address this fundamental issue: What properties must an entity possess to make it morally wrong to kill it or wound it? The answer to this is relevant not only for the topic of abortion, but also for how we approach Animal Rights, Artificial Intelligence, and Extraterrestrial lifeforms. If we can come to a coherent consensus on the OP, we should be able to metaphorically kill four birds with one stone!
In my view, whatever principles we agree upon to confer rights should be species neutral. I reject the simplistic answer that humans deserve rights simply by virtue of being human. Why do humans deserve rights? Whatever morally relevant properties a human can possess should theoretically belong to Artificial Intelligence or Extraterrestrials as well, or possibly other animals on planet earth.
First, your question presupposes that there are such things as entities that receive moral consideration. I say that to forestall people objecting that there’s no such thing as morality.
Second, I believe you’re suggesting that such moral consideration is objective, not subjective. That is, we’re not talking about something like whether pizza is tasty; we’re talking about something like whether pizza has calories. If that’s what you mean, you may want to emphasize that to prevent the meta-argument about whether objective morality exists.
Tom Regan, probably the most coherent animal rights philosopher, suggests the following items (remembered from reading him in high school 20 years ago).
The entity should:
-Be capable of feeling both pain and pleasure.
-Have a sense of itself over time.
-Have desires and interests.
-Have things that are in its best interests.
I generally agree with these items, with a couple of caveats:
-You don’t have to currently be capable of any of these items if you have an already-established personality with established desires and interests. If I go into a deep coma in which I can’t feel pain or pleasure and currently have no interests, you’re still not allowed to murder me.
-Future attributes are meaningless. If a scientist is getting ready to program a computer with self-awareness, the capacity to feel pain and pleasure, etc., and if I turn the computer off before the program finishes compiling, I haven’t done anything wrong to the computer. (I may have done something wrong to the scientist, but that’s a different story).
-A cow’s desires re: life may be drastically different from our own. It may not comprehend its own mortality, much less fear it, and though it’ll try to avoid a death it knows about, it might (if it had the capacity to consider it) prefer having lived a long time and then dying to never having lived at all–its fate if meat farming had ended prior to its birth.
Your question seems to assume a binary. There is no middle ground in your equation.
We can kill and eat beef, but it might be reasonable to go to some extra expense to minimize unnecessary suffering.
We can value human life from inception but we can weigh that value against the rights of the mother and draw a few dotted lines in the sand (effectively what Roe v Wade did).
I’m not a philosophy type of guy, but John Rawls’s approach appeals to me a lot. It is a well-crafted version of “do unto others.”
My opinion only:
Self awareness and an ability to interact with the environment.
Rocks have neither. Plants have one (although with more accurate testing we may find self-awareness). Animals have both. So to me, all animals deserve the same compassion I would extend to any human. Those animals that we purposefully slaughter should receive as painless and stress-free a death as possible.
I find these two sentences contradictory, unless you also purposefully slaughter humans.
They certainly are. I was speaking about animals for food. And it is a very difficult choice for me to eat animals knowing what I know now and my evolving beliefs. But I will likely continue to eat meat and just push the contradiction to the back of my mind. Humans are very good at that.
Not by capacity to suffer, but by capacity to reason. Capacity here has to be qualified to mean either present capacity or future potential; no one will claim an infant child can already reason.
My opinion, slightly different:
Self-awareness and an ability to interact with other self-aware entities.
Self-interest won’t allow me or anyone here to honestly be objective about this.
I think the right to be taken into moral consideration should be extended to any being capable of suffering and/or experiencing pleasure. Such beings have an interest, or at least an inclination, to avoid the former (unless they’re uncommonly masochistic) and obtain the latter, and these interests should always be considered. But contrary to what the OP implies, I don’t think being entitled to moral consideration equals having a right not to be killed. Subjecting an entity entitled to moral consideration to suffering, or even death, may, in my opinion, still be justified, but only if the consequence is a relatively greater pleasure, or prevents greater suffering, to one or several other such entities.
Accordingly, I think that morality is always a relative matter; whether someone can be killed is not only dependent on whether they have an interest in remaining alive, but also on how strong this interest is, and how interested others are in keeping that being alive, as compared to the amount and strength of the interests that will be served by its death.