Why do we have two “yardsticks” to describe interstellar distances?
Is there some generally recognized convention where one is chosen over the other, such as linear vs. angular measurements?
Or is it just some holdover from the scientific “culture” it grew from…like miles per hour, kilometers per hour, and knots?
Parsecs come directly from Earth-bound observational astronomy. As the Earth moves around the Sun, the stars seem to “wobble” a little bit because of parallax, and closer stars wobble more than farther away ones. A hypothetical star one parsec away would appear to swing through a total of two arc-seconds over the course of a year (an arc-second is 1/3600 of a degree): one arc-second of parallax to either side of its average position.
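That definition translates straight into a number: a parsec is the distance at which 1 AU subtends an angle of one arc-second. A minimal sketch of the arithmetic in Python (constants are the standard IAU values):

```python
import math

ARCSEC_RAD = math.radians(1 / 3600)  # one arc-second, in radians
AU_M = 1.495978707e11                # astronomical unit, in meters
LIGHT_YEAR_M = 9.4607304725808e15    # light-year, in meters

# A parsec is the distance at which 1 AU subtends one arc-second.
parsec_m = AU_M / math.tan(ARCSEC_RAD)

print(parsec_m)                 # ~3.0857e16 m
print(parsec_m / LIGHT_YEAR_M)  # ~3.26 light-years
```

Which is where the familiar “one parsec is about 3.26 light-years” comes from.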
So basically the parsec as a unit is tied very closely to one kind of experiment that is used to measure stellar distances.
At the time they started being used, there was a decent amount of uncertainty in the radius of the Earth’s orbit around the Sun — the astronomical unit, or AU — and thus in what a parsec was equal to, and to a lesser extent in the speed of light. So while parsecs and light-years both in theory measured the same thing, in reality it was important to keep track of which distances were obtained using the solar-parallax method, and to keep in mind that those needed to be rescaled whenever the best value of the AU changed.
Nowadays, they’re pretty much directly convertible, and no one who isn’t directly running a parallax observatory actually thinks of a parsec in terms of units of parallax, but the unit hangs around for historical reasons.
Yeah, all astronomical distance measurements are based on a ladder of measurements below them, with each rung being used to calibrate the next rung up. If any rung is off, then so is every rung above it. And the bottom rung of that ladder is the AU, which is then used via parallax to determine the distances to the nearest stars. By now that bottom rung is pretty darned solid, but old habits die hard.
In scientific writing meant for other astronomers, they virtually never give light years. Light years are seen as something more intuitive for non-scientists to understand. If an object is close enough to be measured by parallax, they often just give the parallax measurement. For example, here is a list of nearby stars maintained by an astronomical team. It has neither light years nor parsecs. Column 7 has the parallax (the second number in that column is the error bar), so if you want to know the actual distance, you have to calculate it. (Unfortunately they haven’t updated that list in about 3 years. Hopefully they’ll do it next year.)
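The calculation that post alludes to is trivial: the distance in parsecs is just the reciprocal of the parallax angle in arc-seconds. A quick sketch (the 0.768″ figure for Proxima Centauri is approximate):

```python
def parallax_to_distance_pc(parallax_arcsec):
    # Distance in parsecs is the reciprocal of the parallax in arc-seconds.
    return 1.0 / parallax_arcsec

# Proxima Centauri's parallax is roughly 0.768 arc-seconds:
d = parallax_to_distance_pc(0.768)
print(d)            # ~1.30 parsecs
print(d * 3.2616)   # ~4.25 light-years
```

Catalogs often quote the parallax in milliarcseconds instead, in which case the distance is 1000/p parsecs.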
Similarly, for cosmologically distant objects, astronomical papers will cite an object’s redshift factor (z) rather than gigaparsecs or billions of light years. They leave those things for articles aimed at non-scientists.
I’d understood this as being due to the fact that the translation between z and actual distance is not certain. In either case, when doing observational science, you try to report exactly what you observe, rather than what you infer from those observations, to avoid introducing additional uncertainty.
I think you’re confused. The Kessel Run took a short time, so parsecs, like seconds, are obviously an indication of a short time taken. Nobody would use light years in that discussion, because we’re still talking years even if they are light ones.
Of course, in the expanded universe there is a whole explanation involving clusters of black holes and nav computers that make what Han said technically correct (the best kind of correct!).
I’d always been a bit confused by parsecs. A number of times I’ve run across armchair-science articles trying to explain parsecs, but never as simply and clearly as the posts above.
FWIW as an astronomy student I heard parsecs (and kiloparsecs and megaparsecs) used as the favored distance unit more than any other, but AU used for close objects such as the solar system (including its outer reaches). On a cosmological scale, redshift (z) was favored, though this was the mid-to-late ’70s and the conversion between redshift and distance was a good deal more uncertain then.
Even if we knew the redshift-distance relation exactly, it’d still be a good idea to express cosmological distances as redshifts, because on that scale, there are multiple different ways to define distance, and stating the redshift removes the ambiguity.
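For what it’s worth, at low redshift the usual first approximation is the Hubble law, d ≈ cz/H₀. A sketch, assuming a Hubble constant of 70 km/s/Mpc (the exact value is itself a matter of active debate):

```python
C_KM_S = 299792.458  # speed of light, km/s
H0 = 70.0            # assumed Hubble constant, km/s/Mpc

def lowz_distance_mpc(z):
    # Hubble-law estimate, valid only for z << 1. At higher redshift,
    # "distance" splits into comoving, luminosity, and angular-diameter
    # distances, which all differ -- hence quoting z instead.
    return C_KM_S * z / H0

print(lowz_distance_mpc(0.01))  # ~42.8 Mpc
```

Which is exactly why quoting z is less ambiguous than quoting a distance: the redshift is the observable, and the distance depends on which definition and which cosmological parameters you plug in.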
Parallax arc-second, i.e. the distance at which a star appears to shift (against the relatively unchanging, much more distant background) by one second of arc. The standard definition uses a baseline of 1 AU, the radius of Earth’s orbit: a star 3.26 LY away appears to shift by that one second of arc over that baseline. Observed from one extreme of Earth’s orbit to the other (a 2 AU baseline), the shift is twice that.
Except that you’d add the ambiguity of mass. Doesn’t light technically red-shift due to gravity too?