I think I understand the basic idea of radiometric dating: you take a sample and assay it, and compare the proportions of a radioactive isotope and its decay products, do some calculations based on the isotope’s half-life, and use that to determine the sample’s age. I know that’s super simplified, but I’m on the right track, right?
So what I’m wondering is how the ages of the isotopes themselves are taken into account. I know Carbon-14 is produced at a constant rate in the atmosphere - carbon dating is dependent on that fact. But what about something like uranium-lead dating? Wouldn’t the uranium in the sample, having been formed in a supernova, be millions or billions of years older than the Earth itself? Is it plausible that an asteroid formed with a deposit of uranium in the primordial cloud, floated around for a few billion years, collided with the just-forming Earth, and got incorporated into a rock formation, the uranium decaying to lead all that time? And if so, wouldn’t that skew the results to something far older?
Now, please don’t think I’m trying to poke holes in radiometric dating. I figure either someone far smarter than me thought of this decades ago and found a way to compensate for it, or I’m missing something that makes the problem irrelevant. I’m curious to know which, though.
Uranium-lead dating is usually done on zircons, which incorporate uranium into their crystal structure when forming, but not lead. Therefore any lead found in the zircon is a product of the decay of uranium, and the amount will give an estimate of how long it has been since the zircon formed.
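To make that concrete, here’s a rough sketch of the arithmetic, assuming the zircon really did crystallize with zero lead. The function name and the example ratio are just mine for illustration; the half-life is the commonly quoted figure for U-238.

```python
import math

# Commonly quoted half-life of U-238, which decays (via a chain of much
# shorter-lived intermediates) to Pb-206.
U238_HALF_LIFE_YEARS = 4.468e9

def zircon_age_years(pb206_per_u238):
    """Age implied by a measured radiogenic Pb-206 / U-238 atom ratio,
    assuming the crystal started with no lead at all."""
    return U238_HALF_LIFE_YEARS * math.log2(1.0 + pb206_per_u238)

# Toy ratio, not a real measurement: one Pb-206 atom for every two
# surviving U-238 atoms works out to roughly 2.6 billion years.
print(f"{zircon_age_years(0.5):.2e} years")
```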
I believe other methods similarly depend on the radioactive element being selectively incorporated when the rock crystallizes, but not the decay product. For example, in potassium-argon dating, potassium is found in many minerals, but argon is a gas and can escape until the rock crystallizes and traps it. So any argon in a rock is a decay product of potassium.
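Same idea as a sketch, with the wrinkle that only a minor branch of K-40 decays actually produces Ar-40 (most of it becomes Ca-40), so the age equation carries a branching factor. The decay constants are the commonly quoted values; the ratio and function name are mine, purely for illustration.

```python
import math

# Commonly quoted decay constants for K-40, per year.  Only the electron-
# capture branch makes Ar-40; the beta branch makes Ca-40.
LAMBDA_AR    = 0.581e-10   # branch producing Ar-40
LAMBDA_TOTAL = 5.543e-10   # both branches together

def k_ar_age_years(ar40_per_k40):
    """Age implied by a measured radiogenic Ar-40 / K-40 atom ratio,
    assuming no argon was trapped before crystallization and none
    leaked out afterwards."""
    return math.log(1.0 + (LAMBDA_TOTAL / LAMBDA_AR) * ar40_per_k40) / LAMBDA_TOTAL

# Toy ratio, not a real measurement:
print(f"{k_ar_age_years(0.01):.2e} years")   # roughly 1.6e8 years
```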
I think elements tend to segregate by density while molten, too. So if you have a mixed sample of lead and uranium, and melt it, when it re-solidifies they’ll be at least partly separated.
This is my understanding and please feel free to correct it.
First, there are extremely few old rocks accessible to humans on Earth. This is due to plate tectonics, by which the rocks on Earth’s surface are “replaced” every so many million years.
Second, most rocks on Earth are contaminated with lead due to the use of lead in gasoline.
So Patterson, who first established Earth’s age, used meteorites extensively to date Earth, with the assumption (which later turned out to be true) that Earth is the same age as the meteorites. The meteorites themselves were also contaminated by lead in Earth’s atmosphere (also from gasoline), and Patterson spent about half a decade collecting samples and then another half a decade building the world’s first sterile lab.
AFAIK, most of the uranium in any body of the solar system, whether Earth or an asteroid, was already present in the primordial cloud that formed the solar system, having formed in supernovae long before the solar system. So the uranium in an asteroid wouldn’t be preferentially older than that of Earth.
Individual atoms of uranium in the cloud could have formed in different supernovae and be billions of years different in age. But the probability that a particular atom of uranium will decay doesn’t change with age. An atom that’s 10 billion years old is no more likely to decay than one that was created 10 seconds ago. Which atom decays is completely random, which is what the concept of half-life is based on.
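You can see that “no memory” property in a quick toy simulation (mine, purely illustrative, with arbitrary units): atoms that have already survived a long time are no more likely to decay in the next time step than atoms created just now.

```python
import random

HALF_LIFE_STEPS = 10                          # arbitrary toy units
P_DECAY = 1 - 0.5 ** (1 / HALF_LIFE_STEPS)    # per-step decay probability

rng = random.Random(0)

def fraction_decaying(n_atoms):
    """Fraction of n_atoms that decay in one further time step."""
    return sum(rng.random() < P_DECAY for _ in range(n_atoms)) / n_atoms

# "Old" atoms: the survivors of 50 earlier steps.  "New" atoms: just created.
old_survivors = sum(all(rng.random() >= P_DECAY for _ in range(50))
                    for _ in range(200_000))

print(fraction_decaying(old_survivors))   # ~0.067 for the old survivors
print(fraction_decaying(200_000))         # ~0.067 for new atoms -- same odds
```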
Yeah, I could have worded that particular question better. I had it in my mind that that particular deposit might be more likely to stay physically associated with the lead its decay produced, making it more likely to give an older reading than a sample that had been percolating around in the Earth for a while (though, on further reflection, that doesn’t seem terribly likely), but didn’t convey the idea in my post.
At any rate, the answers y’all’ve supplied sound plausible and satisfying to this layman. Thanks.
Uranium-thorium dating depends on the fact that thorium isn’t soluble in water. Organic materials acquire their uranium from water, so any thorium there will be the product of radioactive decay.
That’s not the case with zircon - we are only looking at U that *did* get moved.
Also, I think you have a misapprehension of how U behaves naturally - there aren’t chunks of primordial pure metallic U (+Pb) floating around; individual U atoms associate with all sorts of other atoms quite readily.
A couple of minor comments on the OP regarding C-14 specifically, and possibly not relevant to the longer time-span methods.
C-14 is produced fairly steadily, but it’s not constant. The difference is enough to produce archaeologically frustrating anomalies. After the start of nuclear testing it all went even more tits-up. That’s why C-14 dates use the date format 12345 BP [Before Present] ± 678 as a 1-sigma value. ‘Before Present’ means before 1950, as a constant referent.
These are radiocarbon years, and in archaeology the term calibration is used when they are converted to calendrical years. Because C-14 production was not constant, calibration curves have been produced, primarily by developing tree-ring sequences in various parts of the world going back thousands of years and carbon-dating them to death. Broad location and the material being tested are both variables that influence the results.
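For what it’s worth, the mechanics of calibration boil down to inverting the curve: take a radiocarbon age and look up which calendar ages produce it. Here’s a bare-bones sketch with completely made-up numbers standing in for a real calibration curve like IntCal:

```python
import bisect

# (calendar years BP, radiocarbon years BP) -- invented values, NOT real
# calibration data, and assumed monotonic for simplicity.
TOY_CURVE = [(3000, 2850), (3100, 2940), (3200, 3050), (3300, 3130)]

def calibrate(radiocarbon_bp):
    """Linearly interpolate a calendar age from a radiocarbon age that
    falls inside the toy curve.  Real calibration also propagates the
    +/- uncertainty and copes with wiggles where the curve reverses."""
    rc_ages = [rc for _, rc in TOY_CURVE]
    i = bisect.bisect_left(rc_ages, radiocarbon_bp)
    (c0, r0), (c1, r1) = TOY_CURVE[i - 1], TOY_CURVE[i]
    return c0 + (c1 - c0) * (radiocarbon_bp - r0) / (r1 - r0)

print(calibrate(3000))   # ~3154 calendar years BP with these toy numbers
```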
The thing we often forget is that the half-life [5730 years for C-14] means that for a date on the order of 57,000 years BP, which is a really, really interesting time for the first modern human dispersal into the big wide world, you are dealing with ten halvings of the original C-14, meaning about a thousandth of the original amount. Its short half-life is great, but it taps out quickly as well.
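The arithmetic, for anyone who wants to check it:

```python
half_life = 5730        # C-14 half-life in years
age = 57_000            # years BP

halvings = age / half_life
fraction_left = 0.5 ** halvings
print(halvings, fraction_left)   # ~9.95 halvings, about a thousandth left
```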