Fact Check: C-14 Dating

Specifically, I'm looking for a mathematical proof or an explanation of my professor's lecture today in my gen ed Physics class. Anyone who wishes to argue whether it's a valid dating method or not can go hijack someone else's C-14 thread.

According to my physics professor today in class, radioactive matter decays probabilistically (i.e. no direct causative factors are known, and things like temperature, pressure, and density don't affect it). So far so good. He then said that the range within which the amount left after one half-life will fall with certainty is the square root of the new amount. And because an equation is worth a thousand words:

C-14 t(1/2) = 5730 years

Today I have 100 atoms of C-14, so in 5730 years I will most likely have 50 atoms of C-14 left. Now according to my professor, I will have:

Sqrt(50) = ~7
Therefore I will have 50 (+/-) 7 atoms, or 43-57 atoms left with certainty. That's a range of (+/-) 14%.

The reason this is important is because, if true, one mole (14 grams) of C-14 would, after one half-life, have a spread of only about (+/-) 0.0000000002%, or 2x10^(-10)%, of the remaining amount. Thus making radioactive dating a very reliable thing (assuming the rate of decay is fairly constant, which quantum theory seems to dictate that it will be).
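In case my arithmetic is suspect, here's a quick Python sketch of the numbers above. It just assumes the sqrt rule is exactly as my professor stated it, which is the very thing I'm asking about:

[code]
# Quick sanity check of the sqrt rule my professor described.
# Assumes the spread really is sqrt(remaining atoms) -- the claim in question.
import math

AVOGADRO = 6.022e23  # atoms in one mole

for n_start in (100, AVOGADRO):
    remaining = n_start / 2        # after one half-life (5730 years for C-14)
    spread = math.sqrt(remaining)  # the professor's +/- range
    print(f"start={n_start:.3g}  left={remaining:.3g}  "
          f"+/-{spread:.3g}  ({100 * spread / remaining:.2g}% of what's left)")
[/code]

That prints 14% for 100 atoms and about 2e-10% for the mole, which is where my numbers came from.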

So back to the question: where does this square-root rule of thumb come from, and what statistical level of certainty does it represent?

Good Lord… this might be one for Cecil Himself…

I’m not sure of the exact equations, but you are missing a lot of steps. The “sqrt” you are talking about is likely a statistical or standard deviation of some sort. In some instances, a good, first order approx of the std dev is sqrt(mean). Not sure how it would apply in this particular case, but that could be it. Also, you left a step or two out of your calculation for 1 mole. The variation (as a percent) should not depend on the starting amount. It looks like you are using the deviation derived from 100 atoms for the case when you have one mole. Can’t do that, wouldn’t be prudent!

Also, try the General Questions forum for this type of problem. It has a specific answer (not debatable) and there are a lot of smart people over there who love this kind of stuff. It might be a good idea to do a bit of googling first. You'd be surprised what you can find on the web w/o having to rely on someone else to supply the answer.

That should be sqrt(mean). I.e., the standard deviation can be approximated by the square root of the mean.

As fate would have it, that term wound up at the end of the line and had an unintentional “carriage return” added.

This is where it comes from. If you do a series of measurements on a given material, each for a given time, and then plot the measured number of decays (n) against the probability of measuring that number of decays (always keeping the time constant), you'll get what is approximately a Gaussian distribution.

For this distribution the standard deviation is equal to the square root of the mean number of decays < n >. (Strictly speaking that's a property of the Poisson statistics of counting independent random events; the Gaussian is just what the distribution looks like once the counts get large.)

Your Prof has specified the special case where the time is exactly one half-life. At this point the average number of decays equals the average number of remaining atoms, so the standard deviation will equal the square root of the remaining C-14.
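If you'd rather see it than take my word for it, here's a minimal simulation sketch in Python. It assumes the decays follow Poisson counting statistics (the standard model for independent random events), which is where the sqrt(mean) property actually comes from:

[code]
# Minimal sketch: simulate many repeated fixed-time decay counts and
# compare the spread of the counts to sqrt(mean). Assumes Poisson
# counting statistics for independent random decays.
import numpy as np

rng = np.random.default_rng(0)
mean_decays = 50  # e.g. 100 atoms observed for one half-life
counts = rng.poisson(mean_decays, size=100_000)

print("mean       =", counts.mean())         # ~50
print("std dev    =", counts.std())          # ~7.07
print("sqrt(mean) =", np.sqrt(counts.mean()))
[/code]

Plot a histogram of those counts and you get the roughly Gaussian curve described above.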

Moderator’s Note: Moving to General Questions.

For you readers at home, ‘Gaussian Distribution’ and ‘Gaussian Curve’ are the proper names of the Bell Curve. Also, if I remember my statistics properly, a range of one standard deviation above and below the mean encompasses roughly 68% of a Bell Curve (i.e. 68% of all tests of this little formula will fall within the expected range).
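You don't even need a stats table for that 68% figure; a couple of lines of Python will confirm it (assuming nothing beyond the curve being Gaussian):

[code]
# The ~68% figure for +/- one standard deviation, from the Gaussian CDF.
# math.erf integrates the bell curve; erf(1/sqrt(2)) is the probability
# of landing within one SD of the mean.
import math

within_one_sd = math.erf(1 / math.sqrt(2))
print(f"{within_one_sd:.4f}")  # 0.6827, i.e. about 68%
[/code]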

Good detective work on the approximation - that seems like exactly the sort of off-hand estimate that this professor would use.

Note that the two explanations are not equivalent, and I believe zigaretten is correct.

It's the square root of n, the number of events, that sets the standard deviation. I remembered this from my physics classes, but the best explanation I could find online is here (PDF file). Scroll down to "The square root law" to see this:

So, if you are counting something like particle decays, which are binary events (either it decays or it doesn't), the standard deviation of the count of particles that have decayed is the square root of that count.

You can see then that the SD is much smaller as a percentage of n for large n. If you count 100 decays, the standard deviation is 10. If you count a million, the standard deviation is 1000.
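To put numbers on that in one place (just a sketch of the 1/sqrt(n) scaling, nothing else assumed):

[code]
# How the square-root law plays out as a percentage of the count n:
# relative spread = sqrt(n)/n = 1/sqrt(n), so it shrinks as n grows.
import math

for n in (100, 10_000, 1_000_000):
    sd = math.sqrt(n)
    print(f"n={n:>9,}  sd={sd:>7.1f}  relative={100 * sd / n:.2f}%")
[/code]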

So, as your stores of [sup]14[/sup]C decay, the uncertainty of a dating measurement goes up, because the SD shrinks more slowly than the count itself. For example, let's say I have a million [sup]14[/sup]C atoms in my sample. After the half-life, I have half a million, give or take a thousand. Pretty accurate. After another half-life, I've got a quarter million give or take 700. After another, I've got 125,000 give or take 500 - so while I've got 1/8 the number of atoms left, my SD has only dropped by half. So you can see that if I start with only 50 atoms, my accuracy is going to suck greatly.

(Note: The numbers I used in my example are just round and not exactly calculated - the idea is that whatever thing you can count, the SD for that thing is sqrt(n). So, if you are counting atoms, your SD is for atoms. If you are counting decays, your SD is for decays. I blurred the lines for the sake of an easy illustration…)
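For anyone who wants the un-rounded figures, here's that example computed exactly, still using the same sqrt(n) rule of thumb and nothing else:

[code]
# The half-life example above with the square roots computed exactly,
# rather than rounded. Still treats the SD as sqrt(atoms remaining),
# per the rule of thumb being discussed.
import math

atoms = 1_000_000
for half_life in range(1, 5):
    atoms //= 2
    sd = math.sqrt(atoms)
    print(f"after {half_life} half-lives: {atoms:>7,} atoms "
          f"+/- {sd:,.0f} ({100 * sd / atoms:.2f}%)")
[/code]

Same trend: the absolute spread falls, but the spread as a percentage of what's left keeps climbing.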