Statistics question: how to tell when two numbers are different...

I have a statistics question that seems like it ought to be obvious, but I don’t know how to approach it.

Ok, so let’s say I have two numbers. The first number is 0.10 +/- 0.01, where the error is a 90% confidence interval. The second number is 0.14 +/- 0.02. I’m not entirely sure what the error on the second number actually is - it comes from an error on a count rate, so I’m going to guess it’s a standard-deviation (1-sigma) error.

Is there some test I can do to tell whether these numbers are significantly different at the 3-sigma level? Thanks to anyone who can help!

They’re not. Assuming normal distributions, etc., if you widen the first interval from a 90% CI to a 99.7% CI (i.e., “3 sigma”), it already almost touches the second number’s quoted error bar, and that’s before folding in the second number’s own uncertainty. Overly simplified, but it’s not plausible that they’re significantly different at that level.
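To make the scaling concrete, here is a minimal Python sketch, assuming Gaussian errors (1.645 is the two-sided 90% point of a normal distribution, rounded to 1.64 elsewhere in this thread):

```python
half_width_90 = 0.01              # quoted 90% half-width on the first number
sigma = half_width_90 / 1.645     # implied standard error, ~0.0061
half_width_997 = 3 * sigma        # 99.7% ("3 sigma") half-width, ~0.018

lo, hi = 0.10 - half_width_997, 0.10 + half_width_997
print(f"first number as a 99.7% CI: [{lo:.3f}, {hi:.3f}]")   # ~[0.082, 0.118]
print("second number as quoted:     [0.120, 0.160]")         # 0.14 +/- 0.02
```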

Ok, two questions:
How do you change a 90% CI to a 99.7% CI?
Can I just use the CI error bar on the first value to determine if the two numbers are significantly different? Or do I need to factor in the other error bar too?

(I should have mentioned, this was just a hypothetical example. I have a list of numbers where the first column was calculated one way and the second column was calculated another way. They ought to give the same results, and I’m checking to make sure that there are no significant discrepancies.)

Sorry, disregard the second question. (missed the edit window)

It’s not clear whether your two statistics, 0.10 and 0.14, are sample means or proportions. If they’re means, it is possible to back out the standard error for each statistic (but you will need to know the confidence level for the 0.14 interval) and then use a two-sample difference-of-means test to address your question. Hope you realize that a 3-sigma test corresponds to a significance level of less than 0.3%. That’s quite low, and my guess is that the conclusion will be no difference.
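In code, that test just combines the two standard errors in quadrature into a z statistic and reads off a two-sided p-value. A rough sketch under a normal model (the function name is just illustrative, and it assumes both quoted errors have already been converted to standard errors):

```python
import math

def two_sample_z(x1, se1, x2, se2):
    """z statistic and two-sided p-value for the difference of two
    independent estimates with known standard errors (normal model)."""
    z = (x1 - x2) / math.sqrt(se1**2 + se2**2)
    p = math.erfc(abs(z) / math.sqrt(2))   # two-sided tail area
    return z, p

# The thread's numbers: 0.01 treated as a 90% half-width (divide by 1.645),
# and 0.02 guessed to already be a standard error.
z, p = two_sample_z(0.10, 0.01 / 1.645, 0.14, 0.02)
print(z, p)   # about -1.9 and 0.056; a 3-sigma detection needs p below ~0.003
```

A 3-sigma requirement corresponds to a two-sided p-value below roughly 0.003, which is what the 0.3% figure above refers to.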

Hmm, I don’t think the two statistics (0.10 and 0.14) are means or proportions. The first is a single measurement of an object using one technique, and the second is a single measurement of the same object using a different technique. I want to see if the techniques give significantly different answers. If they do, that’s important for the particular object I’m measuring, so I want to make sure they really are different (hence the 3-sigma requirement).

I looked up the two-sample difference of means test and it looks like it’s intended for comparing two samples rather than two individual numbers with errors. I could change the sample size to 1, but I’m not sure if I’m allowed to do that…

Just ask whether their difference is consistent with zero. With Gaussian (normal) errors, a 90% C.L. range is 1.64 times as wide as the standard-error (68.3% C.L.) range. So:

a = 0.1 +/- (0.01/1.64) = 0.1 +/- 0.0061
b = 0.14 +/- 0.02 (already a standard error?)

The difference is:
D = a - b

The error on the difference can be obtained through the usual propagation of errors, which here gives:

(error(D))^2 = (error(a))^2 + (error(b))^2

So:

D = -0.04 +/- 0.021

The difference between these numbers is nonzero at the 1.9-sigma level. (Or, back in the language of the original problem, these numbers are different at only the 1.9-sigma level, which falls short of your 3-sigma requirement.)
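For anyone who wants to reproduce the arithmetic, here is the same calculation as a short Python sketch (1.645 is the two-sided 90% point of a normal distribution, and 0.02 is assumed to already be a standard error, as above):

```python
import math

se_a = 0.01 / 1.645                    # 90% half-width -> standard error, ~0.0061
se_b = 0.02                            # assumed to already be a standard error
D = 0.10 - 0.14                        # a - b
err_D = math.sqrt(se_a**2 + se_b**2)   # propagation of errors, ~0.021

print(f"D = {D:+.3f} +/- {err_D:.3f}  ({abs(D) / err_D:.1f} sigma from zero)")
# -> D = -0.040 +/- 0.021  (1.9 sigma from zero)
```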

Ok, this is finally making sense to me. Thanks everyone for the help!