 # Calling all (any?) statistic geek(s)

A quickie I hope:

Explain, in terms I can understand, the difference between and the uses of the ‘t’ test and the ‘z’ test.

(I can add, subtract, multiply, and divide. Algebra makes me sweat. Calculus makes me puke. Trig. induces a coma. IOW, math is not my strong suit.)

All responses, even ones that berate me, are appreciated.

Hmm, I’m surprised no one’s answered yet. It’s been a while since I had to learn this stuff, but here goes…

z is the ideal; t is the real.

OK. If you took an infinite number of fair coins and flipped them an infinite number of times, counted the number of times they each landed heads and plotted it on a graph, you’d get a very sore thumb. You’d also get the very famous bell-shaped curve.

You would find that more coins landed heads 50% of the time than any other single result. You’d find the number of coins giving you other results diminished as you went further and further from the mean in either direction. You would find that 95% of all coins would give you a result within 1.96 standard deviations of the mean – that distance in standard deviations is the z-score.
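Since nobody has infinite coins, here’s a quick sketch of that thought experiment with finite numbers (my own illustration, not part of the original explanation): flip 10,000 simulated fair coins 100 times each and count how many land within 1.96 standard deviations of the mean.

```python
import random

# Each "coin" is flipped 100 times; for a fair coin the mean is 50 heads
# and the standard deviation is sqrt(100 * 0.5 * 0.5) = 5.
random.seed(0)
n_coins, n_flips = 10_000, 100
mean, sd = n_flips * 0.5, (n_flips * 0.25) ** 0.5

within = 0
for _ in range(n_coins):
    heads = sum(random.random() < 0.5 for _ in range(n_flips))
    z = (heads - mean) / sd          # this coin's z-score
    if abs(z) <= 1.96:
        within += 1

print(within / n_coins)              # roughly 0.95 (a shade less,
                                     # since head counts are discrete)
```

The answer comes out close to 95%, just as the bell curve predicts.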

(If you need a definition of standard deviation, let me know.)

You would also find that 5% of these perfectly fair coins would give you odd-ball results that are way off the mean. Nothing wrong with the coins; it’s just the way probability works. Odds of “a million-to-one” doesn’t mean it won’t happen or can’t happen; it means it will happen, roughly once in every million tries. Could be the first try, could be the millionth try. No way of knowing.

Most people, for most purposes, are willing to be 95% certain. Let’s say I take a coin and flip it and get a wacky result – a ton of “heads” – that corresponds to a z-score of 1.97. I know that a perfectly normal coin would give me a result that weird, just by chance, slightly less than 5% of the time. So I’m willing to bet – with 95% confidence – that this did not happen by chance, that there is something up with this coin.
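You can check that “slightly less than 5%” figure yourself. This little sketch (my addition) uses the standard normal curve’s error function to ask: how often would a fair coin land at least 1.97 standard deviations from the mean, in either direction, just by chance?

```python
import math

def two_sided_p(z):
    # P(|Z| >= z) for a standard normal, via the complementary
    # error function: erfc(z / sqrt(2)) = 2 * (1 - Phi(z))
    return math.erfc(z / math.sqrt(2))

p = two_sided_p(1.97)
print(round(p, 4))   # about 0.0488 -- just under the 5% cutoff
```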

Now, of course, you’ve got better things to do with your life than flip coins an infinite number of times. So you decide to flip each one a smaller, set number of times – say, ten. And you flip a coin 10 times and it lands heads each time. The chance of that is 1 in 1,024 – pretty long odds.
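The arithmetic behind that figure, for anyone following along: each flip is an independent 50/50 event, so ten heads in a row has probability (1/2)¹⁰.

```python
# Ten independent 50/50 flips, all heads:
p_ten_heads = 0.5 ** 10
print(2 ** 10, p_ten_heads)   # 1024 0.0009765625
```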

BUT, a little voice in the back of your mind is bugging you. Isn’t it possible that this is a perfectly normal coin that, just by chance, is giving you a run of heads? Isn’t it possible that if you flipped that same coin another ten times, or twenty, or 100, that things would even out?

Of course it is. But you’ve got a lot of coins to flip. You can’t spend any more time on just one. You have to make a decision. What are you going to do?

You turn to t-scores. Basically, the t-score acknowledges that when you settle for less than a perfect test, you’ve got to increase your margin of error. You’ve got to look a little more charitably on some of those borderline cases.

What t-scores do, essentially, is push the threshold out a little further. The 95% confidence level becomes a slightly larger number of standard deviations. For 95% confidence with an n of 10 (nine degrees of freedom), the t cutoff is about 2.26, instead of the z cutoff of 1.96. The higher your n (number of flips), the closer your t-score gets to the “ideal” z-score. At n=30, they are close enough for most studies. At n=300, they are virtually indistinguishable.
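You can even see that wider cutoff emerge by simulation (my own sketch, not something the original explanation relies on): draw lots of small samples of 10 from a known normal population, compute each sample’s t-statistic, and look at where the 95% two-sided cutoff lands. It comes out near the textbook value of about 2.26, noticeably bigger than 1.96.

```python
import math
import random
import statistics

# Estimate the 95% two-sided t cutoff for n = 10 by simulation.
# Each trial: draw 10 values from a standard normal and compute
# t = (sample mean - true mean) / (sample sd / sqrt(n)).
random.seed(0)
n, trials = 10, 200_000
t_stats = []
for _ in range(trials):
    sample = [random.gauss(0, 1) for _ in range(n)]
    m = statistics.fmean(sample)
    s = statistics.stdev(sample)
    t_stats.append(abs(m / (s / math.sqrt(n))))

t_stats.sort()
cutoff = t_stats[int(0.95 * trials)]   # 95th percentile of |t|
print(round(cutoff, 2))                # near 2.26, well above 1.96
```

The extra width is exactly the “margin of error” the post describes: with only 10 flips, the sample standard deviation is itself a rough estimate, so the cutoff has to be more charitable.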

I hope that helps.

I am reading slowly and it makes sense. It continually amazes me that statistics is a science and can be so damn concrete. Standing at a craps table, the gut feeling is infinitely more real to my emotional side. Of course the intellect says that Mr. Gambino has the house… and… well, an advantage over me. Oh, do I know that!

Thanks for the 'splanation.

The t-score is the preferred statistical model of beer drinkers worldwide.