Statistics query: relative size of variance

Is there some quick-and-dirty calculation or rule of thumb about the size of the standard deviation compared to the answer scale, and whether this tells you how much consensus exists (in the case of an attitude measure, for example)?

To clarify, let’s say you ask people to rate two movies from 1 to 10. One movie gets a mean rating of 5 with a SD of 3, while the other gets the same mean rating of 5 but the SD is 1. Obviously, the first movie is one that people tended to love or hate. The second movie elicited mainly neutral reactions. Lots of consensus on ho-hum movie #2, but little consensus on Movie #1.
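
Just to make that contrast concrete, here’s a minimal sketch with made-up 1-to-10 ratings (not real data) showing two sets that share a mean of 5 but have very different SDs:

```python
import statistics

# Made-up 1-to-10 ratings, invented purely to illustrate the point.
movie_1 = [1, 1, 2, 2, 5, 5, 8, 8, 9, 9]   # polarized: people loved it or hated it
movie_2 = [3, 4, 4, 5, 5, 5, 5, 6, 6, 7]   # ho-hum: ratings cluster near the middle

for name, ratings in [("Movie #1", movie_1), ("Movie #2", movie_2)]:
    print(f"{name}: mean = {statistics.mean(ratings):.1f}, "
          f"SD = {statistics.pstdev(ratings):.1f}")
# Both means come out to 5.0; the SDs come out to roughly 3 and roughly 1.
```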

Is there some accepted formula for saying, yeah, the SD is yay big on an answer scale that is x size, so we’re looking at disparate attitudes on this movie?

No clue, dear, but I’ll bump this from Page 2 for ya.

:slight_smile:

The chart you’re looking for ought to be in the back of any statistics textbook. Check for “Critical Values of t.”

Off the top of my head (and IANAStatistician), you would want to divide the standard deviation by the range of values obtained to get a rough feel for what a good consensus would be. To get a percentage you could use
1-(SD/R)
and as the standard deviation approached zero (perfect consensus) the consensus would approach 100%.

Since the standard deviation (in a normal distribution) has some interesting implications, you could use that formula to come up with some other ideas about consensus.

About 95% of the data in a normal distribution falls within two standard deviations of the mean. So you could use
1-(2SD/R) for another measure of consensus. I actually found this second formula online, so maybe my WAG is meaningful in some way in the world of statistics.
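
Here’s a rough sketch of both of those back-of-the-envelope measures. I’m reading R as the full width of the answer scale (9 for a 1-to-10 scale) rather than the range of the obtained values, just so a constant set of ratings doesn’t divide by zero; adjust to taste:

```python
import statistics

def consensus(ratings, scale_min=1, scale_max=10, k=1):
    """Back-of-the-envelope consensus score: 1 - k*SD/R.

    k=1 is the first formula above, k=2 the two-standard-deviation version.
    Clamped at 0 so a very large SD can't push the score negative.
    """
    r = scale_max - scale_min
    sd = statistics.pstdev(ratings)
    return max(0.0, 1.0 - k * sd / r)

polarized = [1, 1, 2, 2, 5, 5, 8, 8, 9, 9]
ho_hum = [3, 4, 4, 5, 5, 5, 5, 6, 6, 7]
print(consensus(polarized), consensus(ho_hum))              # k = 1
print(consensus(polarized, k=2), consensus(ho_hum, k=2))    # k = 2
```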

In a perfect world you’d also want to include the mode somehow, though I have no idea how even as a WAG. :wink:

I’ve obviously used a t-test to test hypotheses about differences between two means, but I’m unfamiliar with how it would help me to describe a single distribution (which is essentially what I’m getting at) or to compare several distributions.

Can you elaborate?

I haven’t seen any calculation or ratio-like rule of thumb for what you’ve described. However, I think anyone who’s at least marginally familiar with basic statistics would be able to tell what’s going on just by looking at the two means and the two s.d.s.

Example A: Mean = 5 and s.d. = 3
Example B: Mean = 5 and s.d. = 1

I think your search for consensus is all right there in these statistics.

Now there is something called kurtosis, which does in some way measure how dispersed the distribution is. Some stats programs have an option you can click on to run a kurtosis measure. In Example A (which I think we’re assuming is a bimodal distribution?), this would be called a platykurtic curve, meaning that there are “fewer items at the center and at the tails than the normal curve but has more items in the shoulders.”

Sokal, R. R., and F. J. Rohlf. 1995. Biometry (3rd ed.). New York: W. H. Freeman & Co.

Cranky, if your data are ordinal, you’ll have to assume that the categories are equally spaced in order to use the normal distribution. It’s easier to approximate interval-level data if you have many categories.

Assuming you do have many categories, then for a single sample, you might want to look at kurtosis. If the data are clumped around the mean, excess kurtosis is positive; if they are spread out, it is negative. (A normal distribution has kurtosis 3, i.e. excess kurtosis 0, to give you a feel for the scale.)
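
If you have SciPy around, here’s a quick sketch of checking that on two made-up samples (scipy.stats.kurtosis reports excess kurtosis by default, i.e. about 0 for a normal distribution):

```python
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(0)

normal_ish = rng.normal(loc=5, scale=1, size=10_000)        # clumped around the mean
bimodal = np.concatenate([rng.normal(2, 0.5, 5_000),
                          rng.normal(8, 0.5, 5_000)])       # love-it-or-hate-it shape

# fisher=True (the default) subtracts 3, so a normal sample scores about 0.
print(kurtosis(normal_ish))   # close to 0
print(kurtosis(bimodal))      # clearly negative (platykurtic)
```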

If you want to compare the variances of two groups (which one is more variable, and is the difference significant?), you can apply an F-test: divide the larger variance by the smaller one and use the F-distribution to determine significance.
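
A hand-rolled sketch of that variance-ratio F-test (the wiring is my own, assuming roughly normal data in each group, with n − 1 degrees of freedom per group):

```python
import numpy as np
from scipy.stats import f

def variance_ratio_test(a, b):
    """F-test on the ratio of two sample variances: larger over smaller."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    var_a, var_b = a.var(ddof=1), b.var(ddof=1)          # sample variances
    if var_a >= var_b:
        F, dfn, dfd = var_a / var_b, len(a) - 1, len(b) - 1
    else:
        F, dfn, dfd = var_b / var_a, len(b) - 1, len(a) - 1
    p = min(1.0, 2 * f.sf(F, dfn, dfd))                  # two-sided p-value
    return F, p

movie_1 = [1, 1, 2, 2, 5, 5, 8, 8, 9, 9]   # made-up polarized ratings
movie_2 = [3, 4, 4, 5, 5, 5, 5, 6, 6, 7]   # made-up ho-hum ratings
print(variance_ratio_test(movie_1, movie_2))
```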

Again, the use of interval-level descriptors or tests of significance is valid only if you can assume equal spacing among your categories. I’ve seen the statement that this is appropriate for scales with 5 or more categories.

(And I see jharding got in with the kurtosis bit too! :slight_smile: )