Is there some quick-and-dirty calculation or rule of thumb comparing the size of the Standard Deviation to the size of the answer scale, and whether this tells you how much consensus exists (in the case of an attitude measure, for example)?
To clarify, let's say you ask people to rate two movies from 1 to 10. One movie gets a mean rating of 5 with an SD of 3, while the other gets the same mean rating of 5 but an SD of 1. Obviously, the first movie is one that people tended to love or hate. The second movie elicited mainly neutral reactions. Lots of consensus on ho-hum Movie #2, but little consensus on Movie #1.
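To make the contrast concrete, here is a small sketch with made-up rating lists (hypothetical data, not from any real survey) showing how two movies can share a mean of 5 yet differ sharply in SD, and how the SD compares to the largest SD the 1-to-10 scale even allows:

```python
from statistics import mean, pstdev

def summarize(ratings):
    """Return (mean, population SD) for a list of ratings."""
    return mean(ratings), pstdev(ratings)

# Hypothetical ratings on a 1-to-10 scale (illustrative data only)
polarized = [1, 1, 2, 8, 9, 9]   # love-it-or-hate-it movie
neutral   = [4, 5, 5, 5, 5, 6]   # ho-hum movie

m1, sd1 = summarize(polarized)   # mean 5, SD ~3.7
m2, sd2 = summarize(neutral)     # mean 5, SD ~0.58

# Reference point: the largest possible SD on a 1-to-10 scale occurs
# when ratings split evenly between the two endpoints, giving
# SD = half the range = 4.5.
max_sd = pstdev([1, 10])

print(f"polarized: mean={m1}, SD={sd1:.2f} ({sd1 / max_sd:.0%} of max)")
print(f"neutral:   mean={m2}, SD={sd2:.2f} ({sd2 / max_sd:.0%} of max)")
```

One informal yardstick this suggests: dividing the observed SD by the maximum possible SD for the scale (half the range, for a bounded scale) gives a rough 0-to-1 "dissensus" figure, though I don't know whether that normalization is a formally accepted rule.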
Is there some accepted formula for saying, yeah, the SD is yay big on an answer scale that is x units wide, so we're looking at disparate attitudes toward this movie?