If I understand the problem right, then standard deviation will give you an answer, but it won’t be what you want. Imagine:
Group decisions: 6 5 5 4 5 4 5 5 4 6 6 5
Individual decisions: 9 9 9 9 9 9 8 9 9 9 9
I think you would call the group decisions above less extreme than the individual decisions, but the SD is larger for the group decisions.
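To see this concretely, here’s a quick check in Python (standard library only) that the group sample really does come out with the larger standard deviation, even though its values sit nearer the middle of the scale:

```python
from statistics import pstdev

group = [6, 5, 5, 4, 5, 4, 5, 5, 4, 6, 6, 5]
individual = [9, 9, 9, 9, 9, 9, 8, 9, 9, 9, 9]

# Population standard deviation of each made-up sample
print(pstdev(group))       # ≈ 0.71
print(pstdev(individual))  # ≈ 0.29
```

So by SD alone the group decisions look *more* spread out, which is exactly why SD answers the wrong question here.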
I don’t think you want an out-of-the-box algorithm. I think you just want to define what extreme means and ask what fraction of decisions made by groups/individuals meets that definition.
For example, if extreme means anything in the set [1,2,3,4,6,7,8,9], then my above data samples would show:
Group decisions: 6 out of 12 extreme = 50% (call this f[sub]G[/sub])
Individual decisions: 11 out of 11 extreme = 100% (call this f[sub]I[/sub])
The next step is the critical step: you must estimate the uncertainty on these fractions. Barring any number of complications which may or may not be present, the number of extreme decisions in each sample will follow a binomial distribution, so f[sub]G[/sub] and f[sub]I[/sub] carry binomial errors. Assuming you have lots of data, and your distributions are less, umm, weird than the ones I made up, you could use the data themselves to estimate the error.
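As a sketch of both steps (the function name, and the choice of “anything but 5 on a 1–9 scale” as the extreme set, are just my assumptions for illustration):

```python
import math

def extreme_fraction(decisions, extreme=frozenset(range(1, 10)) - {5}):
    """Fraction of decisions falling in the 'extreme' set, plus the
    naive binomial standard error sqrt(f * (1 - f) / n)."""
    n = len(decisions)
    k = sum(d in extreme for d in decisions)  # extreme count: binomial
    f = k / n
    # Naive estimator; it breaks down when f is exactly 0 or 1, where
    # something like a Wilson or Clopper-Pearson interval is safer.
    se = math.sqrt(f * (1 - f) / n)
    return f, se

f_G, se_G = extreme_fraction([6, 5, 5, 4, 5, 4, 5, 5, 4, 6, 6, 5])
print(f_G, se_G)  # 0.5, ≈ 0.144
```

(Note that for my individual sample, where every decision is extreme, the naive error comes out exactly zero — one of those complications I mentioned.)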
If you have enough data to make your errors small enough, then you will now be able to address whether f[sub]G[/sub] is significantly greater than f[sub]I[/sub]. What might be a bit cleaner is to ask whether the ratio r=f[sub]G[/sub]/f[sub]I[/sub] is inconsistent with 1 (and in which direction!) given the error on r (which comes from propagating the errors on f[sub]G[/sub] and f[sub]I[/sub]).
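For the ratio, standard first-order error propagation gives (sigma_r / r)^2 = (sigma_G / f[sub]G[/sub])^2 + (sigma_I / f[sub]I[/sub])^2. A sketch, where the fractions and errors plugged in at the end are made-up numbers, not taken from the example above:

```python
import math

def ratio_with_error(f_g, se_g, f_i, se_i):
    """r = f_G / f_I with first-order error propagation:
    (se_r / r)^2 = (se_g / f_g)^2 + (se_i / f_i)^2."""
    r = f_g / f_i
    se_r = r * math.sqrt((se_g / f_g) ** 2 + (se_i / f_i) ** 2)
    return r, se_r

r, se_r = ratio_with_error(0.50, 0.14, 0.80, 0.05)
# How many sigma is r away from 1 (and in which direction)?
z = (r - 1) / se_r
print(r, se_r, z)  # 0.625, ≈ 0.179, ≈ -2.1
```

With these made-up numbers r sits a bit over two sigma below 1, i.e. the group fraction would be marginally significantly *smaller*.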
I’ve glossed over / left out some aspects which could be important, but this at least is (I think) closer to what you want (assuming I actually understand the problem in the first place. If I don’t, then my apologies for wasting everybody’s time.)
(And this just goes to show that actually getting a quantitative result from a set of data is way more involved than it might seem at first.)