I’m doing a little survey among the employees (about ten) in one department of a company. One of the questions lists a number of activities done by the employees and asks them to specify (1) what percentage of their working time they actually spend on each activity and (2) what percentage they think they should ideally spend on it.
I would like to convert (or have Excel convert) the returns into a nice little handy figure that tells me at a glance whether, overall, the department is close to its perceived ideal state or not. What would be a meaningful way of doing that? I was thinking of simply computing the average difference between the ideal and actual states, with each activity by each employee being one item in that data set. An alternative would be a more elaborate correlation between the ideal and actual states, such as a Pearson r, but I’m not too sure if that would really be meaningful.
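To make the two candidates concrete, here is a small sketch computing both for a single (invented) respondent; the activity percentages are made up purely for illustration:

```python
# One hypothetical employee's answers, as percentages per activity.
actual = [40, 25, 20, 15]   # % of time actually spent
ideal  = [30, 30, 25, 15]   # % of time they think would be ideal

# Candidate 1: mean absolute difference between actual and ideal.
mad = sum(abs(a - i) for a, i in zip(actual, ideal)) / len(actual)
print(mad)  # 5.0

# Candidate 2: Pearson r between actual and ideal allocations.
n = len(actual)
mean_a = sum(actual) / n
mean_i = sum(ideal) / n
cov = sum((a - mean_a) * (i - mean_i) for a, i in zip(actual, ideal))
var_a = sum((a - mean_a) ** 2 for a in actual)
var_i = sum((i - mean_i) ** 2 for i in ideal)
r = cov / (var_a * var_i) ** 0.5
print(round(r, 3))
```

Note that the two numbers answer different questions: the mean difference says how far the allocations are apart, while r only says whether activities ranked high in actual time also rank high in ideal time.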
In all statistics, the first part of performing a useful analysis is figuring out what question you really want to answer and what you are going to do with that information. Suppose you run your statistics and you get the answer 0.762. Is this a good number or a bad number? What does it tell you about your department? Without some form of context it doesn’t tell us a lot.
If you are comparing departments, then a summary statistic for each department will allow you to do so. If only one department is being analyzed, you may instead want separate statistics for each task, so as to determine where exactly the time is being ill spent. If half the employees say they spend too much time on task A and half say they spend too little, does that mean you are probably weighting that task correctly, or does it indicate a problem? Depending on your answer, different analyses may be appropriate.
That said, a good place to start is the RMS error, which is defined as the square root of the average squared difference. So basically: take the actual value for each task, subtract the desired value, and square that number; do the same for each employee/task combination; average these numbers together and take the square root of the result. This measures how far apart the two values are on average.
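Outside of Excel, the recipe above is a one-liner; here is a minimal sketch with invented employee/task percentages:

```python
# Actual and desired time percentages, one entry per employee/task combination
# (the numbers are made up for illustration).
actual  = [40, 25, 20, 15, 35, 30, 20, 15]
desired = [30, 30, 25, 15, 30, 25, 25, 20]

# RMS error: square each difference, average, then take the square root.
rms = (sum((a - d) ** 2 for a, d in zip(actual, desired)) / len(actual)) ** 0.5
print(round(rms, 2))  # 5.59
```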
RMS is also a good use case for an array formula in Excel: instead of setting up dummy columns for all of the squared differences, then averaging them and taking the square root, you can do it all in one cell.
Set up two columns: one with the actual percentages and another with the ‘desired’ ones. If you use column A for one and column B for the other, with 20 rows of data, you would type in this formula:
=SQRT(AVERAGE((A1:A20-B1:B20)^2))
and then press Ctrl+Shift+Enter.
When you hover over the cell, the formula will appear as {=SQRT(AVERAGE((A1:A20-B1:B20)^2))}.
If you don’t see the curly braces, then it’s not an array formula, and it won’t work as intended.
Well, doing a fancy statistical test with n=10 is kind of like <insert folksy metaphor for pointless hard work>.
And even with only ten respondents, you’re going to have trouble making the responses comparable across everyone. (For instance, are you going to force everyone’s actual and desired activities to add up to 100%? If some do and some don’t, how are you going to handle it?)
Since the data set is small and messy, I’d just do a simple average over each activity (“The department estimates it spends 43% of its time filling out TPS reports, but thinks ideally it would spend 12% of its time doing so.”). That’s as much analysis as you need to (try to) convince the boss to streamline TPS reporting, which I think is all you can do with this data.
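If you do want to script it, a per-activity average is only a few lines; in this sketch the activity names and the second activity’s numbers are invented, and the TPS figures are chosen to reproduce the 43%/12% example:

```python
# Each response maps activity name -> (actual %, ideal %).
responses = [
    {"TPS reports": (45, 10), "Meetings": (25, 30)},
    {"TPS reports": (41, 14), "Meetings": (20, 28)},
]

# Average actual and ideal percentages per activity across all respondents.
summary = {}
for activity in responses[0]:
    actual = sum(r[activity][0] for r in responses) / len(responses)
    ideal = sum(r[activity][1] for r in responses) / len(responses)
    summary[activity] = (actual, ideal)
    print(f"{activity}: actually {actual:.0f}%, ideally {ideal:.0f}%")
```

This gives one plain-English sentence per activity, which is about as boss-friendly as a summary gets.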
I think that’s what I’ll do, for precisely the reason that n = 6 (in fact, not everyone responded) is indeed too small to make fancy stuff meaningful. But thanks everyone for the input!