Most surveys are set up exactly that way, actually.
Generally, the people designing the surveys have a pretty solid idea of where their trouble spots are likely to be. And it can take months to complete a 10,000-person survey - the client is getting interim data during that time and can (and does) add questions if surprise trouble spots crop up. We program the survey to take that into account.
Hence, instead of having all wide-open questions like “What do you think of Product X?”*, what you’re going to see is a whole lot of questions like “Please rate Feature X of Product X on a scale of one to seven, with one being horribad and seven being awesome.” Responders who give an answer at or below some predetermined threshold of satisfaction (typically the mid-point, but not always) are then shunted to a series of questions that explores their lack of satisfaction with Feature X in a lot more depth, while responders who answer above the threshold are shunted off to the next series of questions, about Feature Y (or whatever comes next). Also, every single customer satisfaction survey my firm did included as the final question some iteration of “Is there anything else you’d like to see changed to improve Product X?”, just in case there was an issue that didn’t get covered by the specific questioning.
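If it helps to picture it, here’s a rough sketch of that kind of threshold branching in Python. The question wording, the 1-to-7 scale, and the mid-point threshold are made-up stand-ins for illustration, not what our actual survey software looked like:

```python
# Illustrative sketch of threshold-based survey branching.
# The scale, threshold, and question wording are invented examples.

SATISFACTION_THRESHOLD = 4  # mid-point of a 1-7 scale

def ask(prompt):
    """Show a prompt to the respondent and return the typed answer."""
    return input(prompt + " ").strip()

def rate_feature(feature_name):
    """Ask for a 1-7 rating and branch on the result."""
    score = int(ask(f"Please rate {feature_name} on a scale of one to seven:"))
    if score <= SATISFACTION_THRESHOLD:
        # At or below the threshold: drill into the dissatisfaction in more depth.
        detail = ask(f"What specifically are you unhappy with about {feature_name}?")
        return {"feature": feature_name, "score": score, "detail": detail}
    # Above the threshold: skip straight to the next block of questions.
    return {"feature": feature_name, "score": score, "detail": None}

responses = [rate_feature(name) for name in ("Feature X", "Feature Y")]

# The catch-all question every satisfaction survey ended with.
responses.append({"feature": "anything else",
                  "detail": ask("Is there anything else you'd like to see changed?")})
```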
If you’ve ever taken a market research survey phone call, you’ll note that they never tell you exactly how long it will take. This is because it could take anywhere from 30 seconds to upwards of half an hour, depending on how you answer the questions. The last survey I did for Microsoft had completion times of between 30 seconds and an hour - you were asked anywhere between one and 169 questions, depending on your answers. The average was just over 10 minutes, but we had people up near the hour.
*Market research firms tend to steer clients away from having too many open-ended questions like this in the body of the survey. You honestly don’t get great data from them. They can be helpful for the “What exactly is it that you hate” or “What exactly is it that you love” questions about specific features, but generally speaking, the broader the question, the lower the overall quality of the data is gonna be. Plus, they’re a lot more expensive for clients, because they require a whole lot more man-hours on the part of the data processing crew to convert them into useful data, and on the part of the data gathering crew to elicit answers more helpful than “It just sucks, man! It’s shit!” (“In what way does it resemble excrement, sir?”)
Because the operating system remembers the process ID of the login process and will direct Ctrl-Alt-Del only to that process. It is very difficult for another program to insert itself as the login process. When you hit Ctrl-Alt-Del, you know that it is the real login screen.
So, why not assign this functionality to some other, simpler key or key combination? They chose a key combination that they knew no program in the entire world was using (so they wouldn’t break anything), because no program could use it.
It seems like a strange choice, but they spent a lot of time thinking about it and they made the right choice. The only other option they had was to make people buy new keyboards that had a new login key on them. That would have been a stupid decision.
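For anyone who wants to see what “the operating system remembers the process ID of the login process” boils down to, here’s a toy model in Python. This is purely conceptual - Windows doesn’t literally work this way - but the principle (the special key combo is delivered only to one trusted process that ordinary programs can’t replace) is the same:

```python
# Toy model of the idea: the OS remembers exactly one trusted login process
# and delivers Ctrl-Alt-Del only to it. Conceptual sketch, not how Windows
# is actually implemented.

class ToyKernel:
    def __init__(self):
        self.trusted_login_pid = None
        self.handlers = {}

    def register_login_process(self, pid, handler):
        # Set once by the OS at startup; ordinary programs have no way to
        # overwrite this and impersonate the login screen.
        self.trusted_login_pid = pid
        self.handlers[pid] = handler

    def key_pressed(self, combo):
        if combo == "Ctrl-Alt-Del":
            # The secure key combination goes only to the trusted process,
            # so whatever prompt appears next is the real login screen.
            self.handlers[self.trusted_login_pid](combo)
        else:
            pass  # ordinary keystrokes go to whichever window has focus

kernel = ToyKernel()
kernel.register_login_process(1, lambda combo: print("Real login screen shown"))
kernel.key_pressed("Ctrl-Alt-Del")  # -> Real login screen shown
```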
A question that just popped into my mind: the questions did include the possibility of “I don’t know the feature” and “I don’t use that feature”, didn’t they? There are many people whose use of, for example, Word, is such that they could get the same results in WordPad; they use Word because that’s what they have. But they don’t even know most of the stuff Word does, because they simply don’t need it.
I’ll take **Gus Gusterson’s** word for it on the process ID thing. I never really stopped to think about why it was CTRL-ALT-DEL; on my BBC Micro it was just “break”, which seemed to work fine (well enough that I never hit it by accident, at any rate); and on my Commodore 64 it was Shift + Break, I think.
Oh, well right on then.
I did take a market research survey once, and they told me exactly how long it would take. There were no open-ended questions, though - just those “rate on a 10-point scale” ones.
Sometimes they’re possible to call - usually the ones that clock in at under 5 minutes or so and don’t ask you to do anything other than rate things on a preset scale. Mostly they don’t tell you, because quite often people just aren’t even “qualified” to take the survey at all. “Qualified” in this case meaning “a member of certain demographic groups”. For example, companies often want a specific distribution of responses over age groups, genders, household income, zip codes, education level, etc. Most often the specific distribution is “fairly even”, so as the survey gets closer to completion, categories start to fill up. I.e., “We have enough people over 65” or “We have enough women” or “We have enough people who went to college but not grad school”.
Most research companies try to avoid being specific about length of time, though - both because it’s often difficult to be specific at all and because, if you are specific, you’ll get a fairly high number of people who will go precisely the length of time you said and then hang up the phone. That leaves you with an incomplete survey, which is, for all practical purposes, useless. And it wastes everyone’s time and the company’s money.
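The qualification/quota business a couple of paragraphs up looks roughly like this in code - the categories and target counts are invented for illustration:

```python
# Rough sketch of demographic quota cells closing off as a survey fills up.
# Categories and targets are made up for illustration.

quotas = {
    "age 65+":                 {"target": 200, "filled": 200},  # "we have enough people over 65"
    "age 18-34":               {"target": 200, "filled": 143},
    "women":                   {"target": 500, "filled": 500},  # "we have enough women"
    "men":                     {"target": 500, "filled": 441},
    "college, no grad school": {"target": 300, "filled": 300},
}

def qualifies(respondent_groups):
    """A respondent only qualifies if every quota cell they fall into is still open."""
    return all(quotas[g]["filled"] < quotas[g]["target"] for g in respondent_groups)

print(qualifies(["age 18-34", "men"]))  # True  - both cells still open
print(qualifies(["age 65+", "men"]))    # False - the 65+ cell is already full
```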
Nava, every survey I ever programmed did. It wasn’t always on the list of options the responder heard, but it was always an available option. I have to assume that other sensible programmers did the same thing. Our data gathering staff were all trained to just move right along if a responder didn’t have any experience with a feature. They always had options that allowed them to skip a question, which then required them to fill in a reason why the question was skipped. Mostly, though, we tried to structure things such that only people who admitted to some knowledge got asked detailed questions about any given feature. There were a lot of logic trains that ran like “Are you aware of Feature X?” (yes/no), and if yes, “Do you use Feature X?” (yes/no), and if yes again, detailed question(s) about Feature X - with a “no” answer to either of the first two questions shunting you off to the next logic train, which followed a similar pathway.
We did it that way both because you got better data and because it was vastly less irritating for both the respondent and the person gathering the data.
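For the curious, one of those logic trains boils down to something like this (feature names and skip reasons invented for illustration, not our real tooling):

```python
# Sketch of the "logic train" pattern: awareness gates usage, usage gates the
# detailed questions, and skips get recorded with a reason.

def yes_no(prompt):
    return input(prompt + " (yes/no) ").strip().lower() == "yes"

def logic_train(feature):
    if not yes_no(f"Are you aware of {feature}?"):
        return {"feature": feature, "skipped": "not aware of feature"}
    if not yes_no(f"Do you use {feature}?"):
        return {"feature": feature, "skipped": "aware but does not use"}
    rating = int(input(f"How satisfied are you with {feature}, on a scale of one to seven? "))
    return {"feature": feature, "skipped": None, "rating": rating}

# Each feature gets its own train; a "no" anywhere shunts straight to the next one.
results = [logic_train(name) for name in ("Feature X", "Feature Y", "Feature Z")]
```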