Adjusting Exit Polls?

Of course, recently there has been much discussion of irregularities in the election. One of these is the fact that exit polls in a few states showed a Kerry lead, while Bush ended up winning those states.

One refutation of this (in Ohio, anyway, I believe) is that the early exit polls showed a Kerry lead, while the final exit poll matched fairly closely with Bush’s margin of victory in that state.

Now I come across this page:

Huh? Well, no shit; that would definitely improve the accuracy!

Now, the above link is to an article by Gregg Easterbrook on nfl.com, and the above quote actually comes from John Martin of the Washington Post, article linked here.

Now, that Washington Post link requires registration. I tried registering, but for some reason it’s not accepting it, so I haven’t read the Post article; I may be missing some context.

Wikipedia also has a page on election irregularities. One irregularity mentioned is the manipulation of exit polls, as allegedly evidenced by CNN screenshots from election night. The specific example: at 12:21 AM, the poll showed Kerry with 1022 votes in the exit poll (actually, percentages were given, implying the number 1022). At 1:41 AM, however, the poll showed Kerry with 986 votes in the exit poll, a drop which, of course, is impossible without manipulation.

Anyway, are exit polls really adjusted to fit the election? If so, what could possibly be the point of that?

I believe the Japanese call it “saving face.” In America, this is known as “Let’s fudge things a bit so we don’t look like total morons.”

I didn’t see any exit polls from the main media sources election afternoon, and I was looking for them online.

I did see a one-to-three-sentence headline on drudgereport.com sometime around 3:00 Central Time, saying Kerry was ahead in several states (I don’t recall which, but some of the tossups). The headline was a link, but the link never worked the several times I tried it. An hour or two later some info appeared saying Gore had been leading by a greater percentage at about the same time four years ago and had lost the states in question. All of this was BEFORE the polls closed in the states mentioned.

Without additional evidence, my belief is that the exit-poll “correction” opinions came about because new or young observers did have access to the midday polls (or reports of them) and didn’t grasp that the D-R percentages change depending on what time the total is taken.

Yes, I believe you are.

Remember, exit polls aren’t just used to forecast results; they’re also used for after-the-fact analysis of how the vote broke down demographically: by gender, race, eye color, attitude toward life, and so on. (“Fifty-six percent of people who thought our foreign policy toward Uzbekistan was an important issue voted for Bush . . .”)

In that context, it makes sense to adjust your results after the fact. For example, suppose the poll showed Bush leading 52-48 overall in a state, but Kerry was leading 53-47 among women. Then the real results come in, and Bush wins 54-46. So you adjust the female vote to estimate that Kerry actually won 51-49 among women.
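That adjustment is simple arithmetic. Here’s a sketch in Python, using the made-up numbers from the example above (not real poll data):

```python
# Toy sketch of the after-the-fact adjustment described above.
# All numbers are the hypothetical ones from the example, not real poll data.
poll_overall = {"Bush": 52.0, "Kerry": 48.0}   # exit poll, statewide (%)
actual = {"Bush": 54.0, "Kerry": 46.0}         # certified result (%)
poll_women = {"Bush": 47.0, "Kerry": 53.0}     # exit poll among women (%)

# Shift each subgroup by the same error the overall poll showed.
shift = {c: actual[c] - poll_overall[c] for c in actual}          # Bush +2, Kerry -2
adjusted_women = {c: poll_women[c] + shift[c] for c in poll_women}
print(adjusted_women)   # {'Bush': 49.0, 'Kerry': 51.0}
```

The assumption doing all the work is that the overall error applies uniformly to every subgroup.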

I disagree the exit polls should ever be changed.

In your example, Bush got 2% more and Kerry got 2% less than the exit poll projected. On what basis can you then adjust the male/female division? You can’t know how the sampling error should be redistributed between the genders, or among any of the subgroups.

I see your point about using the exit polls to see how the vote breaks down demographically, but I don’t see how something like this can actually be done. For example, in the above scenario, how can you conclude anything about women voters? Maybe it was the male demographic in the exit polls that skewed the results.

Also keep in mind that exit polls aren’t properly conducted polls by normal standards. It would be too much work to do true random samples across the whole area they want to cover; there’s too much geography. So they guess at which precincts will be representative and sample only in those precincts.

But that doesn’t really address the issue of whether the exit polls are adjusted after the fact and, if so, why; does it?

That’s entirely possible. We have no way of knowing, since men and women don’t vote via separately tallied ballots. And in the absence of any better information, it makes sense to adjust your sub-groups after the fact.

For better or for worse, people have an interest in reading about how sub-groups voted. (Or at least, somebody in the media thinks they do, because they publish this stuff.) And it would look a little silly to print results like the following:

a) The electorate of state A (for the sake of argument) is 50/50 male/female.
b) Bush carried men by 55-45.
c) Kerry carried women by 53-47.
d) And yet Bush carried the state (by actual result, not by poll) 54-46.

In other words, the “adjusted” exit poll may not be right about how sub-groups voted, but it’s “more right” than an unadjusted poll.

I haven’t the slightest idea why you think this.

Making the parts of the exit poll fit the final results only means the parts of the exit poll equal the final results. It says nothing about the accuracy of the parts.

If the parts sum to the final result, they’re more likely to be accurate. (That is, they have a smaller mean squared error.) Mean squared error equals bias squared plus variance. If the overall poll has a known bias (because the overall results didn’t match the election results), you correct for it to minimize the mean squared error of the segment results.
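As a sanity check on that claim, here’s a quick simulation (all numbers invented): a poll with a known 3-point bias plus sampling noise. Subtracting the bias leaves only the variance term in the mean squared error.

```python
import random

# Demonstrate "mean squared error = bias squared plus variance".
# All numbers are invented for illustration, not real polling figures.
random.seed(1)

true_share = 46.0   # candidate's actual vote share (%)
bias = 3.0          # assumed systematic overstatement in the poll
noise_sd = 1.5      # assumed sampling noise (standard deviation)

raw, corrected = [], []
for _ in range(200_000):
    poll = true_share + bias + random.gauss(0.0, noise_sd)
    raw.append(poll)
    corrected.append(poll - bias)   # subtract the known bias

def mse(xs):
    return sum((x - true_share) ** 2 for x in xs) / len(xs)

print(mse(raw))        # roughly bias^2 + variance = 9 + 2.25 = 11.25
print(mse(corrected))  # roughly the variance alone, 2.25
```

If the bias really is known, the correction cuts the error dramatically; the dispute in this thread is over whether a single election-night discrepancy tells you the bias.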

One source of error in exit polls is that people may lie about whom they voted for, either because they voted for a socially unacceptable candidate or just to be contrary. How many people were willing to say that they voted for David Duke?

That is truly an absurd concept. Changing the numbers after the fact doesn’t improve things. It’s like thinking a basketball team is too short, and therefore just adding some number to the average height to make it look better.

Also, unless I’ve missed something, there is no evidence of bias; what we know is that the exit poll differed from the actual final result. That, by itself, is not evidence of bias.

All I can do is go back to my earlier example: Suppose we have an exit poll showing men for Bush 55-45, women for Kerry 53-47, and Bush leading overall 51-49. Then, in the actual result, Bush wins 54-46. Now somebody comes to me after the fact and asks, “Given this information, what is your best estimate of what percentage of women actually voted for Kerry?” I would answer, 50% (53 minus 3). If you’d say something different, then we just disagree.

First, the word “bias” here is being used in a strict statistical sense and not in the sense of (e.g.) intentional slant by the pollsters. A biased estimator is one whose mean value is not the same as the mean value of the random variable you’re trying to estimate. If you understand this bias, you can (as Freddy the Pig says) subtract it out to lower the mean squared error in the estimate. The fact that the exit polls differed so much from the actual result is “evidence for bias” (i.e., evidence supporting the hypothesis of bias), but it’s not conclusive. It could be a statistically unlikely fluctuation; it could be that people lie to exit pollsters; and so on. The probability of large fluctuations is not too hard to estimate. Lying is a problem, especially on hot issues; I’m not sure how hard exit pollsters try to detect this.

I would change your basketball-team example slightly. Say you measured the heights of everyone on the team, and got an average of 6’0", with an estimated error of 0.5" (because your ruler only has whole inches, some of the players have big hair, etc.). But your friend tells you that he knows the average height of the team is 6’4". You might now come up with some hypotheses to explain why your error is so much larger than you expected. For example, maybe your ruler was incorrectly marked, with each “inch” about 5% too long. In this case you would rescale all of your measurements upward by 5%. Other hypotheses might cause you to change your measurements in other ways; for example, if what you thought was a yardstick turned out to be a meterstick instead, all players over 1m should have 3.37" added to their heights; those over 2m should get 6.74" added.
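The two corrections in that example are just arithmetic. A sketch with invented height readings:

```python
# The two ruler hypotheses from the example above, as arithmetic.
# The height readings are invented for illustration.
measured = [70.0, 74.0, 77.0]   # readings in "inches" off the suspect ruler

# Hypothesis 1: every "inch" mark was about 5% too long, so each reading
# understates the true height by 5% -- rescale everything upward.
rescaled = [round(h * 1.05, 2) for h in measured]

# Hypothesis 2: the "yardstick" was really a meterstick. Each full stick
# length counted as 36" was actually 39.37", so add 3.37" per full stick.
def meterstick_fix(h):
    full_sticks = int(h // 36)
    return round(h + 3.37 * full_sticks, 2)

corrected = [meterstick_fix(h) for h in measured]
print(rescaled)   # [73.5, 77.7, 80.85]
print(corrected)  # [73.37, 80.74, 83.74]
```

Note that the two hypotheses don’t just change the answer; they change *different players’ heights by different amounts*, which is the point: the right correction depends on which hypothesis is true.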

Some of these hypotheses might be testable in exit polling, but many plausible hypotheses are not easily testable after the fact. At this point I suspect that exit pollers, like pundits, just take a guess and go with it.

Freddy the Pig, thanks, I see what you’re saying. So basically what it boils down to is that certain parties demand to see how the vote breaks down demographically; the best estimate of that is this adjustment of the exit polls (which admittedly involves some guesswork). Correct?

I’m still wondering about one thing: the states in which Kerry showed an early lead in the exit polls, while the final exit poll matched closely with Bush’s margin of victory (Ohio in particular). Is this discrepancy between the early and final exit polls attributable to more people being polled as the day goes on, swinging the exit-poll numbers back to match the Bush victory, or to a manual adjustment of the exit poll after the fact? (Or maybe some of both?) (Or maybe the people taking the exit poll like to keep that information hidden?)

Yes.

The Martin article you originally cited suggests, without explicitly saying so, that both factors came into play. Certainly I have no personal knowledge as to what was going on with any particular poll at any particular time.

I think that’s a fair statement.