I also believe saying this is motivational. Too many people in Michigan thought Hillary was far enough ahead in 2016 so they didn’t need to vote. Pointing out that Harris is down in Michigan may light a fire under some asses.
I mean, I get so many texts from Dems and their campaign operations screaming how far behind they are in the polls and/or fundraising. Claiming to be behind is a strategy.
Just reporting that 538 is now down to 55% of model runs favoring Harris. It seems the model needed about five days to fully digest that Times/Siena poll with low numbers in several battlegrounds, and it didn’t quite reach the 54% low I’d predicted, as other (lesser-quality) polls have tended to include slightly better news.
Minutiae. Point is, we’re in a relatively stable period, and it’s so close that, if there’s a (by definition undetectable) polling error within historically normal range, a lot of things are possible come Election Day. The whiff of chaos emanating from the hurricane and from the Middle East feeds into the exaggerated narrative and “feelings” (meaning: fear of so many things, including – gasp – other languages) among the Trumpers. I honestly don’t envision polling getting any better for Harris over the next month (maybe she’ll start to “pull ahead” in one of the seven swing states, but even that is doubtful).
Scroll down to the graph under “How the popular vote translates into electoral votes.” This graph vividly makes your point – “2016” and “2020” are just two stars (literally) in an entire galaxy of possible outcomes.
Yet, there is structure to the galaxy. Some outcomes are more likely than others. But we can’t really test this quite the way we can in much physical and even social science, by holding a few hundred real-life outcomes in reserve and seeing if their average and spread resemble the model’s. We just don’t have enough real-life outcomes, and we likely never will.
Lying would risk a leak saying that Slotkin makes stuff up.
I think there really is an outlier private poll, in Michigan, saying that Harris is behind.
As for Slotkin’s motive, she said this at a fundraiser. The motive was to help contributors feel they are insiders by giving them semi-secret campaign info.
Campaigns conduct many internal polls, and most of them stay that way - internal. They release the results publicly only rarely, and when they do, it’s with a specific goal and to support a specific narrative. That might be to help with fundraising (we’re behind!), create enthusiasm (we’re ahead!), or something else.
If you’re external to the campaign, these polls are completely worthless and should be ignored. Beyond the fact that the methodology might be geared toward getting a desired result, you also have no idea how many polls gave a different result.
On the latest 538 podcast, they make a good (if obvious) point: don’t put too much faith in any polls in Georgia or North Carolina fielded roughly between September 30 and October 9, due to hurricane recovery distractions and the difficulty of reaching a representative sample (including because of downed cell phone towers).
The issue highlighted here is using “weighted recall”:
While the differences between the two sets of polls are relatively small, they add up to two different stories of the election. The polls that don’t weight on past vote tend to show results that align more closely with the result of the 2022 midterm election than the last presidential election. They also show that Mr. Trump’s advantage in the Electoral College, with respect to the popular vote, has dwindled over the last four years. The polls weighted on past vote, on the other hand, show little more than a 2020 repeat.
In a way, the problem is reminiscent of something called “herding,” in which pollsters tweak their surveys to bring their results into alignment with the average of other polls. Like herding, the decision to weight on past vote is often a reflection of how pollsters feel about the quality of their underlying data. In this case, however, pollsters aren’t necessarily herding toward what other pollsters say. Instead, they’re essentially herding toward the result of the last presidential election.
To be clear, the differences are relatively small. But the explanation of the thought process and of the methods is interesting.
It is an interesting thought experiment. If I call 100,000 people and get 1,000 of them to answer my survey, and 450 of them claim they voted for Trump and 550 claim they voted for Biden, in a state that was 50%-48% (Penn.) in 2020, what am I to do? (Note, this is after weighting for region and demographics).
On the one hand, you can pretty easily argue something is wrong with my sample. I know 48% of the voters voted for Trump so why is only 45% of my sample admitting it?
One possibility is that they are lying (or misremembering) because they are no longer Trump supporters or they want to say they voted for the winner. This would obviously be good news for Harris and would mean I shouldn’t weight my sample based on recalled vote. It would mean that some of those “Biden-Trump” voters that show up in the crosstabs are really “Trump-Trump” voters.
But the other possibility is that reaching 2020 Trump voters is just harder than reaching 2020 Biden voters. That the missing 3% is still out there and there is nothing I can do to get them to answer the survey at the “correct” rate. If that’s the story then I better weight on recalled vote. Unless the difficulty in getting them to respond to surveys somehow indicates a meaningful reduction in their likelihood of actually voting in 2024 - but that certainly wasn’t the case in 2020.
I don’t think there is any way to know, a priori, which one of these hypotheses is correct. I’m not sure I even have an opinion on it, but if I had to bet I’d say it’s probably the second (Trump 2020 voters are just harder to get to answer surveys). I just don’t know any Trump 2020 voters who have stopped supporting him, even after everything that has happened. The few that did after J6 have come right back. If I were a pollster and had to stake my professional reputation on accuracy in 2024, I would probably weight on recalled vote just to cover my ass and hedge against a repeat of the 2020 misses.
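To make the weighting mechanics concrete, here’s a minimal sketch in Python using the made-up Pennsylvania numbers from the post above (450 recalled Trump / 550 recalled Biden in a 1,000-person sample, against a roughly 50%–48% actual result). The 2024 vote-intention splits at the end are also invented, purely to show how the topline moves:

```python
# Toy illustration of weighting on recalled 2020 vote (all numbers hypothetical).
sample = {"recalled_trump": 450, "recalled_biden": 550}
n = sum(sample.values())

# Target shares among the two-party recalled vote, taken from the actual 2020
# result (roughly Biden 50, Trump 48, normalized to sum to 1).
target = {"recalled_trump": 48 / 98, "recalled_biden": 50 / 98}

# Each respondent in a recall group gets weight = target share / sample share.
weights = {g: target[g] * n / sample[g] for g in sample}
print(weights)  # recalled_trump ~1.09, recalled_biden ~0.93

# Suppose (made up) recalled-Trump respondents break 90/10 for Trump in 2024
# and recalled-Biden respondents break 10/90 for Harris.
unweighted_trump = 0.90 * sample["recalled_trump"] + 0.10 * sample["recalled_biden"]
weighted_trump = (0.90 * sample["recalled_trump"] * weights["recalled_trump"]
                  + 0.10 * sample["recalled_biden"] * weights["recalled_biden"])
print(f"Trump share, unweighted: {unweighted_trump / n:.1%}")  # ~46.0%
print(f"Trump share, weighted:   {weighted_trump / n:.1%}")    # ~49.2%
```

Pulling the recalled-vote split back toward the 2020 result, run through respondents who mostly stick with their 2020 candidate, moves the topline by about three points here, which is exactly the kind of gap behind the “two different stories” Cohn describes.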
This keeps coming up and I’m not sure that it is clear.
Margin of Error refers to sampling error – error introduced by sampling a subset of the population. It’s the error you get because you only asked 1k people from a population of 1M. This is represented by a percent error and confidence interval.
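For concreteness, the usual back-of-the-envelope formula for that sampling margin of error is z·sqrt(p(1−p)/n); a quick sketch using the 1k-of-1M example above:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Sampling margin of error for a proportion p with n respondents,
    at the confidence level implied by z (1.96 is roughly 95%)."""
    return z * math.sqrt(p * (1 - p) / n)

# 1,000 respondents, worst case p = 0.5 (the ~1M population size barely matters):
print(f"{margin_of_error(0.5, 1_000):.1%}")   # ~3.1%
print(f"{margin_of_error(0.5, 4_000):.1%}")   # ~1.5%; quadrupling n only halves it
```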
There are many other sources of error in addition to the Margin of Error:
Frame (Coverage) Error: The sampled population does not reflect the target population.
Non-Response Error: Some of the population can’t (or won’t) answer the poll.
Analysis Error: The pollster post-processes the results, including re-weighting the results based on which sub-groups actually responded.
Actual error can be quite a bit higher than the MoE. For a race this close, polling can’t predict the result, although it can track overall changes in opinion.
I agree with the comments up-thread that there is simply not enough relevant data to accurately tune a model of the problem. It might be easier to model factors that correlate with election results, but you still have the problem of tying the two models together.
Well yes, for one sample of course. But apparently this is a repeated pattern; enough so that many (most?) pollsters are now re-weighting for it. If every single poll you take has 3% (for example) fewer respondents claiming to have voted for Trump than you know actually did in the voting population you are sampling, it’s probably worth considering why that is. I don’t think it’s just “margin of error” unless the error is random.
This happens with other factors too. I believe most political polling samples get more responses from women than men, so pollsters reweight the sample to match what they expect the demographics of the actual voting population to be. Same with age and racial characteristics. And I think some do it for party ID (but some don’t).
I’m not sure that treating “Trump voter” as a demographic characteristic makes sense (and I get why it could distort your results), but I can also understand the argument for it, if you think there is a correlation between “willingness to answer my survey” and “voted for Trump in 2020”. And it also makes a pretty good hedge for companies that missed badly in the Biden direction in 2020.
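For anyone curious what “reweighting the sample to balance” actually looks like in practice, here’s a rough sketch of iterative proportional fitting (“raking”), a common way to balance on several characteristics at once. Sex and recalled 2020 vote are used as the two dimensions here, and every number is invented for illustration:

```python
# Rough sketch of raking (iterative proportional fitting). All numbers invented.
# Each respondent is a dict of characteristics; everyone starts at weight 1.0.
respondents = [
    {"sex": "F", "recall_2020": "Biden"},
    {"sex": "F", "recall_2020": "Biden"},
    {"sex": "F", "recall_2020": "Trump"},
    {"sex": "M", "recall_2020": "Biden"},
    {"sex": "M", "recall_2020": "Trump"},
]
weights = [1.0] * len(respondents)

# Target population shares for each characteristic (made up).
targets = {
    "sex": {"F": 0.52, "M": 0.48},
    "recall_2020": {"Biden": 0.51, "Trump": 0.49},
}

for _ in range(50):  # a fixed number of passes is plenty for this toy example
    for var, shares in targets.items():
        total = sum(weights)
        for level, share in shares.items():
            current = sum(w for w, r in zip(weights, respondents) if r[var] == level)
            factor = share * total / current
            weights = [w * factor if r[var] == level else w
                       for w, r in zip(weights, respondents)]

print([round(w, 2) for w in weights])
```

Adding recalled 2020 vote to the `targets` dict is, mechanically, all it takes to treat it like any other demographic; the whole argument is over whether that target is the right thing to aim at.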
Perhaps your missing voters are just dead. Republican voters tend to be older than Democratic voters so between 2020 and 2024 more of them are likely to die off. Then there’s the COVID factor. From my understanding it hit Republicans harder than Democrats because Republicans were less likely to mask, social distance, and get vaccinated.
True. Did that affect this poll?
In any case, forget Florida for Harris. Not happening. Seriously — I’m not doom-and-gloomingly overreacting to a single poll. She isn’t going to win Florida — no way, nohow.
(Gift link to Nate Cohn’s analysis of a new Times/Siena poll in Florida).
Florida is going to be a black hole in a lot of other ways for a few weeks, but that’s a topic for another thread.
(In case anyone was unaware)
I wonder if it will mess up Florida enough to impact the election. If so, it could go either way, depending on exactly where it hits. But it probably helps Trump, because I suspect cities are more vulnerable to being shut down or otherwise messed up for weeks.
That’s fair. The null hypothesis is that there really are fewer folks in the population we are sampling who recall voting for Trump in 2020 than his actual 2020 vote share would suggest. And some pollsters are accepting that and rolling with it. Others are rejecting it (likely burned by 2016 and 2020) and are weighting. We may, or may not, know which approach was right in a few weeks.