H+1 with an MOE of +/-3 is within range of the aggregate. Throw it in the mix. The race per polling is close and has been close. The aggregate is H+2.6. Polling results are unlikely to move beyond toss-up range, and Harris will likely stay give or take around here, with statistical noise along the way.
Being nervous is rational right now but not more or less than it has been.
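For the curious, here's the back-of-the-envelope version of that range check as a small Python sketch. The numbers are the ones above; treating the reported +/-3 MOE as applying to the margin itself is a simplifying assumption (strictly, the MOE applies to each candidate's share, so the band on the margin is wider).

```python
# Minimal sketch of the range check above. Simplifying assumption: the
# reported +/-3 MOE applies directly to the Harris-minus-Trump margin.
poll_margin = 1.0   # H+1: this poll's Harris lead, in points
poll_moe = 3.0      # reported margin of error
aggregate = 2.6     # H+2.6: the polling-average lead

lo, hi = poll_margin - poll_moe, poll_margin + poll_moe
print(f"poll range: [{lo:+.1f}, {hi:+.1f}], aggregate: {aggregate:+.1f}")
print("consistent with the aggregate" if lo <= aggregate <= hi else "a real outlier")
```

It prints a range of [-2.0, +4.0], which comfortably contains the +2.6 aggregate. Hence: throw it in the mix.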
But seriously – and succinctly: no one, two, or three polls are the Magic Oracle That Tells All. One rough poll doesn’t erase ten good ones. And it hasn’t been an even split of good polls and bad polls for Harris, especially at the national level. The good polls far outnumber the bad polls.
Further, CNN’s and NYT/Siena’s 2024 polling oversamples conservative voters in an attempt to accurately account for partisan enthusiasm. Having missed low on Trump’s popular-vote percentages in 2016 and 2020 looms large in some of these poll-sampling decisions.
Lastly, for the house’s consideration, from the Comments on the Substack link above:
Simon Rosenberg - “The NYT can throw a 10 pt swing towards Trump in AZ while every other poll on the planet is showing movement towards Harris and somehow that is OK. It’s not ok. It’s impossible, reckless, ridiculous and the poll never should have been released.”
Travis: “I’m just as much a vibes guy as a data guy so I want to point out something that interested me, which is the breakdown of online polls vs live calls. The NYT/Siena is exclusively live call, as is Quinnipiac. CNN is a combo of live and online. At this point, I feel like some of the fuzz in the national polls we’re seeing is about methodology - I just don’t trust the consistency of live call polling because I never pick up the phone myself unless it’s ID’d. I think this could be a reason why we’re seeing those crazy swings in NYT polls and to me that bodes well for our side and our consistent edge in the other polls.” Link below to the relevant post:
Is anyone familiar with online polls who can speak to how accurate they might be? I’ve always been dubious about online polls, since it seems like it would be so easy to spoof or flood them. How do they target people to take them? Is it through random popups on websites? I’d be dubious of any kind of email, text, or popup asking me to take a political survey, for the same reason I wouldn’t answer one over the phone.
Generally, online respondents actively opt in to take the surveys. Definitely not recruited via random pop-ups and such. If you Google “how to join an online panel”, you will find a plethora of online-panel generation companies looking for you.
Probability-based panel: This refers to a national survey panel recruited using random sampling from a database that includes most people in the population. Today, most such panels in the United States recruit by drawing random samples of residential addresses or telephone numbers. Typically, data collection with these panels is done online. However, some of these panels interview a small fraction of respondents (usually about 5% or fewer) using an offline mode such as live telephone. These panels are “probability-based” because the chance that each address or phone number is selected is known. However, the chance that each selected person will join the panel or take surveys after joining is not known.
Online opt-in sample: These samples are recruited using a variety of methods that are sometimes referred to as “convenience sampling.” Respondents are not selected randomly from the population but are recruited from a variety of online sources such as ads on social media or search engines, websites offering rewards in exchange for survey participation, or self-enrollment in an opt-in panel. Some opt-in samples are sourced from a panel (or multiple panels), while others rely on intercept techniques where respondents are invited to take a one-off survey.
With response rates for telephone polls decreasing dramatically as costs skyrocket, many organizations have decided that polling needs to move online, where contacting people is cheaper. The easiest and cheapest way to conduct online polling is to use a nonprobability-based online panel. Polls using this sampling approach are numerous and popular but risk bias as a result of surveying only that subset of the population who are online and oversampling those who are heavier internet users. These relatively inexpensive polls are based on large panels of respondents who agree to answer surveys, usually in return for small rewards like points that can be exchanged for gift certificates. Panelists can be recruited through email lists, online ads, or other methods. Samples for specific polls are often built using quotas for different demographic groups, and weighting is used to try to make samples representative of the target population.
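To make the quota-and-weighting step concrete, here's a toy post-stratification sketch in Python. Every share below is invented for illustration; real pollsters weight on many more variables, often by iterative raking rather than a single pass.

```python
# Toy post-stratification: weight each demographic cell so the weighted
# sample matches assumed population shares. All numbers are invented.
population_share = {"18-34": 0.30, "35-64": 0.50, "65+": 0.20}
sample_share     = {"18-34": 0.15, "35-64": 0.45, "65+": 0.40}  # online-heavy skew

weights = {g: population_share[g] / sample_share[g] for g in population_share}
print(weights)  # {'18-34': 2.0, '35-64': 1.11..., '65+': 0.5}
# Over-represented heavy internet users get down-weighted; scarce groups get
# up-weighted, which also inflates the variance of the final estimate.
```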
Researchers who want to retain the advantages of probability-based sampling have a few online options. Online probability-panel polls – which are newer, less common, and more expensive than nonprobability online polls – use traditional probability-based samples, like address-based sampling (ABS), to make the first contact with a respondent or household. Those people who do not have web access are provided it. A large number of respondents are selected to be part of the panel, and then random selections are made within the panel, or within subpopulations of the panel, to invite respondents to answer particular surveys.
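The practical payoff of "the chance that each address or phone number is selected is known" is that probability panels can assign exact base weights before any modeling. A hypothetical sketch:

```python
# Hypothetical: in a probability-based panel each sampled address has a known
# selection probability p, so its base design weight is simply 1/p.
selection_prob = [0.0004, 0.0004, 0.0002]   # invented per-address probabilities
base_weight = [1.0 / p for p in selection_prob]
print(base_weight)   # [2500.0, 2500.0, 5000.0]
# The address drawn with half the chance stands in for twice as many people.
# Opt-in panels have no such p, which is why they lean on quotas and weighting.
```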
Seriously, who knows what the right sampling and the right corrections are? Once again, with feeling: there is highly likely to be some degree of systematic error, but guessing which way, and how much within that +/-4, has little basis.
Thanks for the info. I could see how the probability-based panels could deliver reliable results, especially if the pollsters already have a history of panelists’ survey answers.
Harris had a lead of six percentage points based on unrounded figures - which showed her with support from 46.61% of registered voters while Trump was backed by 40.48%, according to the three-day poll that closed on Monday. The Democrat’s lead was slightly higher than her five-point advantage over Trump in a Sept 11-12 Reuters/Ipsos poll.
Should also just go into the mix and not be believed by itself either.
Slightly ahead in swing states is still slightly ahead, as long as the polling is right. Being ahead by 4 points is a popular-vote number, and the swing-state margins are probably a bit less than that this time around.
A general reminder that there remains an awful lot of right-aligned polling in the polling averages, and folks should continue to be skeptical, given what happened in 2022, of any right-leaning poll or pollster. It’s why I am not a big fan of polling averages. They were gamed by a flood of right-wing polls in 2022, and many of those pollsters remain in the averages today …
No state has seen more “red wave pollsters” in recent weeks than North Carolina. The majority of the recent, public, independent polls taken there show Harris tied or ahead. There have now been 7 - yes, 7 - right-wing-aligned polls fielded there in recent weeks, and of course ALL OF THEM show Trump ahead. The polling average in North Carolina would favor Harris if we removed the flood of these right-wing, narrative-shaping polls … The game here, obviously, is to not let it appear that Mark Robinson’s meltdown is causing NC to slip away.
It appears that their goal right now is to create two basic narratives about the election - that MT and OH (in the Senate - b) are slipping away from Dems, and that NC and PA are in play (in the presidential election - b) and Trump still has a shot at 270 (for if those states go to Harris, he has no path). After NC, PA has seen the second-highest number of these red-wave polls in recent weeks.
Yeah, there’s the narrative that poll aggregation forecast a ‘22 Red Wave in a complete whiff. Thing is, places like 538 were saying no such thing. They had the Senate barely favoring the GOP, with a mean prediction of 50.9 GOP seats, and the House at 229 GOP seats - not very far off from the 49 and 222 the GOP actually won, and hardly predicting a wave.
Silver used to “correct” polls for bias; does he, or 538, not do that anymore?
I don’t think it’s worth quibbling about whether or not those are Red Wave forecasts. But both were forecasting more Republicans than were actually elected.
You mean … omigosh! Polls did not call every race right?!!!
It is not a quibble to dispute the false narrative of a predicted Red wave, especially when it is being used to imply that polls are predictably skewed in a direction.
Reality check, as linked a bit above: polls almost always systematically err one way or the other - by 5.4% on average in Senate races and 6.1% in House races. Both directions occur, apparently at random.
‘22 was actually a better-performing year than usual.
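A quick illustration of why a 5-6 point average error tells you nothing about direction, using invented race-level numbers:

```python
# Invented race-level polling errors (poll margin minus result, in points).
errors = [+5.2, -6.1, +4.8, -5.5, +6.3, -5.0]

signed = sum(errors) / len(errors)                     # net direction of the miss
absolute = sum(abs(e) for e in errors) / len(errors)   # typical size of the miss
print(f"mean signed error: {signed:+.2f}  mean absolute error: {absolute:.2f}")
# -> mean signed error: -0.05  mean absolute error: 5.48
# Misses of 5-6 points can net out near zero, so knowing the average size of
# the error tells you nothing about which side it will land on in a given year.
```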
The fact they shitcanned Roe changed things. It might also have a big effect this year. Florida is within the margin of error, and Abortion (and pot) is on the ballot.
Apparently those female voters did not show up much in the polls.
I think the disagreement here, on whether the polls did well in 2022, has to do with whether you measure by the percent difference of polled vs. actual, or by whether the pollsters called races correctly.
Personally, I think the only fair and reasonable measure is percent off. But I can see why someone else is just as sure that the only fair and reasonable measure is correctly calling races.
To put this concretely, if the final polling average says that Jon Tester is going to lose by 2 points, and he wins a recounted squeaker, I think that will have been a polling triumph (as well as being good for the country).
Arguing about what to call the 2022 skew does not help us understand poll skews. What is useful is pointing out the problem with using that skew to say anything about the 2024 elections.
I think it’s useful to look at it in multiple ways. If different analysis approaches lead to similar conclusions, then we can have greater confidence in each of them.
Well, it should be obvious that what RCP is doing and what 538 (and Nate Silver at his Substack) is doing are very different. Picking an arbitrary set of polls and calling them an “average” is fine, but it lends itself to being astroturfed by biased polls. Which is exactly what Simon Rosenberg is calling out.
But 538 and Silver both include all polls, and just weight them based on some standard of quality (and perhaps adjust them based on known house effects). This removes nearly everything that Rosenberg complains about, and in fact 538’s averages were pretty damn good in 2022.
Both of those aggregators are saying basically the same thing - this election is very close and either Trump or Harris could very easily win it purely based on the statistical errors in the polls (no underlying “pollster error” required).
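For concreteness, here's a bare-bones sketch of that weight-and-adjust idea - emphatically not 538's or Silver's actual model, and every number in it is invented:

```python
# Toy aggregation: each poll carries a quality weight, and a known house
# effect (in points, positive = leans Trump) is subtracted before averaging.
polls = [
    # (Harris margin, quality weight, house effect) -- all numbers invented
    (+3.0, 1.0,  0.0),
    (+1.0, 0.8, -0.5),
    (-2.0, 0.4, +2.5),   # low-quality poll from a right-leaning shop
]

num = sum(w * (m - h) for m, w, h in polls)
den = sum(w for _, w, _ in polls)
print(f"adjusted weighted average: H{num / den:+.2f}")   # -> H+1.09
```

The low-quality, right-leaning poll still counts, but its weight and house-effect adjustment keep it from dragging the average around - which is the part a raw RCP-style average skips.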
And I’ll dispute that: “calling races right” is a poor metric, even though it is what we care about.
If races individually are not very close, then crappy polling aggregation can call most of them “right” in terms of who wins, even if it is off by 6%. If races are all very close, then highly accurate polling aggregation, within 1 to 2% of the results, can be “wrong” on a large fraction of them. The latter is still unquestionably a better job of polling.
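That trade-off is easy to simulate. A sketch with invented error sizes, not a claim about any real aggregator:

```python
# Simulate "percent off" vs "called right" for two kinds of race environments.
import random
random.seed(0)

def share_called_right(true_margins, typical_error):
    """Fraction of races where a noisy poll picks the actual winner."""
    hits = 0
    for true in true_margins:
        polled = true + random.gauss(0, typical_error)
        hits += (polled > 0) == (true > 0)
    return hits / len(true_margins)

blowouts  = [random.choice([-1, 1]) * random.uniform(8, 20) for _ in range(10_000)]
squeakers = [random.choice([-1, 1]) * random.uniform(0, 2)  for _ in range(10_000)]

print("6-pt-error aggregator, blowout races:", share_called_right(blowouts, 6.0))
print("1.5-pt-error aggregator, close races:", share_called_right(squeakers, 1.5))
```

On a typical run, the sloppy aggregator “calls” roughly 97% of the blowouts right while the accurate one misses around a quarter of the squeakers - Jon Tester’s hypothetical recounted win above being exactly that kind of “miss.”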
Actually reading the bit I linked to about how relatively well the polls did in ‘22, what strikes me most is that the errors were not systematic - not in consistent directions. That seems atypical.
Right. DSeid and PhillyGuy rightly speak to an unresolvable tension between what we wish polls could do (foretell who will win a race) and what they actually do (offer a current, necessarily imperfect sample of voter intentions).
This annoying discrepancy is minimized when polling highly favors one candidate, and it’s underscored when the race is close.
(I’m repeating what others have said, just in a slightly different way).