Can somebody explain this phenomenon to me? We’ve always heard of the infamous “Gallup Poll”, and for the past few years we’ve been hearing about the “Rasmussen” poll, and lately there have been several others that seem to have come out of nowhere. So what’s the deal with these things? Who runs them and why? We all know that polls can be manipulated to give the results the pollster wants. (Penn & Teller did a cool show about that.) So why are they so often quoted by news broadcasts as valid assessments of public opinion?
I’ve never seen a Mr. Gallup, but I have seen this Rasmussen guy on TV. Just who the hell is he? Where did he come from? Why is he doing polls? It must cost him a great deal of money to conduct these polls. Is he doing this out of the kindness of his heart for the sake of humanity? Or is he making money from this endeavor? And how?
Rasmussen is Scott Rasmussen, one of the founders, along with his father, of ESPN. He’s president of a company called Rasmussen Reports, which is the company that does the polling. And of course he makes money on it. None of the polling groups do polls out of the kindness of their hearts. They’re hired by various organizations to do them.
The reason you’ve never seen Mr. Gallup is that he died in 1984. Before he died, however, he founded (in 1958, as a matter of fact) the Gallup Organization, which, like Rasmussen Reports, is hired by various organizations to do polls.
George Gallup was the founder of the Gallup organization. He held a Ph.D. (in journalism, apparently) and taught journalism at several universities. He then entered the marketing field, and developed methods of population sampling for the purpose of measuring public opinion. The Gallup Organization does not perform polls sponsored by special interest groups or political parties.
Scott Rasmussen is the founder of Rasmussen Research (which he sold) and Rasmussen Reports (of which he is still president). He holds an MBA, is a co-founder of ESPN, and has other connections with broadcasting. Rasmussen polls are accused of swinging a bit conservative, but generally seem to be more accurate than other polls due to polling only “likely voters.” Rasmussen uses automated telephone polling instead of conventional operator-assisted polling, which makes the polls very fast and inexpensive. Rasmussen polls are commissioned by many organizations.
All polling organizations make money from commissioned polls, mostly for marketing purposes. The political polling can be seen as a marketing expense, as accurate publicly available polls enhance the organization’s reputation for polling of all kinds.
Okay, so Gallup and Rasmussen are actually marketing enterprises that use public awareness to enhance their own marketability. Pretty slick. How come you never hear about their profit-making endeavours? Full disclosure?
Is that necessary? Do people generally assume organizations do expensive, long-term things out of the goodness of their hearts?
The pollsters are still corporations, just ones specialized in offering information as a service. The news industry does the same – it makes money on public trust. “Full disclosure” in this case seems like something common sense should be able to provide.
ETA: Some news orgs do their own polls, too, with varying degrees of sophistication.
How does Gallup make a profit then? Do they just make up polls, and sell the data afterwards? Or do they only do polls for news media, which are assumed to be neutral?
My annual work evaluation is done in part on the basis of a Gallup poll. If my co-workers and I fill out the questions right, we are deemed properly indoctrinated and therefore worthy of potential good evaluations and perhaps raises. The company pays for the Gallup organization to administer the poll.
Your two questions sort of answer each other. The results of a poll are a lot more trustworthy if the poll was conducted by an independent organization that knows (because that’s their business and they have expertise at it) how to do reliable statistical sampling and make valid assessments of public opinion, than if they were conducted by an organization without such expertise and/or with an axe to grind.
Most polling organizations are commissioned to do a poll. The small simple polls may be commissioned by a news organization to liven things up. Political parties and other groups (issue pressure groups, for example) may commission polls to help lobby for their viewpoint. If the polls are beneficial to their point of view, they may issue the results as a press release.
Often, news companies want to track the fortunes of various political groups. We see more of that in Canada, where the answer to “which party will you vote for?” gives a big indication as to who will form the government if an election happens. You can be sure the political parties have their own in-depth polls to determine what turns voters off and on to a particular point of view. Here in Canada, one of the major polling companies (or was it two of them?) got their start as consultants to a political party.
A good (honest, open) polling result also states who paid for the poll and more importantly, the exact wording of the question. Yes, polls, like other statistics, can be tortured to say whatever you want to hear.
There’s more to polling than just dialing at random. The key is to get a representative sample. A company that is consistently wrong will eventually find no more business. The famous “Dewey Defeats Truman” polling was allegedly biased by the fact that it was the first telephone poll; Truman’s main base was rural America, where not as many people had telephones yet. The margin of error is determined by the size of the sample, using some obscure calculation I learned in stats class and have long since forgotten. The bigger the sample, the more accurate, and the more expensive.
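For what it’s worth, that “obscure calculation” is most likely the standard margin-of-error formula for a simple random sample: z * sqrt(p(1-p)/n), where n is the sample size, p the measured proportion (0.5 is the worst case, so it’s the usual default), and z ≈ 1.96 for a 95% confidence level. A quick sketch in Python, just to show how sample size drives the error bar (the sample sizes are illustrative, not from any actual poll):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Half-width of a confidence interval for a simple random sample.

    n: sample size
    p: assumed proportion (0.5 maximizes p*(1-p), hence the usual default)
    z: z-score for the confidence level (1.96 is roughly 95%)
    """
    return z * math.sqrt(p * (1 - p) / n)

for n in (600, 1000, 1400, 2000):
    print(f"n = {n:>4}: +/- {margin_of_error(n):.1%}")
```

That prints roughly ±4.0%, ±3.1%, ±2.6%, and ±2.2%. Note that quadrupling the sample only halves the error, which is exactly why bigger samples get expensive fast.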
(There’s the classic Dilbert cartoon - “We conducted a poll about our product, calling a sample of people who still had land lines and listed numbers. 65% replied ‘Fiddlesticks!’ and 35% thought we were in the room with them and offered us hard candy.”)
One of the things about polls is that the margin of error is very often a significant fraction of the difference between the opinions being measured. The specific way any question is worded will often give widely divergent results, as will the method of delivery of the poll and the self-selection of those polled. If you are new at the process, you are going to be pretty much in the dark about how your poll will turn out.
But if you have been doing polls as a corporation for a couple of decades, you have a database to compare and analyze. You can provide a poll for a customer that appears entirely objective and fair, with no obvious slant, and by adjusting the factors you have learned matter, pretty much have it come out however you want. If you are polling products for the marketplace, you want actual reliability of prediction, since repeat business will be your best bet as a company. If you are polling for social, political, or delusional customers, accuracy and predictive reliability are less important, the market is wider, and repeat business does not depend on prior accuracy.
While lots of folks have heard of Gallup, and a fair number of Rasmussen, there are hundreds of smaller companies in the business, and dozens of large corporate interests with their own polling divisions. That segment of the statistical industry has less vested interest in accuracy as you think of it, and more interest in having the poll come back the way the sponsor predicted.
Tris
For many years CBS News had a polling unit, but I believe they shut it down or at least merged it with another company. For a while they did joint polls with the NY Times.
Most major polling outfits do market-research constantly – by that I mean they’re always polling someone about something. It’s really mundane: GM wants to know what you think of the Chevy Malibu, Wal-Mart wants to know the last time you recall hearing about them on the news. Like any market research firm, they aren’t going to directly divulge the names of their clients or the particular questions they ask on their behalf.
On the other hand, they’re pretty straightforward about their business mission.
The big consumer research organizations have huge databases of people divided into any demographic group you can think of, and, just as importantly, they know how to design a poll and its questions so that the answers reflect the respondent’s actual opinion rather than the way the question happens to be phrased.
The political polling is mostly a sideline for them, but it gets them good publicity and it allows them to show off the accuracy of their results when pitching their bread and butter market research.
Not really. It was the Literary Digest Poll in 1936 that failed spectacularly, saying that Alf Landon was going to defeat FDR, when FDR took all but two states in one of the biggest landslides in election history.
By 1948, pollsters knew enough to avoid that sort of bias. The reason why the polling that year was wrong is unclear; I’ve heard reports that when people did exit polls, the numbers still showed Dewey winning. Evidently, a lot of people voted for Truman who wouldn’t admit to it.
Also “Dewey Defeats Truman” was a headline, not a poll. The Chicago Tribune printed it, but the paper went to bed before all the results were in. Their editors made assumptions about what would happen (IIRC, Truman was actually leading at the time, or at least closer than people thought). I also believe that the pollsters stopped polling quite some time before election day, so they didn’t note a last-minute surge toward Truman.
When I say “Dewey Defeats Truman” you know exactly what I’m referring to, right down to the photo, so the name worked.
I don’t know either but… I recall an article about polling and bias that mentioned the urban/rural split in the 1948 election and the fact that the polling indicated a Dewey win. They also mentioned that Truman IIRC did a lot of the back-of-a-train speechifying that gave him greater exposure in small rural towns, and that the major polling firm at the time included a large amount of telephone polling data for the first time. Even in 1948, private telephone ownership had a decided rich/urban tilt.
Part of it may have been wishful thinking by the powers that be. Another thing I recall reading was that the Republicans controlled congress and were so sure Dewey would win they budgeted a large amount for the inauguration celebrations. Truman made sure he took advantage of that.
Gallup does polling for news outlets. You’ll often see something mentioned as a “Gallup/Wall Street Journal” poll (or substitute some other outlet for the WSJ).
I know Rasmussen sells access to their crosstabs, allowing those interested (interest groups, policy makers) to find out how many of that 41% who approve of farm subsidy reform consider themselves to be Democrats or are between 35-60 years of age. I wouldn’t be surprised to learn that Gallup offers a similar service.
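For anyone curious what “crosstabs” actually look like, the idea is just slicing the topline number by a demographic variable. Here’s a minimal sketch in Python with pandas, using made-up respondent data (the column names and values are hypothetical, not Rasmussen’s actual format):

```python
import pandas as pd

# Hypothetical respondent-level records; a real pollster's file is the
# same shape, just with thousands of rows plus weighting columns.
respondents = pd.DataFrame({
    "approves_reform": ["yes", "no", "yes", "yes", "no", "yes"],
    "party":           ["Dem", "GOP", "Dem", "Ind", "Dem", "GOP"],
})

# A crosstab breaks the overall approval number down by party;
# normalize="columns" turns raw counts into within-party shares.
print(pd.crosstab(respondents["approves_reform"],
                  respondents["party"],
                  normalize="columns"))
```

The overall “yes” share is the topline you see reported in the news; the breakdown by party or age bracket is the part you pay extra for.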
Aha…
Perhaps all these are repeating the same inaccurate meme that came from one source?
http://www.csudh.edu/dearhabermas/sampling01.htm
http://thetruefacts.blogharbor.com/blog/_archives/2009/5/31/4206181.html
Someone repeats the claim in the comments here:
This article suggests “quota sampling” as the error but does not mention the phone issue:
And this one blames stopping the polls too early, and pins the 1936 error on phone polling
So perhaps sloppy writing confuses the two poll problems?
So it would be interesting to find an article that analyzes the issue in more depth, instead of just repeating earlier claims that may or may not be true.
Harris Poll is another organisation which operates here. In recent years others have appeared, such as YouGov.
Some countries ban polling in an election campaign, or ban publication of the results in the last stages of the campaign.
I can share some insight into some of this. During the late '90s I worked, for a shortish time, at a survey research company. We didn’t do the actual calling, which was farmed out to a phone bank; we did the brainwork, which is survey design and results analysis. It was fairly interesting work (or would have been in a somewhat different office atmosphere).
We did two main types of survey. In one type, our company would be hired by an organization to design and conduct a poll for them. There was a fairly large magazine publishing company nearby (you’ve heard of some of their magazines), and we did monthly surveys for several of their products; we weren’t looking to skew the answers in any direction, but rather to find out how their subscribers related to the publication.
For example, we (“we” means the phone bank, here) would call, IIRC, 600 subscribers to a certain magazine devoted to housekeeping, cooking, decorating, gardening, and other domestic stuff. We would ask if they’d received and read that month’s issue, and if they had, we would ask them to go get the issue, and go through it page-by-page with the caller.
“Did you read the article on page 15 regarding the proper way to arrange plastic fruit in a table setting? Did you like this article? How likely are you to use something you learned from this article in your own life? Do you like the photo on page 39?” And so on. Bear in mind, the respondents were already subscribers to the magazine, and they were usually glad for the chance to give some direct input.
600 people is a fairly small sample, but the monthly survey was useful not only for the information on individual issues, but for the ability to observe annual trends in readership and their tastes. (Everybody wants recipes around Christmas.) The point of these surveys was not to help the publishing company gather fodder for self-aggrandizing marketing campaigns, but rather to help them accurately judge what their readers did and did not respond positively to.
So that was one type. The other type of survey we did was for the local newspaper. The paper would hire us to do surveys about local current events, find people’s opinions on the topical issues of the day in that state, etc. I can’t speak for other companies, but we were always careful to design surveys with as little bias as possible; despite the personality differences my boss and I had, I will always admire her total professionalism on this point.
So we’d hire a phone bank, again, to call a bunch of randomly-generated phone numbers around the state; the numbers were screened somehow to exclude business phones, and unlisted cell phones weren’t terribly widespread yet in that state. When the caller got someone on the line, they would ask to speak to the adult in the house who had most recently had a birthday (thus randomizing for households in which Mama always answers the phone or some such).
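As a side note, the “most recent birthday” trick works because birthdays are spread roughly uniformly through the year, so the rule amounts to a random draw among the adults in the household. A toy simulation in Python (the function name and setup are mine, just to illustrate the logic):

```python
import random
from collections import Counter

def select_in_household(adults):
    """'Last birthday' selection: interview the adult whose birthday was
    most recent. Birthdays are ~uniform over the year, so this picks each
    adult with roughly equal probability -- unlike 'whoever answers'."""
    # Over the phone the caller would ask; here we simulate the answer.
    days_since = {name: random.randrange(365) for name in adults}
    return min(days_since, key=days_since.get)

# Over many simulated calls, each adult is chosen about a third of the time.
print(Counter(select_in_household(["Mama", "Papa", "Eldest"])
              for _ in range(10_000)))
```

The point of the rule is that it randomizes without asking anything sensitive like age, and it takes the caller two seconds to administer.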
Once the results came back, we’d prepare a big report for the folks at the paper on the major trends, and provide them with a massive file of all the raw data if they wanted to look up something else. Then every few days over a period of months, the paper would publish an article on the results: “Here’s what state residents think about issue X.”
The really interesting part of all this came during elections. We did one of these polls not long before an election, and included questions about which candidate people planned to support, how they planned to vote on various referenda, etc. And it was astonishing to see how a survey sample of under 2,000 people (I think it was 1,400, but it’s been a while) would perfectly mirror the actual voting results in a citywide or even statewide election.
George Gallup once pointed out that it doesn’t matter how big your soup-pot is; as long as the soup is well-stirred, it only takes one spoonful to know what the whole pot tastes like. In this context, of course, stirring the soup well means getting a representative sample of the population for your survey.
I’m not sure if this clarified any of the OP’s questions, but it may provide some insight into what goes on behind the scenes, at least at a company that’s honestly trying hard NOT to skew its findings in any one direction.