You very rarely (if ever) hear of a study done by some advocacy group which does not support their cause. This applies whether they themselves did the study or whether they outsourced it. How does this come about? Several suggestions:
[ol]
[li]They structure the studies to focus on areas or aspects of the issue that they know support their case, and deliberately avoid studying areas which might undermine it. (E.g. an anti-immigration group might focus on immigrant crime rates and ignore anti-immigrant hate crime rates, and a pro-immigration group would do the opposite.)[/li]
[li]They simply refrain from publicizing those studies which don’t end up supporting their cause.[/li]
[li]They spin heavily when they release the results, regardless of what the data shows, and few people dig through it.[/li]
[li]When outsourcing, they use groups which have known positions/biases on the issues, and thus ensure the results they’re looking for (presumably by having the study group use some of these same techniques).[/li]
[/ol]
Or all of the above?
Another factor is that the media in the US is really bad at statistics, and needs a catchy headline more than a realistic set of facts. So my WAG is that it is mostly 2 and 3.
Yes, but that’s not the most common way to do it. So many studies are done in so many different areas that it’s not hard to find someone else’s work to use. Of course, when you get Wakefield-type fiction, it will get cited many times, even long after it’s been totally discredited.
Other popular strategies are to carefully choose where the studies are conducted and which people will be included, as well as how questions in polls are phrased.
The Heartland Institute. If you’re a tobacco company that wants a “scientific” study that says there’s no link between smoking and cancer, these are your guys.
They can gather data, report only certain aspects of it, and refrain from asking certain questions; which questions get asked shapes both how the data is gathered and how the same data is interpreted. FWIW and by way of example, I perceived a little of this in the widely reproduced citation that immigrants don’t cause crime. I think there’s a more complex case to be made in their favor, but it involves some weighing of evidence, and belongs in an academic paper, not an advocacy study.
Then there are the hack pieces. Frame your study narrowly, interpret it broadly and ignore obvious objections.
If you stick to only a few examples of pseudoscientific/badly-conducted research and flog the hell out of it (while condemning the vast body of scientific literature that contradicts it as fraudulent/corrupt*), you can give the impression of scientific validity (antivaccination and anti-GMO advocates are fond of this approach).
The proliferation of open-access and poor-quality specialty journals has helped energize this phenomenon.
*These and other groups simultaneously tell us that science is heavily beholden to special interests and cannot be trusted, while holding out a handful of studies and saying “Lookit! I got science on my side!”, which is somewhat bewildering.
Even more entertaining is their practice of citing a journal study which argues that much of the scientific research published in journals is unreliable, without perceiving any irony.
Sometimes what the groups are advocating for is a no-brainer, and current policy just prevents it from happening.
Marijuana advocacy groups - I ain’t saying the mary jane cures cancer, but it’s obviously a safe and effective anti-anxiety and pain medication.
An advocacy group for more education in STEM.
An advocacy group for less gun violence. (All the credible studies seem to show that if you make guns less available, fewer people get shot. Weird how that works.)
This is it, mostly. Keep sifting through the research, or fund your own studies until you get the results that you want. It is very bad science, but it is an unfortunate result of the way funding and grants work: you have to know most of the answer to the question you are asking before you get the money to start.
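To make that concrete, here’s a minimal sketch (in Python, with made-up parameters, assuming numpy and scipy are available) of why “fund studies until you get the result you want” works so reliably: even when there is no real effect, each study has roughly a 5% chance of coming out “significant”, so running a batch and publicizing only the hits all but guarantees a supportive headline.
[code]
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def run_study(n=100):
    """One study of an effect that truly does not exist:
    both groups are drawn from the same distribution."""
    control = rng.normal(0.0, 1.0, n)
    treatment = rng.normal(0.0, 1.0, n)  # no real effect
    _, p_value = stats.ttest_ind(control, treatment)
    return p_value

# Fund 20 studies, then publicize only the "significant" ones.
p_values = [run_study() for _ in range(20)]
hits = [p for p in p_values if p < 0.05]
print(f"{len(hits)} of {len(p_values)} null studies came out 'significant'")
# The chance of at least one false positive in 20 tries is
# 1 - 0.95**20, roughly 64% -- bury the rest and you have your study.
[/code]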
And also what Shodan says: controversy sells news. News is a very money-driven business; actual unbiased reporting disappeared long ago, and it is now about page views and hits. How many eyeballs can we grab?
“Study finds that dogs like humans, and people like dogs!” Little air play.
“Study finds that dogs think humans are assholes! And treat them like slaves.” Reported on every news outlet until something else comes along.
In my experience in the mystical world of public relations, it’s almost always 1 and 2.
It’s easy to understand simply burying negative results. The trick is to come up with data that’s narrowly accurate (if you make the numbers themselves lie, someone will figure it out) and then use it to lead to a broad conclusion.
Think of Senator James Inhofe showing off a snowball to prove global warming didn’t exist. A snowball can exist even while there is global warming. The trick is to come up with a study that proves A and convince people that A=B.
Conduct a study or poll that uses one word or phrase, then when reporting the results, shift to a more attention-grabbing headline. For example, conduct a poll asking women at college whether they’ve ever experienced “unwanted sexual activity”. In the poll, be clear that this includes things such as unwanted kissing, and that it includes attempted sex crimes as well as those that were actually committed. Get almost one-fifth of respondents to answer yes, then go public with headline-grabbing claim that one fifth of college women have been raped or sexually assaulted.
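The arithmetic of that word-shift is simple enough to show in a few lines (entirely hypothetical numbers, just for illustration): the broad question’s categories get summed, and the total is then reported under the narrowest category’s label.
[code]
# Hypothetical breakdown of "yes" answers to a broadly-worded poll question
# about "unwanted sexual activity" (1,000 respondents, invented numbers).
responses = {
    "unwanted kissing/touching": 110,
    "attempted sex crime": 50,
    "completed sex crime": 30,
}
total_polled = 1_000

broad_yes = sum(responses.values())  # 190
print(f"{broad_yes / total_polled:.0%} answered yes")  # 19%: "almost one-fifth"

# The headline then relabels that 19% under the narrowest categories,
# though only a fraction of the answers fit that description:
narrow = responses["attempted sex crime"] + responses["completed sex crime"]
print(f"{narrow / total_polled:.0%} fit the headline's wording")  # 8%
[/code]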
Deliberately jump from data to a conclusion that the data does not support. For example, take data showing that women earn less than men on average, and jump to the conclusion that women are discriminated against. Omit information that would undermine your conclusion, such as that women generally choose lower-paying professions than men.
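A minimal simulation of that jump, with entirely made-up salaries and occupational mixes: construct a world where pay depends only on profession, with no within-profession gap at all, and the raw averages still show a sizable gap purely because the two groups are distributed across professions differently.
[code]
import numpy as np

rng = np.random.default_rng(1)

# Invented salaries set by profession alone -- no within-profession gap.
SALARY = {"engineering": 90_000, "teaching": 50_000}

def draw_group(p_engineering, n=10_000):
    """Draw n workers whose pay depends only on their profession."""
    in_eng = rng.random(n) < p_engineering
    return np.where(in_eng,
                    rng.normal(SALARY["engineering"], 5_000, n),
                    rng.normal(SALARY["teaching"], 5_000, n))

men = draw_group(p_engineering=0.6)    # assumed occupational mix
women = draw_group(p_engineering=0.3)  # assumed occupational mix

gap = (men.mean() - women.mean()) / men.mean()
print(f"Raw average gap: {gap:.1%}")  # ~16%, despite identical pay rules
[/code]
Controlling for profession here makes the gap vanish entirely, which is exactly the kind of information the hack piece omits: the raw average alone cannot distinguish the two explanations.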