The above link is to a Washington Post story covering an article published in the New England Journal of Medicine in 2001.
I don’t know if anyone is still reading the thread, but I did read the article and thought I’d post some thoughts about it.
First of all, there were about a dozen letters to the editor published in response to the article… that’s a lot. Many of the letters pointed out methodological issues, most of which I won’t summarize here. There was at least one important issue that, as far as I noticed in a somewhat hurried read-through, none of the letters commented on.
There are also loads of articles that reference the piece, which is a lot for only 2 years or so.
To summarize: the article is a meta-analysis, meaning it summarizes the results of many studies in an attempt to synthesize all knowledge on a topic. In this case, the authors pooled all the studies they could find that had both a placebo arm and a ‘no-treatment’ arm (and met other reasonable criteria). About 10,000 people from 114 studies published between roughly 1948 and 1998 were included. One advantage of meta-analysis is that it allows the pooling of information and subjects across many small and large studies. A disadvantage is that to the extent that the studies are measuring different things, collapsing them together doesn’t make a lot of sense.
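To make the pooling idea concrete, here is a minimal sketch of the simplest version, fixed-effect inverse-variance weighting. The effect sizes and standard errors are made up for illustration; this is not the method the NEJM authors used, just the basic mechanics of combining studies:

```python
# Minimal sketch of fixed-effect meta-analysis via inverse-variance
# weighting. The effect sizes and standard errors below are invented
# for illustration only.

def pool_fixed_effect(effects, std_errors):
    """Combine per-study effect estimates into one pooled estimate.

    Each study is weighted by 1/SE^2, so precise (usually larger)
    studies count more than imprecise ones.
    """
    weights = [1.0 / se ** 2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = (1.0 / sum(weights)) ** 0.5
    return pooled, pooled_se

# Three hypothetical studies: effect estimates and their standard errors.
effects = [0.30, 0.10, 0.25]
std_errors = [0.10, 0.05, 0.20]

pooled, se = pool_fixed_effect(effects, std_errors)
print(f"pooled effect = {pooled:.3f}, SE = {se:.3f}")
```

Note that the weighting only makes sense if the studies are estimating the same underlying quantity, which is exactly the assumption I’m questioning below.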
I think that is the major problem here. The researchers summarize over trials of smoking cessation, hypertension (two conditions about as different as they come), and 38 other medical conditions. The unspoken hypothesis of the study, therefore, is that placebos have the same effect regardless of the medical condition being treated. I don’t think this makes sense, and several letters said so. The point not specifically addressed in the letters is that the active treatments in the included studies may not have worked either. I doubt a proponent of placebos would argue that they should generally work even when nothing else does; the claim is rather that they may sometimes work almost as well as real treatment. Other issues raised in the letters include the fact that many of the no-treatment arms still involved plenty of patient-doctor contact, which may itself be the mechanism of the placebo effect, so the comparison is tainted.
In addition to the methodological issues skimmed over above, the authors, IMO, biased the way their results were perceived by making dichotomous endpoints the primary point of discussion in the paper. (Dichotomous in this case means yes/no.) They thus focused their attention on whether smoking cessation succeeded in more placebo patients than untreated patients, rather than on the number of cigarettes smoked, which is the continuous version of that outcome. (If you’re new to this stuff, continuous means, loosely, measured or counted. Hypertension gives another example. Dichotomous: do you have high blood pressure? Continuous: what is your systolic blood pressure? The smoking and hypertension examples also show that you can change people for the better (lower BP, fewer cigarettes) without making the dichotomous measure flip.)
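A toy example makes the point about information lost to dichotomization. The numbers are invented; the 140 mmHg systolic cutoff is the conventional threshold for hypertension:

```python
# Toy illustration of dichotomous vs. continuous outcomes.
# The patient numbers are invented; 140 mmHg systolic is the
# conventional cutoff for hypertension.

HYPERTENSION_CUTOFF = 140  # systolic mmHg

def is_hypertensive(systolic):
    return systolic >= HYPERTENSION_CUTOFF

before, after = 165, 145  # one hypothetical patient's systolic BP

continuous_change = before - after            # 20 mmHg improvement
dichotomous_change = is_hypertensive(before) != is_hypertensive(after)

print(continuous_change)    # 20 -- a real, clinically meaningful drop
print(dichotomous_change)   # False -- the yes/no measure never flips
```

The patient improved substantially on the continuous measure, yet an analysis of the dichotomous endpoint would record no change at all.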
I can’t see why they did this except that it supported their preconceived notion that the placebo effect doesn’t exist. (Or, even more cynically, that they would get more attention for shooting it down than for supporting it.) For example, more patients were enrolled in the studies for which they could not extract continuous results than in the studies with continuous outcomes. Each of the continuous results (except one) showed a clear advantage for the placebo, while the dichotomous ones generally did not. So their results actually show that the placebo works!
As a final point, the authors in many places discuss the fact that the power of some tests that failed to show what they wanted was small. (Statistical power is, loosely, the probability of rejecting the null hypothesis, given that a particular alternative is true.) However, they fail to address the fact that the ‘main’ result on the dichotomous outcomes was in fact close to statistical significance, suggesting that the power for their main outcome was also lacking, and that with more subjects the placebo effect might have been demonstrated even in the dichotomous case!
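For the curious, the power argument can be sketched numerically. Below is a rough normal-approximation power calculation for a two-sided z-test, using only the standard library; the effect size and sample sizes are invented, not taken from the NEJM paper:

```python
# Rough sketch of statistical power for a two-sided one-sample z-test:
# power = P(reject H0 | a given alternative is true).
# The effect size and sample sizes below are invented for illustration.
import math

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def z_test_power(effect, sigma, n):
    """Approximate power of a two-sided z-test at alpha = 0.05."""
    z_alpha = 1.959964  # two-sided critical value for alpha = 0.05
    shift = effect / (sigma / math.sqrt(n))
    # Probability the test statistic lands outside +/- z_alpha
    # when the true mean is shifted by `effect`.
    return (1 - normal_cdf(z_alpha - shift)) + normal_cdf(-z_alpha - shift)

# Power grows with n: a "near-significant" result at a given sample
# size hints that a larger study might well reach significance.
for n in (50, 200, 800):
    print(n, round(z_test_power(effect=0.2, sigma=1.0, n=n), 3))
```

The same logic cuts both ways, which is my complaint: if low power excuses the comparisons that went against the authors, it equally excuses the near-significant dichotomous result they leaned on.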
The Washington Post actually did a decent job: they reported the results of the study as the NEJM allowed them to be published. On the other hand, the NEJM got totally hoodwinked (or, more cynically, got desired coverage even though the science was mediocre), and the peer-review process once again showed its frailty.
The placebo effect is alive and well, and the authors of the article go on the list of ‘scientists’ whose credibility is highly suspect.