I read the NEJM article:
Hrobjartsson A, Gotzsche PC. “Is the Placebo Powerless? An Analysis of Clinical Trials Comparing Placebo with No Treatment.” The New England Journal of Medicine, 2001; 344(21), pp. 1594-1602.
and the editorial:
Bailar JC. “The Powerful Placebo and the Wizard of Oz.” The New England Journal of Medicine, 2001; 344(21), pp. 1630-1632.
Quite interesting really. As mentioned above, Hrobjartsson and Gotzsche plowed through the literature for clinical trials that included both a placebo group (the sugar pill) and a no-treatment group. They found 114 (trials with both a placebo and a no-treatment group are relatively rare). These covered 40 conditions, ranging from Alzheimer’s disease, pain, and Parkinson’s disease to fecal soiling, seasickness, bedwetting, and nailbiting. 82 of the studies had continuous outcomes (“On a scale of 1 to 10, how much does it hurt?”) and 32 had binary outcomes (“Do you crap your pants?”). These were further divided into subjective versus objective responses (“Does it hurt?” versus “Is your blood pressure below 150 systolic?”).
The results indicate that there was no difference between no treatment and placebo in 3 of the 4 groups:
- subjective binary outcomes (no pain versus pain)
- objective binary outcomes (below 150 mm Hg systolic BP or above 150 mm Hg systolic BP)
- continuous objective outcomes (what is your systolic BP?)
In continuous subjective outcomes, especially those dealing with pain (“What is your pain on a scale from 1 to 10?”), there was a significant difference between placebo and no treatment. This difference seemed to go away, however, as the size of the trial increased.
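If I'm reading the methods right, the continuous outcomes are pooled as standardized mean differences between the placebo and no-treatment arms (the between-arm difference divided by a pooled standard deviation). Just to make that concrete, here is a minimal sketch of the calculation for a single hypothetical trial; the numbers are made up and are not data from the paper:

```python
import math

def standardized_mean_difference(mean_placebo, sd_placebo, n_placebo,
                                 mean_none, sd_none, n_none):
    """Cohen's-d-style effect size: (placebo - no treatment) / pooled SD."""
    pooled_var = ((n_placebo - 1) * sd_placebo ** 2 + (n_none - 1) * sd_none ** 2) \
                 / (n_placebo + n_none - 2)
    return (mean_placebo - mean_none) / math.sqrt(pooled_var)

# Hypothetical pain scores on a 0-10 scale (invented numbers):
# the placebo arm reports slightly less pain than the no-treatment arm.
d = standardized_mean_difference(mean_placebo=4.4, sd_placebo=2.0, n_placebo=30,
                                 mean_none=5.0, sd_none=2.0, n_none=30)
print(f"standardized mean difference: {d:.2f}")  # negative = placebo arm did better
```

A small negative number like this, pooled over many trials, is the kind of effect reported for the continuous subjective outcomes, and as noted above it seemed to shrink as trial size grew.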
Some main points:
- Placebo is hard to define. They have a long, complicated definition which basically describes all of the placebos in all of the trials they looked at. These encompassed sugar pills, sham surgeries, and an “attention placebo” for psychotherapy, in which the patient and the doctor talked aimlessly for half an hour.
- The usual estimate of the placebo effect (the improvement observed in the placebo group, against which the experimental group is compared) does not take into account the natural course of a disease. For instance, if 30% of people get better in a week without any treatment, the placebo effect at one week can look like 30%, even though that is just people getting better on their own. This is why inclusion of a no-treatment group is so crucial (a small worked example follows this list).
- Poor experimental design can introduce bias, which may be why larger trials seem to lose some of the placebo effect. The problems range from inappropriate placebos to inadequate sample sizes.
- It is also possible that the smaller trials addressed conditions in which the placebo effect is more pronounced. This can happen because large-scale pain-control trials are difficult to perform (conditions are hard to standardize, for instance).
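To put some numbers behind the second point above, here is a tiny worked example (invented figures, in the spirit of the 30% illustration) of why the improvement in a placebo arm alone overstates the placebo effect:

```python
# Invented numbers, in the spirit of the 30% example above:
# fraction of patients who are better at one week in each arm of a hypothetical trial.
improved_no_treatment = 0.30   # natural course of the disease
improved_placebo      = 0.35   # natural course plus any genuine placebo effect
improved_active_drug  = 0.60   # natural course + placebo effect + drug effect

# Without a no-treatment arm, the whole placebo-arm improvement tends to get
# labeled "the placebo effect", even though most of it is natural recovery.
naive_placebo_effect = improved_placebo                          # 35%
true_placebo_effect  = improved_placebo - improved_no_treatment  # 5%
drug_effect          = improved_active_drug - improved_placebo   # 25%

print(f"apparent placebo effect (no no-treatment arm): {naive_placebo_effect:.0%}")
print(f"placebo effect after subtracting natural course: {true_placebo_effect:.0%}")
print(f"drug effect over placebo: {drug_effect:.0%}")
```

Only the 5% here is attributable to the placebo itself; the other 30% is the disease running its course.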
They discuss the data quite extensively. Basically, it works out to the placebo effect on some subjective outcomes, such as pain, being pretty big (roughly one-third of the effect of NSAIDs on pain, for example), but the effect may vanish with good experimental methodology. Placebo groups may be of limited use except where the experimental methodology is tightly controlled and properly designed, and the placebo effect noted in previous studies may simply be the natural course of the disease.
Of note, a widely publicized recent study on lower back pain documented no difference between physical therapy, lumbar laminectomy (surgery), and reading a brochure about lessening lower back strain.