The Placebo Effect: Urban Legend?

There was an interesting piece in the New York Times this weekend which suggests that the placebo effect may not exist. (The article, entitled “Putting Your Faith in Science,” can be found on-line, but you have to have a free Times account.) Apparently, a couple of Danish researchers published a recent piece in the New England Journal of Medicine in which they relegate placebos to urban legend status, despite conventional medical wisdom that about a third of patients get better when given placebo pills.

I don’t know the precise methodology of the Danes’ meta-analysis, but does anyone wanna venture an opinion about the validity of their findings? Not surprisingly, the conclusions have been met with some resistance. It strikes me as amusing that the medical community, in this case, could potentially be alternately characterized as being overly skeptical (of research which contradicts firmly-held beliefs) and not skeptical enough (of an effect for which there may be little concrete scientific evidence). Thoughts? I’d especially like to hear from the doctors on this one.

Just out of curiosity, any idea on whether the distinction is being made between self-reporting and something more testable?

No idea at all, Myrr. :slight_smile: Anyone got a copy of the relevant issue of NEJM?

I probably do, Gadarene. I’ll read it sometime tonight or tomorrow.

I think that the placebo effect depends much upon the patient’s belief that they are, not may be, getting the real drug.
Was this idea taken into account by the researchers?
Peace,
mangeorge

I read the NEJM article:

Hrobjartsson, A and Gotzsche, PC. “Is the Placebo Powerless? An Analysis of Clinical Trials Comparing Placebo with No Treatment.” The New England Journal of Medicine. 344(21), pp. 1594-1602.

and the editorial:

Bailar, JC. “The Powerful Placebo and the Wizard of Oz.” The New England Journal of Medicine. 344(21), pp. 1630-1632.

Quite interesting really. As mentioned above, Hrobjartsson and Gotzsche plowed the literature for clinical trials involving both a placebo group (the sugar pill) and a no treatment group. They found 114 (trials with both a placebo and a no treatment group are relatively rare). They covered 40 conditions, ranging from Alzheimer’s disease, pain, and Parkinson’s disease to fecal soiling, seasickness, bedwetting, and nail-biting. 82 of the studies had continuous outcomes (“On a scale of 1 to 10, how much does it hurt?”) and 32 had binary outcomes (“Do you crap your pants?”). These were further divided into subjective versus objective responses (“Does it hurt?” versus “Is your blood pressure below 150 systolic?”).

The results indicate that there was no difference between no treatment and placebo in 3 of the 4 groups:
-subjective binary outcomes (no pain versus pain)
-objective binary outcomes (below 150 mm Hg systolic BP or above 150 mm Hg systolic BP)
-continuous objective outcomes (what is your systolic BP?)

In continuous subjective outcomes, especially those dealing with pain (“What is your pain on a scale from 1 to 10?”), there was a significant difference. This seemed to go away, however, as the size of the trial increased.
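
To make the continuous comparisons concrete: for outcomes like pain scores, what gets compared is essentially a standardized mean difference between the placebo arm and the no treatment arm, pooled across trials. Below is a minimal Python sketch of that kind of pooling, with entirely made-up trial numbers and simple fixed-effect inverse-variance weighting. It is meant only to illustrate the arithmetic, not to reproduce Hrobjartsson and Gotzsche’s actual method.

```python
# Minimal sketch (not the authors' actual analysis): pooling a standardized
# mean difference (SMD) between placebo and no-treatment arms across
# hypothetical trials, using fixed-effect inverse-variance weighting.
# All trial numbers below are invented for illustration.
import math

# (n_placebo, mean_placebo, sd_placebo, n_none, mean_none, sd_none)
# Outcome: self-rated pain on a 0-10 scale (lower is better).
trials = [
    (20, 4.1, 2.0, 20, 4.8, 2.1),
    (50, 5.0, 1.8, 50, 5.2, 1.9),
    (150, 4.6, 2.2, 150, 4.7, 2.1),
]

def smd_and_variance(n1, m1, s1, n2, m2, s2):
    """Cohen's d for (placebo minus no treatment) and its approximate variance."""
    pooled_sd = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / pooled_sd
    var = (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))
    return d, var

weights, weighted_ds = [], []
for t in trials:
    d, var = smd_and_variance(*t)
    w = 1.0 / var
    weights.append(w)
    weighted_ds.append(w * d)

pooled = sum(weighted_ds) / sum(weights)
se = math.sqrt(1.0 / sum(weights))
print(f"pooled SMD (placebo vs. no treatment): {pooled:.2f} +/- {1.96 * se:.2f}")
```

Because larger trials carry more weight in this scheme, the pooled estimate sits closest to the big-trial results; if the small trials are the ones inflating the apparent placebo effect, looking at the effect by trial size will shrink it, which is consistent with the pattern described above.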

Some main points:

  1. Placebo is hard to define. They have a long, complicated definition which basically describes all of the placebos in all of the trials they looked at. This encompassed sugar pills, sham surgeries, and “attention placebo” for psychotherapy, in which the patient and the doctor talked aimlessly for half an hour.

  2. The usual way of inferring a placebo effect from trials (looking at how much the placebo group improves) does not take into account the natural course of a disease. For instance, if 30% of people get better in a week without any treatment, then the placebo effect at one week will sometimes look like 30%, even though it is just people naturally getting better. This is why inclusion of a no treatment group is so crucial (a small numeric sketch follows this list).

  3. Poor experimental design can introduce bias, which may be why larger trials seem to get rid of some of the placebo effect seen in smaller ones. The problems range from inappropriate placebos to inadequate sample size.

  4. It is also possible that smaller trials addressed conditions in which the placebo effect was more pronounced. This may be partly because large-scale pain-control trials are difficult to perform (conditions are hard to standardize, for instance).
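
A quick way to see point 2 is to simulate it. The sketch below (Python, with invented numbers) assumes that 30% of patients recover on their own within a week and that the sugar pill itself does nothing; the placebo arm still shows roughly 30% “improvement,” and only the comparison against a no treatment arm reveals that the pill added nothing.

```python
# Sketch of point 2 above (all rates invented): if 30% of patients recover
# on their own within a week, a placebo arm will show ~30% "improvement"
# even if the placebo does nothing. Only a no-treatment arm reveals that.
import random

random.seed(0)
N = 10_000
natural_recovery = 0.30    # assumed spontaneous recovery rate at one week
true_placebo_boost = 0.0   # assume the sugar pill itself adds nothing

def simulate_arm(extra_effect):
    """Return the fraction of patients improved after one week."""
    improved = sum(
        1 for _ in range(N)
        if random.random() < natural_recovery + extra_effect
    )
    return improved / N

no_treatment = simulate_arm(0.0)
placebo = simulate_arm(true_placebo_boost)

print(f"no-treatment arm improved:  {no_treatment:.1%}")
print(f"placebo arm improved:       {placebo:.1%}")
print(f"naive 'placebo effect' (placebo arm alone):    {placebo:.1%}")
print(f"placebo effect vs. no treatment:               {placebo - no_treatment:+.1%}")
```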

They discuss the data quite extensively. Basically, what it works out to is that the placebo effect on some subjective outcomes like pain turns out to be pretty big – about 1/3 of the NSAID effect on pain, for example. This effect may vanish with good experimental methodology. Placebo groups may be of limited use except where experimental methodology is tightly controlled and properly designed. The placebo effect noted in previous studies may be due to the natural course of the disease.

Of note, a widely publicized recent study on lower back pain documented no difference between physical therapy, lumbar laminectomy (surgery), and reading a brochure about lessening lower back strain.

The purpose of treating patients with placebos is to eliminate bias in clinical trials. The same principle is used in scientific experiments: Only Change One Variable at a Time. Patients should be treated identically, except that the control group does not get the actual treatment!

If the placebo has no effect, it does not matter. However, in some cases the placebo could have an effect. The idea is that whatever effect it may have is subtracted out when judging the efficacy of the actual treatment.

So, if patients got 50% improvement from a certain treatment, but control patients given the placebo also got 50% improvement, then the treatment can be considered ineffective. Alternatively, one could argue for a 50% ‘placebo effect’, but that is not the aim. The placebo is given merely to prevent that 50% improvement from being attributed to a useless treatment. If patients receiving nothing also improved by 50%, that’s OK too (and should be expected!).
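
Here is that 50/50 arithmetic written out as a tiny Python sketch (the improvement rates are invented, as in the paragraph above):

```python
# Toy version of the argument above, with invented improvement rates.
# The placebo arm's job is to stop us from crediting the treatment with
# improvement that would have happened anyway.
treatment_improved = 0.50     # 50% improve on the treatment
placebo_improved = 0.50       # 50% improve on the sugar pill
no_treatment_improved = 0.50  # 50% improve with nothing at all

# Drug-specific effect: improvement over and above the placebo arm.
drug_effect = treatment_improved - placebo_improved

# "Placebo effect": improvement over and above doing nothing at all.
placebo_effect = placebo_improved - no_treatment_improved

print(f"drug-specific effect: {drug_effect:+.0%}")    # +0% -> ineffective drug
print(f"placebo effect:       {placebo_effect:+.0%}")  # +0% -> no placebo effect either
```

With those numbers the treatment adds nothing over the placebo, and the placebo adds nothing over receiving no treatment; all of the 50% improvement is just the natural course of the illness.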

Belief in the ‘placebo effect’ would be dangerous if doctors wished to exploit it in place of normal medicine. But placebos should not be used for this purpose. If a placebo is shown to be actually effective, then the mechanism should be isolated and developed into a proper treatment.

There are probably many examples where the ‘placebo effect’ is caused by natural variations in patient response, and is identical to no treatment at all. But there are definitely some cases where the placebo does have an effect (e.g. in trials of certain psychiatric treatments the placebo works better than the drug being tested!). Because of this it is important to maintain the use of placebos in trials. It might also be useful to find out why the placebo seems to work.

‘Placebo effect’ acknowledges the fact that the mind and body are not separate, and that a patient’s belief in a cure can affect his apparent wellbeing. It may be that the patient only feels better (but really isn’t), or he may have changed his behaviour to cause an actual improvement. Of course, any claim that a patient given a placebo gets better by mind power alone, or that a neutral pill has magical curative powers, should be approached with skepticism.

Apart from the tendency of major news media to take a single journal study and regard it as the Revealed Word (and the NEJM has the status of an oracle in some circles), meta-analysis itself has an uneasy and controversial reputation.

One prominent and hotly debated meta-analysis looked at the effect of antidepressants. You may be amused to find that in this instance, meta-analysis argued for a huge effect for placebos.

In any event, placebo effect has been documented so repeatedly that it is not surprising that there would be skepticism in the medical profession about a single study of this type that apparently claims to refute the whole concept of placebos. I will be interested to see exactly what the authors are claiming, and how they account for the marked and occasionally fatal side effects caused by placebos.

I’m all for any methodology that improves testing of drugs and therapies. But using placebos in double-blind studies has been so useful that dumping them based on this slim evidence would be foolhardy.

Harry and Jackmannii,

I heard an excellent report on this on NPR. And the point the authors of this study make is not that placebos are useless for (and thus shouldn’t be used in) double-blind studies!

Their concern is that doctors have come to believe in the “placebo effect”…i.e., that a patient is more likely to improve if they believe they are being given something that is (or may be) an effective medicine. And that doctors are in fact using this. An example they gave is a doctor prescribing an antibiotic to a patient who has some kind of cold or flu, since for the purposes of treating a viral illness an antibiotic is indeed simply a placebo. (Although admittedly, viral illnesses can lead to bacterial infections like bronchitis as a complication, so this gets subtle.) Of course, doctors may argue that they don’t give the antibiotics for the placebo effect but rather for some other reason like:
(1) to get the damn patient off their back.
(2) to prevent any subsidiary bacterial infection.
(3) just in case their assessment is wrong and the illness is bacterial rather than viral.

Personally, I find it hard to believe that there are not some problems, particularly ones like pain involving a mental component as well as a physical one, that can be helped by the placebo effect.

On the other hand, the point they make is a very good one: that just saying “well, look, 30% improved with the placebo, therefore there is a placebo effect” is insufficient without some attempt to compare this to the number who would have improved simply because the disease (or whatever) would have naturally run its course. Reminds me of the joke: “Well, taking our cold remedy will get rid of your cold within 7 days, guaranteed, whereas if left untreated your cold could drag on for up to a week!”

(the triple posting, that is.) The SDMB server is screwed up! I’ll gladly throw in my $20 to the “get SDMB a better server fund”!

jshore, I’ll write you a prescription for some triple ointment.* Should fix that triple posting problem in no time. Anyway, it can’t hurt, right? :slight_smile:

I was able to pull up the abstract of the article on the NEJM site and the authors do specifically exclude clinical trials from their recommendations against the use of placebos. The remaining conclusions and the means by which they were reached will make interesting debate fodder for some time to come (remember the recent New England Journal study “refuting” the usefulness of dietary fiber in preventing colon cancer? I don’t think that one study has been accepted as final either, and its methodology was considerably more mainstream).

*aside: this is a pretty interesting site.

edwino, thanks for the superbly informative post. jshore, thanks for saying everything I was gonna say. :wink: I know the difference between the use of placebos in double-blind trials and an actual purported effect that the administration of placebos has on some patients–I’ve been plowing through discovery documents in a class action drug litigation lately, and have become intimately familiar with the tricks and traps of the standard pharmaceutical trial. The Danish study is indeed, so far as I can tell, making no judgment regarding the use of placebos in trials. If anything, the non-existence of a placebo effect would seem to legitimate double-blind placebo trials to an even greater degree.

Jack, in what way has placebo effect been “documented so repeatedly?” You’ve got me curious…

As an aside, somewhat related to the OP, IIRC there is no clinical documentation anywhere that “triple ointment” (Neosporin or the like, containing bacitracin, polymyxin B, and neomycin) speeds recovery or prevents surface infection. It’s all placebo.

::scurry, scurry::

In addition to the links listed above that talk about efficacy/risks associated with placebo therapy, here’s another popular press article on the subject.

Among recent reviews:

  1. Thompson WG. “Placebos: A Review of the Placebo Response.” American Journal of Gastroenterology, July 2000, p. 1637.
  2. Brody H. “The Placebo Response.” Journal of Family Practice, July 2000, p. 649.
  3. Weihrauch TR, et al. “Placebo: Efficacy and Adverse Effects in Clinical Trials.” Arzneimittel, May 1999, p. 385.

Abstracts of these articles (and lots more) can be found at the NIH’s PubMed search site.

Edwino, one surprisingly effective therapy for chronic pain involves smearing your body with lard and lying on a fire ant mound. Guaranteed to make you stop thinking about your bad back or aching knee. :wink:

Thanks, Edwino and other posters, for your lucid discussion. My wife, Prof. December, is a biostatistician, as is John Bailar, who wrote an editorial in the same issue of the New England Journal. (He’s also an MD.) According to Prof. D, Bailar is top-notch.

The editorial is generally supportive of the article, but is more moderate. Bailar makes the case that the use of placebos by physicians is …“not entirely innocuous. They may divert patients from seeking more effective treatments, they may mask symptoms that need attention, they add to the cost of treatment, and they may have unexpected side effects. There also may be some reason for concern that reminders of illness (in the form of placebos) may make a person less rather than more comfortable.”

Bailar’s last paragraph begins, “Overall the uncompromising condemnation of placebos advocated by H and G seems to me just a bit too sweeping. In particular, the evidence that placebos might contribute to pain relief may merit their continued therapeutic use when there is reason to think that a patient may benefit…However, I believe there should be a sharp reduction in the prescription of placebos and careful justification for each continued use… At present, I would not want to prescribe or receive a placebo without some reason that was far more specific than weak evidence of some general ‘placebo effect.’”