I was especially struck by this old Cecil column (although this is not a “Comment on Cecil’s Column”). In it, Cecil writes:
Anyways, my question is simple (and only generally related to this column): why didn’t doctors (to say nothing of laymen of the past) realize that these methods (bleeding, etc.) didn’t work?
People who got these treatments didn’t get better. They usually got worse. Wasn’t this apparent, even in less scientifically methodical periods?
George Washington, for example, was probably bled to death. He had a cold. Losing more fluids was probably the last thing he needed.
Over the last few years, I have been dismayed at how much medical advice isn’t as evidence-based as we would assume. It’s way better now, obviously, but there is still a ways to go.
Check out ‘Humoral Theory’: it isn’t that they were specifically removing blood (though bloodletting does work for a couple of conditions, one being a superabundance of platelets, the other high blood pressure); they were removing ‘fluids’ from the body, whichever one was in overabundance and causing issues.
And oddly enough, leeches are great for reducing hematomas and helping circulation in areas of the body that are having clotting issues. You can actually buy USP leeches :eek:
Related question - when did the ‘scientific method’ actually start? I think that’s the term; I mean running controlled experiments to observe what happens when you do X to one group and Y (or nothing) to another.
Except that, quite often, they DID. The ailment passed naturally, some incidental part of the treatment helped, they weren’t as sick as everyone thought, the treatment worked, but not why they thought it did, etc etc etc. Without being trained to look for other variables, well, everyone who got better when they were treated was evidence the treatment worked, wasn’t it?
And when it didn’t work? Well, no treatment has a 100% success rate, even nowadays, and, again, without the training in modern scientific methods, confirmation bias reigns supreme - remember the successes, forget or minimize the failures.
Nobody’s mentioned regression to the mean: the tendency of things to return to normal on their own. In other words, people would often have gotten better anyway. If they did something just before they got better - hey! That must be because of what I did! This leads to falsely attributing effectiveness to treatments.
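A quick way to see regression to the mean in action is a simulation. Here’s a minimal sketch in standard-library Python (all the numbers are made up for illustration): each patient’s symptom severity fluctuates randomly around a personal baseline, we “treat” every patient on their worst observed day with a treatment that does absolutely nothing, and then count how many look better the next day.

```python
import random

random.seed(42)

def simulate(n_patients=10_000, n_days=10):
    """Fraction of patients who 'improve' after a do-nothing treatment."""
    improved = 0
    for _ in range(n_patients):
        baseline = random.gauss(5.0, 1.0)  # this patient's typical severity
        # Daily severity fluctuates around the baseline.
        days = [baseline + random.gauss(0.0, 2.0) for _ in range(n_days)]
        worst = max(days)  # the day bad enough that we decide to "treat"
        # The treatment has zero effect: the next day is just another
        # ordinary draw around the same baseline.
        next_day = baseline + random.gauss(0.0, 2.0)
        if next_day < worst:
            improved += 1
    return improved / n_patients

print(f"{simulate():.0%} of patients 'improved' after a sham treatment")
```

Because the worst day is by construction an extreme draw, the following day is almost always milder: here roughly 90% of patients “improve” under a treatment that does nothing at all, which is exactly the trap a pre-statistical physician would fall into.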
It’s really hardly better today. I find most people vastly overestimate the effectiveness and understanding of modern medicine. There is so very much we just don’t know yet.
Most FDA approved drugs work in only a minority of patients who take them. And when I say most, I mean most - 90% is the estimate I hear most often. 90% of drugs only work in less than 50% of the patients who take them. Alzheimer’s drugs, all together, work in only about 10% of people who take them (page 14 of this pdf).
Does this mean we should chuck out the drugs entirely? Of course not. But it does mean that we should be aware of the limits of even modern medicine, and not be afraid to speak up to our nurses and doctors if something doesn’t seem to be working. Chances are quite good that it’s not, in fact, working, and we should try something else.
Because when they do work, they are lifesavers. And that’s a good thing.
It’s the same reason that so many people today still believe in pseudoscience. My brother is convinced that zinc supplements prevent colds, even though there’s no proof that they do anything.
There’s also the fact that the human memory system and brain are built to find patterns even where there are none.
This accounts for gamblers who look for patterns so that luck will finally turn their way, people who don’t notice that their psychics get almost everything wrong when they even bother to make an actual prediction, etc.
I think your effectiveness numbers are skewed by including drugs for all sorts of difficult to treat conditions (including cancer and Alzheimer’s); if one looks at specific medications (for instance insulin to treat diabetes, antibiotics for susceptible organisms, blood pressure and anti-cholesterol drugs etc.) the rate of effectiveness climbs markedly.
As others have noted, belief in the efficacy of bleeding survived as long as it did among practitioners partly because of a mistaken theory of disease and inevitably flawed personal observation and testimonials. People got better oftentimes in spite of harsh treatments like bleeding, and their cases were pointed to as proof of the remedy’s usefulness. We see the same confirmation biases at work today across the spectrum of alternative medicine, but unfortunately sometimes in the perceptions of M.D.s as well.
You also need to bear in mind that modern statistical techniques only date back to about 1900. In 1850, random sampling and control groups were still close to a century away. In other words, physicians of the day didn’t have the tools to make determinations, even if they wanted to.
I think this must be it- if you look at common and proven categories of drugs combined with common ailments, they definitely work- things like antibiotics, antihistamines, blood pressure medicines, diuretics, etc… and a lot more than 50% of the time.
Now if you’re talking about something like Alzheimer’s, Parkinson’s, some kinds of cancer, etc… all bets are off.
I agree the human body is extremely complicated, and people will get better or worse no matter what you do, so it takes a lot of patience (and patients) to determine if a treatment is helping.
You’re off by about a century. Now, the use of statistics in medicine? That’s still sorely lacking (mind you, it’s still sorely lacking in many other fields).