You missed the part about the candle reversing the pull of gravity inside the ear.
This really is a straw man, because AFAIK no such mechanism was proposed to explain the action of Salix bark. Since it was an oral medicine, there was a strong supposition that a chemical component was at work, which was easy enough to research, leading to the discovery of salicylic acid and its derivatives. You’ll find time after time that this has occurred with botanical remedies. I think you don’t give medicine (as an art, not a science) the credit it deserves, because I’ve known many doctors who would without hesitation prescribe remedies that actually work, even if the exact mechanism isn’t understood. The criterion, however, is that they must actually work, must make a clinically observable difference. Ear candling just doesn’t fit into that category, or at least not firmly enough for someone to justify spending their time working out why.
You seem earnest enough here, but my patience is beginning to wear thin with your insistence that **something** simply must be happening. There are a number of firsthand anecdotes of people who say something happened to them. There are no objective observations or tests that these things actually happened to the people… no hearing tests, no clinically apparent observations. All we really know is that a few people vocally claim something changed, something that is either not measurable, or that they didn’t choose to measure. It just “works”. This screams “placebo”. Placebo effect can be classified as something happening, it’s a real phenomenon, and there’s no evidence to suggest any other force is at work.
Good one
YES! And none that they didn’t. Why can’t you just leave it at that?
[Following this post: a bunch more justification why cynicism is what all the cool scientists do.]
Alex_Dubinsky, what kind of evidence would be sufficient for you to come to the conclusion that ear candling, as described by its proponents, is BS?
btw, Musicat. How convenient that when you replied to me saying “As if you’re not basing your thinking on probabilities” with “of course I am basing my thinking on probabilities,” that you didn’t quote or address my following sentence, “As if your cynicism doesn’t affect, even somehow, them.” Musicat, it seems you realize you are human. Just long enough to remove the thought from your head.
Good question.
Full confidence would require not just a sizeable sample but an investigation across a spectrum of variables, such as frequency of application, materials used, techniques of application, and medical history. Following this investigation you’d need to analyze the data for patterns and follow up on interesting correlations. (A wide sample across many factors is not enough. A number of people who are helped may very well be offset by a number of people who are hurt, and hence yield a “does nothing” result… a result that’s not the truth.) That, indeed, is what’s required for full confidence.
To get a cheaper but less confident answer I would do all of those things but use a smaller sample size. Trying to reduce the variability in the variables (as per the official scientific method) I would NOT recommend, but I don’t want to get into a debate on that now (suffice it to say, the controls-centric method is great for technical sciences governed by precise math, horrible for medicine and biology). The randomness in the resultant data (measured simply by standard deviation or whatnot) will tend to point to whether the sample was sufficient or what conclusions can be drawn. Analysis for patterns can’t be omitted but can be likewise scaled down.
If you redefine the question as “does something help the majority of people in a significant way,” it will be a lot easier to answer than “does something help at least some people in at least some way” and will require less data, but I would use the same techniques. (Ie, to have as few controls as possible.)
Either way, if with the right mathematical analysis we can simply say, eg, “75% chance the effect doesn’t apply to more than 15% of the population in a way that’s significant with 2nd-order probabilities (ie, how certain we are it IS 75%) also at around 70%,” then we can actually get somewhere. What numbers would be sufficient for me to “make a conclusion”? I think the ones I just stated would be more than enough to say “no” to the question “do ear candles work well,” but not enough if you want to get anal about it. I think this confidence can be achieved with a single study of less than 100 people, given the right methods.
Most of you are probably balking at this point, saying how well we all get by with giving about 20 people the same dose and calling it science well done. Well, we don’t. The countless studies that contradict themselves are perfect proof of that (studies that investigate far bigger questions than ear candles). But if, instead of sitting around taking the limits of science for granted, you all actually acknowledged its insufficiency and helped to find new techniques, even if they made you more painfully aware of what you’re missing with your limited resources, then we’d get somewhere, and know what we do know and what we don’t.
If I had to draw a conclusion with the shitty studies we have now… well, it’d be much harder for me to draw a conclusion. Perhaps I’d like five studies of ~50 people each, but it really depends on how the variables (dosage, application technique, etc.) varied between them.
btw, you asked me about a conclusion. The requirements to just swing my assessment about probabilities would be much less. But to get to the certainty that you guys are at…
See, that little clause right there covers your ass (and everybody else’s as well). I am more than happy to say that ear candling, as described by its proponents, is a crock of shite (as are many of its proponents). However, I can’t say that sentence with 100% honesty about my feelings, my opinions, and my experiences without including the requisite disclaimer.
I know I’m beating a dead horse here and it’s starting to draw flies, so I will leave it be. But cynics be forewarned (fivewarned, even): I didn’t come to this rodeo to dance with my grandmother, so don’t expect me to do the two-step.
Oh, and Cosmic Relief, I understand what you said about your patience wearing thin. Sometimes I can really be the Jenny Craig of patience, especially when frustrated.
And finally, I’m not saying that’s what must be done. Ie, I’m not saying that ear candles are that worthy of investigation. What I definitely am saying is that just because they’re “not important” isn’t suddenly license to make conclusions with crappy evidence.
oh crap, nd_n8, I didn’t see that clause. Yeah… wtf.
Alex_Dubinsky, sample size is important, sure, but let’s zero in on an actual test design. How should that test be designed, and what would constitute a success or failure?
First, we need to define what is being tested. Just one thing at a time. And let’s pick one claim of the proponents that you feel has a chance of being true.[ol][li]Define just one claim,[/li][li]How can we test it, and[/li][li]What result would constitute a success or failure?[/li][/ol]Let me try with a very hypothetical example. Claim: ear candling improves hearing. Procedure: have N randomly chosen subjects take an audiometer test, then be subjected to an ear candling procedure, then take the audiometer test again. A similar number of controls will do the same but omit the ear candling part (or be subject to a “fake” procedure, whatever that might be).
Now, before this test starts, what amount of hearing improvement would you require as proof that the candling had a positive effect? 10%? 50%? And how many subjects should have this improvement? And would the hearing ability measured be a composite of all frequencies or just one? If one subject had a decrease in hearing acuity, how would that be taken into account?
Let’s take another example. Claim: ear candles remove ear wax. Procedure: dividing patients into two random groups, have each examined by a doctor for earwax values. (I don’t know how docs define amount of buildup, but there must be some scale, say 1-10). Half of the patients undergo ear candling, the other half, none (or a fake procedure). Then re-examine all ears without the docs knowing which group a patient is from.
If the patients who underwent ear candling have an average improvement of 30% less wax, would that be a successful result? If not, what would be the value you would use?
Remember that an improvement in such a test would have to be statistically significant, and the successful outcome criteria must be defined in advance. So a 3% improvement would not be scientifically valid, and fishing for a positive result in the stats after the test would not be the way to go, unless you want to be laughed at.
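As a back-of-the-envelope companion to “statistically significant,” here is a sketch of the sample-size arithmetic for a two-group comparison, using the standard normal-approximation power formula. The alpha, power, and effect sizes below are my own illustrative picks, not numbers anyone in this thread proposed:

```python
from math import ceil
from statistics import NormalDist  # standard library, Python 3.8+

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate subjects needed per group for a two-group comparison.

    effect_size is Cohen's d: (mean difference) / (standard deviation).
    Uses the usual normal-approximation formula:
        n = 2 * ((z_{1 - alpha/2} + z_{power}) / d) ** 2
    """
    z = NormalDist().inv_cdf
    return ceil(2 * ((z(1 - alpha / 2) + z(power)) / effect_size) ** 2)

# If "30% less wax" amounts to half a standard deviation (d = 0.5):
print(n_per_group(0.5))   # → 63 per group
# A small effect (d = 0.2) needs far more ears:
print(n_per_group(0.2))   # → 393 per group
```

The point the formula makes concrete: a small improvement (like the hypothetical 3% above) demands a very large sample before it can be distinguished from noise.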
Also remember that in the above examples, we have not controlled for the placebo effect, so your proposed improvement values must be improvements over the control group, not just over no test at all. (It’s entirely possible that all test subjects show an improvement because of the psychological factor that accompanies them dudes in the white coats.) Alternatively, and best of all, we would use 3 groups: ears that are candled, ears that are “fake” candled, and ears that have nothing done to them at all (active ingredient, placebo, controls).
Better yet: since each subject is blessed with 2 ears, we could divide the groups by ears, not by bodies. A bonus – we have twice the subjects!
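A toy calculation of why the split-by-ears design buys more than just doubled numbers: each person’s baseline hearing cancels out of a left-vs-right comparison. The variance figures below are purely illustrative, not from any real audiometry data:

```python
# Toy model: measured score = subject_baseline + treatment_effect + noise.
var_subject = 9.0   # spread of baseline hearing across people (illustrative)
var_noise   = 1.0   # measurement noise per ear (illustrative)

# Comparing two separate groups of people, both variance sources matter:
var_between_groups = 2 * (var_subject + var_noise)

# Comparing a candled ear to the same person's other ear, the shared
# baseline subtracts out, leaving only measurement noise:
var_within_subject = 2 * var_noise

print(var_between_groups, var_within_subject)  # → 20.0 2.0
```

With these toy numbers the within-subject comparison is ten times less noisy, so the same number of people yields a far sharper test.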
Your thoughts?
It’s obvious you are not going to change your mind about ear candling. It’d be nice, though, if you developed some inkling of the scientific method and requirements for acceptance of claims in the scientific community. For instance:
- It is incumbent upon those postulating a theory to provide evidence backing that theory. It is NOT incumbent on others to prove that person wrong. Whether the theory involved is rational or totally goofy, the fact that no or insufficient experimental data exists regarding the theory does NOT validate the theory or mean that it must be viewed in a serious light. I’m not making this up. It has long been the accepted basis on which we increase our store of medical and other scientific knowledge.
The ear candlers and their backers/apologists are free to conduct quality studies and publish the results of their research. Assuming that golden day comes when reproducible research emerges to validate the ear candlers’ claims, I will publicly apologize to them in this forum.
Until then, I am free to scoff.
The proponents of woo are fond of saying “Absence of evidence is not evidence of absence”. The counter to this statement is “Absence of evidence means that you haven’t got shit.”
Well put, Dr. J.
Hello … I’ve been trolling the SD boards for years and finally paid for my membership just to point out that I do the opposite of all of the good, Western advice, and have always had excellent hearing, and never have a problem with my ears.
Every day, after a hot shower, I gently clean my entire ear canal with a cotton swab.
I listen to earbuds loud enough to block out ambient sound at work, and go to loud rock concerts fairly frequently (3-5 times per month). We’re talking Rush, Joe Satriani, Mighty Mighty Bosstones, Bad Religion, ZZ Top, Rage Against the Machine.
At age 38, having been going to concerts since I was 16, and having used cotton swabs daily since a child, I’ve only had doctors give me a happy smile when they look in my ears with an otoscope. I’ve never needed an irrigation of any sort, and never get infections due to the lack of wax. Obviously I don’t shove more down into the canal with the cotton swab or, after all this time, I’d have quite the buildup.
My hearing? Superb. I almost always hear sounds other people around me can’t. In restaurants or grocery stores, I can hear the song that’s playing over the din around us, and other people can’t. I can hear quieter noises from further away than anyone I know. I haven’t had an audiometer test since I was a kid, however, so this is all subjective - but I honestly don’t know anyone who hears a sound before I do.
So obviously candling is not for me. I’m just pointing out, in an anecdotal fashion, that there is an “alternative treatment” for wax buildup. I’m sure your doctor would recommend against shoving a cotton swab in your ear, and I do too, but it works for me. The hot shower may have something to do with it, too – the heat and steam probably loosen up the gunk so that the cotton swab can pull it out?
Anyhow, now that I’m a subscriber, you can probably expect to see me posting a bit here and there, so … hi!
Welcome to the SDMB, but I hope you meant “lurking”, not “trolling”.
It’s a subtle but significant difference, my friend.
Most people don’t get enough buildup to be a problem, and that is probably the case with you. For me, this procedure would accelerate the issue and make removal of the impacted plug risky.
Musicat… you’re splitting hairs. Once we get data of what it does (eg, help hearing 5%), then we can bitch at each other over how much anyone should care. I meant it’d be interesting to have a gentleman’s understanding before the study, so that we could really document whose predictions were what (ie, rub the results in each other’s faces). But if we actually get to the point where we know ear candles help hearing 5%, it’d be a whole different discussion.
“Remember that an improvement in such a test would have to be statistically significant” – Right, to be confident there’s a 5% improvement you’d need to do a lot of tests.
“and the successful outcome criteria must be defined in advance.” Really, research should have nothing to do with ‘success’. “and fishing for a positive result in the stats after the test would not be the way to go, unless you want to be laughed at.” I think the problem is the way research is often done, with people working on theses or investigating pet suspicions. Research should be about the data. It’s later that assholes like us should get to dissect it. This is something that really pisses me off: When a paper is written it gets replies and commentary, but after the initial dust it’s only those same, old, unedited, unreflected-upon conclusions of the original researchers that get cited. You, child of the internet, dweller of the message boards, replier to the OPers. You know what I’m getting at here, right?
And as for testing one claim at a time: That’s certainly the way to go to get a confident, but specific result. In a technical science, what you’d have is a mathematical theory with predictions. You’d test just one of those, to the maximum precision of the math itself. If it was wrong, you could tear down half the theory right there.
But when you’re merely investigating, this official scientific method doesn’t make any sense. Certainly by testing one claim you’d get more statistical significance. You could announce to the world your confident, albeit narrow findings. But, assuming ear candles have ten possible claims (or ten formulations or ten application techniques, etc.), you’d barely know 10% of the truth. If in contrast you did a ‘shotgun’ and only got one-fifth to knowing enough about any individual claim, you’d still actually know 20%. You could even confidently say something like, “none of the 10 possible things ear candles could do are done very well.” And that would stand for something in a discussion.
Listen, nothing is incumbent upon anyone. This isn’t white-collar court. Scientists are people scurrying about to find the truth. They’re actively searching for it. They don’t sit still as people come up and try to convince them of things. You’ve got a wrong view of science.
The way it works [ideally] is this: There are a lot of ‘leads’ (ie claims) for a scientist-detective to pursue. Because every lead has something going for it, there’s a meritocracy and the best leads get the attention. The other leads are regarded, by self-conscious investigators who wished they had the time, as “we truly don’t know.” This is slightly different from how you see it, where there are no leads but only allegations.
You and Musicat are both talking about “research,” but you’re talking about different phases of a research process.
You’re talking about an earlier phase, sort of a “let’s mess around in the lab and see if we find anything interesting.” There’s nothing wrong with that, but don’t confuse it with the later phase, which Musicat is discussing, which is, “OK, we found something interesting; let’s construct a rigorous study to see if there’s really an effect here or not.”
Positive results in the earlier phase are nowhere near as persuasive as positive results in the later phase. The very nature of “let’s mess around in the lab and see if we find anything interesting” makes it likely that, if you test enough things, you’ll see some positive and some negative correlations just through experimental variability.
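The multiple-comparisons hazard behind that point is easy to quantify. A quick back-of-the-envelope, assuming (for illustration only) independent tests at the conventional 0.05 significance level:

```python
# If you "mess around" and run k independent tests where NO real effect
# exists, each test still has a false-positive rate of alpha. The chance
# of at least one spurious "positive result" is 1 - (1 - alpha)^k.
alpha = 0.05

for k in (1, 10, 20):
    p_false_hit = 1 - (1 - alpha) ** k
    print(f"{k:2d} tests -> {p_false_hit:.0%} chance of a spurious hit")
    # → 5%, 40%, and 64% respectively
```

So testing ten claims at once (the “shotgun”) has roughly a 40% chance of producing at least one false positive, which is exactly why the exploratory phase can’t be trusted on its own.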
Well, yes and no. You’re stripping any sense of plausibility from the scenario. For example, here’s a hypothesis: If I dig a deep enough hole in my yard, I will eventually find substantial sums of money. Now, I truly don’t know if this is true or false, but my sense of plausibility makes me think it’s highly unlikely to be true. I could detail just why I think it’s unlikely, but hopefully you kind of intuitively understand. The “meritocracy” in this case doesn’t make me “wish I had the time” to test this theory, it makes me think that any time spent on this is time wasted. Now, if my neighbor finds a vein of silver in his backyard, this might change, but for now it’s just a goofy theory that I’ll spend no more time on.
Actually it’s you who has a wrong view of science. The default position of science is not, as you say, neutrality toward any given belief. The default position, given the absence of proof, is absence of belief (note that I didn’t say negativity - just absence). Given a claim for which there is no evidence (such as ear candling), then the rational position is absence of belief. Without any evidence, strong belief in its effectiveness is an irrational position, and it’s not out of order to call it what it is… an unsupported irrational belief. Extraordinary claims require extraordinary evidence. It is, indeed, incumbent upon believers to demonstrate evidence.
And, contrary to your assertion, scientists do sit around waiting for some reason to investigate any given purported phenomenon in the universe of potentially true but apparently silly beliefs. There simply isn’t enough time, energy, or resources to give equal credence to all of them. I don’t have to justify why I do not share your belief that there is an invisible pink dragon in your basement. You have to demonstrate good evidence as to why I should (other than your own testimonial). Likewise with ear candling.
You’re right, I was lurking. I’d posted a couple of posts on my guest account a year ago, and later realized that I had made a post on another guest account in 2006 (as Cheeop) which was quoted in a Cecil article. I doubt those posts qualified me as a troll though.
So now I’m done lurking and am glad to join in the festivities here!