I don’t have an argument about that: for that answer I simply turn to the science of statistics.
Distortion. I never said strict statistical standards weren’t applied in medicine and social science (they are not as strict as in the hard sciences, but that doesn’t mean I think those standards are inappropriate for those sciences). What I said was that, by Randi’s arbitrarily high standards, the phenomena discovered in those sciences would be rejected.
Raw sophistry. I ain’t them, they ain’t I, and my arguments are mine and, IMO, correct. Speak to the arguments.
Irrelevant. Nowhere in this thread have I said that Randi “screwed up.” I am talking about the theory behind his standards.
Different argument. Nowhere in this thread have I brought up the issue of “anecdotal evidence.” Speak to the arguments here.
This is an irrelevant point, as I was never questioning the FDA’s standards to begin with.
Which is why drug trials run multiple tests, each with such a strong confidence level, and combine the results to give .05 x .05 x .05, and so on, since otherwise 1 in 20 drugs would be approved merely on lucky evidence. Do you really think that happens? Perhaps more than 1 in 10 billion drugs does get approved that way, but such an incidence is nowhere near as common as 1 in 20.
Why so? Do you think those phenomena fluked it, and that if you just ran heaps more trials, they’d eventually be proven to be statistical blips? I’m still far from sure you get it.
If Randi was running medical tests, he would try each drug on 5 subjects, and demand that 4 of them were completely cured after 1 dose of medicine. The result must be obvious to any observer. No messing about with expert medical opinion making a subjective judgement that a patient’s condition has improved. Only complete cure will do. Any drug failing that test would be rejected.
That’s rather a stretch, isn’t it? The only way the analogy makes sense is if the drug companies have made claims of insta-cure (or something else that seems inherently implausible, much like paranormal claims) and Randi evaluates those claims. If the drug company wants to make the looser claim of simple “improvement”, the standards of the test must also be loosened.
Trouble is, “loose” paranormal claims (“I can do this minor feat, some of the time”) are hard to evaluate and, in my opinion, not worth a million bucks.
Non sequitur. It seems that you’ve taken my point, however: that the odds Randi requires cannot be applied to medicine and the social sciences.
That’s the point, not whether the standards in the social sciences and medicine are actually inappropriate. If I were to have an opinion, it would have to be given study by study.
As we have seen, Randi has a tendency to write people’s claims for them. B.C. claimed an ability that works only 1/3 of the time, and Randi refused to test it. She is forced either to drop out and abandon her claim, in which case Randi declares victory by default, or to change her claim and say she can do it 100% of the time.
I’ve read claims by various paranormalists, and it’s rare that they claim 100% accuracy.
No, you miss my point. If the scientific establishment demanded that medicine and the social sciences meet higher odds, testing would just be done to those standards. Nothing would get inappropriately rejected.
This is the only objective part of the relevant paragraph; the rest is citeless editorialising. And the only objective part of your paragraph doesn’t amount to much.
Randi does not necessarily require “high odds” for his Million Dollar Challenge. However, the “looser” the paranormal claim, the more trials are required to verify that the claim is being met.
If I claim to be able to predict the results of a (fair) ten-sided die roll twenty percent of the time (twice as often as chance), would you believe me if you rolled the die five times and I was correct once? Of course not, because I might have just gotten lucky. Do that same test a thousand times, and if I’ve predicted 200 rolls correctly after that, I’ve got a pretty good claim. Do that same test a hundred thousand times, and if I’ve predicted 20,000 rolls, you can bet your bottom dollar that it isn’t just luck. The more trials you do, the more confident you can be of the results.
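For anyone who wants to check that arithmetic, here’s a quick sketch in Python (my own illustration, not anything from the challenge rules: it assumes a fair ten-sided die and independent guesses, so pure chance is a binomial with p = 0.1):

```python
from math import comb

def chance_of_at_least(hits, rolls, p=0.1):
    """Probability of scoring `hits` or more in `rolls` attempts by pure
    guessing, i.e. the upper tail of a Binomial(rolls, p) distribution."""
    return sum(comb(rolls, k) * p**k * (1 - p)**(rolls - k)
               for k in range(hits, rolls + 1))

print(chance_of_at_least(1, 5))       # ~0.41: one hit in five rolls proves nothing
print(chance_of_at_least(200, 1000))  # effectively zero (well below 1e-15): not luck
```

The tail probability keeps collapsing as the number of trials grows, which is exactly the point: more trials buy more confidence.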
Part of the reason medical trials and social science studies can’t always achieve the higher confidence levels that Randi may demand is cost. Studies and patient trials are expensive. Repeated telepathy trials (for example), however, should be relatively cheap to run.
Part of the reason there needs to be a high confidence level is chance. If Million Dollar Challenge tests were set up with only a 96% confidence level, it wouldn’t take long before someone got lucky and won. The real moral of the story is that, statistically speaking, the weaker the claim, the more tests need to be done to prove it. If you made an extremely weak claim (“I can predict the results of a coin flip 51.3% of the time!”), you’d need oodles of trials to eliminate sheer chance and support your claim.
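As a back-of-the-envelope illustration of “oodles” (my own sketch, using the normal approximation and treating “eliminate sheer chance” as the claimed edge standing about three standard errors clear of 50%):

```python
def flips_needed(claimed_rate, sigmas=3.0):
    """Rough number of coin flips before a claimed hit rate would stand
    `sigmas` standard errors clear of the 50% chance line (normal approx.)."""
    edge = claimed_rate - 0.5
    return (sigmas * 0.5 / edge) ** 2   # 0.5 is the per-flip standard deviation

print(flips_needed(0.513))  # about 13,300 flips for the 51.3% claim
print(flips_needed(0.60))   # about 225 flips for a 60% claim
```

Halving the claimed edge quadruples the number of flips required, which is why the very weak claims are the expensive ones to test.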
(I jumped into this thread in the middle, so please forgive me if some of what I’ve said is redundant, I don’t have time to go back and read the entire thread right now.)
I worked in the drug industry in marketing for two years. I’m not an expert on how they design their tests, but I can comment insofar as I have knowledge, which is not insignificant.
Somebody needs to start a “Bad Statistics” site.
A. You run “multiple tests,” yes, on multiple persons. You get, in effect, one large sample. The concept of “multiple tests” here makes no sense in the context of drug research. (Sure, if a company can’t prove efficacy with the sample size it has, which often happens, it may increase the sample size with a “new test”; but if the protocols are the same, then it’s still the same test.)
B. What’s really egregiously wrong is your idea that the p-values of “multiple tests” could be multiplied by each other to get a multiplicative effect. In reality, the opposite is true: to narrow the confidence interval you need progressively bigger samples, and it is a geometric progression, not arithmetic (cutting the margin of error by a factor of x requires roughly x² times the current sample size). That’s why in political polls they stick with smallish samples and a margin of error of 3%; to get that error down to 1% or so would require mammoth samples.
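In rough numbers (a sketch assuming a simple random sample, worst-case proportion of 0.5, and the standard 95% margin-of-error formula):

```python
from math import sqrt

def sample_size_for_margin(margin, z=1.96, p=0.5):
    """Sample size needed to hit a given margin of error at 95% confidence
    (simple random sample, worst-case proportion p = 0.5)."""
    return (z * sqrt(p * (1 - p)) / margin) ** 2

print(sample_size_for_margin(0.03))  # ~1,067 respondents for a 3% margin
print(sample_size_for_margin(0.01))  # ~9,604 respondents for a 1% margin
```

Cutting the margin of error to a third of its value takes roughly nine times the sample, which is the “geometric, not arithmetic” point.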
If my knowledge were fresher and better, I could explain the above better. Nitpick away. But the basic principles are correct, whereas what you wrote was completely off-base.
By now it’s clear that you don’t know what you’re talking about. But let me add that many drugs do go on the market whose efficacy is later doubted, and a few make it onto the market that are completely unsafe. These things happen precisely because the sample sizes are not big enough, and both drug companies and governments knowingly take the risk. Once the drugs are used by a larger sample (i.e., millions of actual users in the public), then the truth comes out.
I don’t think this thread is concerned with whether the paranormal exists or not. I think there may be only one person arguing against Randi who really believes in the paranormal. Similarly, I don’t think many of us remaining in the thread are really claiming fraud, just a lack of good faith. No pun intended.
Poor old Peter. You’re just so put upon by us dolts. How do you ever cope? It’s just so hard being the only person smart enough to see things as they really are, when surrounded by fools. It makes me sad, just feeling sorry for you given what you have to put up with from us.
Sigh. Sniffle. Sigh.
But my sincere sympathy aside, the problem with your dramatisation is that you made it up. The resemblance to the BC case is only passing.
And I still don’t get the relevance of the fact that paranormalists rarely claim 100%. What percentage do they claim, usually?
Well, Randi’s usual test is: you must score 8 out of 10, or 4 out of 5, to pass.
Instead of that, have enough trials to make a valid statistical analysis possible. Say, about 50 trials in a test. See if the subject can score *significantly* better than chance. Say, a 98% confidence level that it’s not a fluke.
If he passes the first test, test him again to see if he can replicate it, or if it was a fluke.
Repeat the test several times, so that the results either revert to chance levels, or you can be 99.999% sure a paranormal effect has been displayed.
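To put rough numbers on that last step (my own illustration, assuming the repeat tests are independent and each is designed so a non-psychic has only a 2% chance of passing by luck):

```python
def fluke_chance(per_test_alpha, repeats):
    """Probability that someone with no ability passes every one of `repeats`
    independent tests, each with a `per_test_alpha` chance of a lucky pass."""
    return per_test_alpha ** repeats

print(fluke_chance(0.02, 1))  # 1 in 50 could fluke a single test
print(fluke_chance(0.02, 2))  # about 1 in 2,500 after one replication
print(fluke_chance(0.02, 3))  # about 1 in 125,000: past the 99.999% mark
```

Three clean passes in a row leave less than a 1-in-100,000 chance that it was all luck, which is where a figure like 99.999% comes from.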
In short, Peter, what you are saying is that you think that Randi should conduct research to see if applicants have any paranormal ability, rather than conduct tests to see if applicants can do what they say they can do. Correct?