I doubt his math, but I don’t know the proper way to recalculate it myself. Intuitively it seems unlikely to me that an 11/50 run is so flukey as to have less than a 1% chance of happening. Can someone else check the math?
How exactly did he adjust it down? How was the data manipulated?
Anyway, you agree that this test isn’t indicative of the actual power of dowsing, but reflects either a systematic flaw in the test or simply a lucky run in a small sample of trials, right?
I make it 0.6% using straight-up binomial probability, assuming the odds of success on a single trial are 0.1.
(50 choose 11) times 0.1[sup]11[/sup] times 0.9[sup]39[/sup]
Of course, those are just the odds of getting exactly 11 successes out of 50. If you add the (smaller) chance of 12, and the (still smaller) chance of 13, etc., it probably works out to a ~1% total chance of getting 11 or more.
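If anyone wants to double-check without a stats package, here’s a quick Python sketch of that calculation. It’s my own illustration, assuming (as above) a 1-in-10 chance of success per trial:

[code]
import math

n, k, p = 50, 11, 0.1

# P(exactly 11 of 50) = C(50, 11) * 0.1^11 * 0.9^39
exact = math.comb(n, k) * p**k * (1 - p)**(n - k)
print(f"exactly {k}: {exact:.4%}")   # ~0.60%

# Tail: add up the chances of 11, 12, ..., 50 successes
tail = sum(math.comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k, n + 1))
print(f"{k} or more: {tail:.4%}")    # ~0.94%
[/code]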
By the way, if the odds are 1% that someone will have an exceptional result by chance, the odds of at least one person getting that result if you test 50 people are about 39.5%.
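That’s just 1 minus the chance that all 50 people miss it:

[code]
# Chance that at least one of 50 testees hits a 1%-likely result by luck
p_fluke = 0.01
print(f"{1 - (1 - p_fluke)**50:.1%}")   # ~39.5%
[/code]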
The first point you have to realise is that several people took the test. Two of them got results well above the chance level, the rest got what you’d expect.
So, he combined them all together, and stated that they got 22%. This hides the fact that two people got higher than chance. As far as I know, he has never published the individual scores.
But even the 22% figure is obviously above chance. So he combined the results with two other tests he had run. This was a different group of people, making different claims, tested under a different protocol for a different prize. The tests had nothing to do with each other. But when he added those results in, the figure dropped to 13%.
So, he tells the story of how he tested their claims, and they only scored 13%, and failed to get above chance.
That is exactly what I’ve been saying all along. You repeat my own thoughts, as if they were yours, and ask me if I agree with you.
No, I don’t agree with you, it’s you who agrees with me.
Well, yes, the odds of scoring exactly 11 are fairly low, I guess. But if you combine the odds of all of the outcomes that would’ve been above chance, you get a much larger number. It’s not as if the target number of 11 proves anything: anything above 5 would’ve been claimed as a success by dowsing advocates, so it’s really the combined chance of scoring above 5 that constitutes the odds of scoring better than chance. Or, at the very least, the combined odds of 11 or more.
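To put a number on that (my own back-of-the-envelope figure, same 1-in-10-per-trial assumption as above): the chance of beating the expected score of 5 at all is enormous compared with the chance of hitting 11.

[code]
import math

n, p = 50, 0.1   # 50 trials, 1-in-10 chance each; chance level is 5 hits

# P(6 or more of 50), i.e. any score at all above chance
above_chance = sum(math.comb(n, k) * p**k * (1 - p)**(n - k)
                   for k in range(6, n + 1))
print(f"6 or more: {above_chance:.1%}")   # ~38%
[/code]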
This is a fair criticism, but if he never published the individual scores, how do you know two scored significantly above chance?
No, not really. This entire time we’ve been trying to say that the test was not designed to give statistical significance to a small effect; otherwise it would’ve been designed with more trials. You instead scream “bad test! lying scientists!”, and it never gets through that thick skull of yours that several of us have explained the sort of test you’d design if you wanted to try to measure a small effect.
Let’s say that the test had 500 trials instead of 50. Do you think the dowsers would’ve scored 110? If they had 5000 trials instead of 500, would they have scored 1100? Why the difference?
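To make the point concrete: a 22% hit rate that held up over more trials would become astronomically unlikely under chance. A quick sketch (my own numbers, using scipy; the larger trial counts are hypothetical, not real test data):

[code]
from scipy.stats import binom

# Probability of hitting a sustained 22% rate by pure chance (p = 0.1)
for n in (50, 500, 5000):
    hits = round(0.22 * n)                 # 11, 110, 1100
    print(f"{hits}/{n}: P = {binom.sf(hits - 1, n, 0.1):.3g}")
# ~0.94% at n = 50; vanishingly small at n = 500 and n = 5000
[/code]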
You’re too adversarial and hard-headed to understand that we’re not disagreeing on the basic premise; you’re just incorrect on exactly why the methodology didn’t show the results that are intuitive to you.
Since we’re getting into personal-insult territory here, and we’re getting close to breaking Tom’s earlier instructions, I think it best if I don’t respond to you any further.
I just ran it out using Excel. Getting exactly 11 is 0.6%, getting exactly 12 drops to 0.2%, and the decline continues to insignificance from there. The total of all probabilities for 11 or more adds up to 0.9355%, so I support the original calculation.
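Same numbers out of scipy, for an independent check (again assuming a 1-in-10 chance per trial):

[code]
from scipy.stats import binom

n, p = 50, 0.1
print(f"exactly 11: {binom.pmf(11, n, p):.2%}")   # ~0.60%
print(f"exactly 12: {binom.pmf(12, n, p):.2%}")   # ~0.22%
print(f"11 or more: {binom.sf(10, n, p):.4%}")    # ~0.9355%
[/code]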
Anyway, Peter may be right that some sloppy math occurred. Probably best just to concede that trivial point and let it go at that.
Based on his one-trick-pony behavior and failure to listen to moderator instructions, the staff has decided that Peter Morris is no longer allowed to discuss James Randi (including JREF and the JREF Challenge) on this board. As a corollary, other posters are not allowed to try to get Peter Morris to violate this ruling. So the kind of baiting we saw on page one is not allowed.
The subject of this thread is grave dowsing, and the tangents on water dowsing and related topics are over. Cites on grave dowsing are welcome, the other items should be discussed in a different thread.
Any violation of those rules (or of the rules on personal insults) will lead to the immediate closure of this thread.
One of the best overall reports on grave dowsing I’ve ever seen, from the Office of the State Archaeologist, University of Iowa (warning: PDF file). The writer, William E. Whittaker, Ph.D., RPA, not only checks out previous claims and attempts, but also tries dowsing himself to get a first-hand look at the subject.
If cadaver-sniffing dogs have any success rate at all, it puts them miles ahead of grave dowsers.
Edited to add: This article in Slate cites success rates ranging from 22-38% to 60-69%.
Yes. I’ll make this clear ahead of time, though: ATMB is not Great Debates and we’re not going to do a pro and con on James Randi and JREF in that forum. You’re allowed to argue against our decision if you see fit.