Changes to Randi's Challenge

If you harbor the notion that science is about covering things up, and that scientists desperately cling to their time-honored beliefs even in the face of new evidence, you have watched far too many movies. Hollywood has traditionally been anti-science.

The character who continues to insist that aliens do not exist even after he watches one eat his best friend for lunch is not representative of any real-world mindset. He is a fabrication from whole cloth.

In my proposed version of a fair Challenge, I would stipulate that the Panel could not vet, scout, endorse, or have any sort of personal or working relationship with the applicant, to ensure impartiality and provide separation for the Panel. Then, yes, with that distance, I’m sure that most scientists would be willing to serve as Panelists.

Yeah, some academic might endorse them, but I’m betting they are few and far between.

Believe it or not, I’m not much of a “paranormalist” or anything like that, and my knowledge of researchers in the field is quite limited. Off the top of my head I can think of two, but I wouldn’t presume to enter them into this conversation and subsequently put them on Randi’s hitlist.

Again, presumptive charges of bias are pure nonsense. The test is agreed upon by everyone as a fair demonstration of the claimed ability, with controls put in place to prevent fraud or alternate natural explanations. It isn’t something that Randi or anyone else judges to pass or fail based on his own personal feelings. The panel of judges, in fact, isn’t really anything more than a formality, because the goal of designing the test is to make the outcome, the “win” scenario for the paranormal demonstration, a very clear and simple one. For instance, if you claim that auras visually extend out from the body as a special energy that certain people can detect, then these auras should be visible even when the actual body is JUST covered, so that only the claimed area of the extended aura is visible. If auras really are energy extending from the body in a field that certain people can see, then once this is done, those people should be able to track the position of a person on the other side of a wall by watching their aura move.

There’s nothing ambiguous about that test. Either they can do better than chance at pinpointing the location of the person behind the wall, or they can’t. Randi, or whoever else is skeptical, doesn’t enter his judgment into it. The test itself will make it clear and unambiguous whether the person can really see a field of energy extending out from the body or not.

A thought for how the testing can be improved: to fit proper scientific standards, the test must have some form of peer review process.

The methodology of the test should be reviewed by one or several independent reviewers before publication. The results must then be published in an independent journal, with no self-publication. The editor of that journal should then invite readers to review the article and submit rebuttals.

Unnecessarily complicated. This is a Challenge, not a research project. It’s not an attempt to find out how something works or analyze a new force. It is only a “Sez you? I dare you – prove it!” concept.

Finding out how something works or analyzing a new force can be done once the Challenge has been passed. Spend all the research dollars you can get your hands on, and I’m sure they will be readily available, if you have something that works and you can prove it. First things first.

The only thing the test needs to have is mutual agreement on the procedures and safeguards to preclude cheating. That doesn’t take a lot of effort or resources, just the right kind of resources. It needs a test designed by someone who knows how people cheat and are misled.

Dowsing tests have been run using only plastic water jugs and paper bags to cover them. No electron microscope, CAT scanner or particle accelerator in sight. Cheap and effective.

Uh.

We already have peer reviewed journals, and they get on just fine, thanks.

This is a contest for a million dollars. For the last time: the tests are designed so that all parties agree upon beforehand what outcomes will constitute a pass or a fail so that it’s as unambiguous as possible.

Either the blindfolded girl can read a newspaper when the cracks in the blindfold are blocked out, or she can’t. There doesn’t HAVE to be any review or complex discussion over whether she can or can’t.

But it still requires that the test be carried out properly, and the results correctly analysed. If a test is done wrongly, then it’s meaningless.

My suggestion:

The test is effectively the very test which peer-reviewed journal articles have to pass in the first place: that of statistical significance.

Most articles will not be published unless the results had a less than 1% probability of being arrived at by chance alone. (In some more rigorous subjects a 0.2% significance is required, ie a 1 in 500 chance of sheer lucky results). Now, the thing about scientific research is that other people check the results. And this, crucially, is how some published results have been overturned in the past: they simply didn’t continue to be replicated.

This is common sense really. After all, if a journal with a 1-in-500 threshold publishes 500 articles per year, chances are that the results in one of them will have arisen just by plain luck. But if more and more people replicate those results, what does that mean? It means that the effective threshold gets ever smaller. Within a few months or years, you’re actually looking at results whose “luck probability” is less than one in several million.
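That shrinking threshold is just multiplication of independent probabilities. A minimal sketch, assuming each replication is fully independent and uses the same significance threshold:

```python
# Chance that an original result AND each independent replication all
# clear the significance threshold by luck alone (independence assumed).
def luck_probability(alpha, replications):
    return alpha ** (1 + replications)

print(luck_probability(0.01, 0))  # a single paper at the 1% threshold
print(luck_probability(0.01, 3))  # after three independent replications
```

At the 1% threshold, three successful independent replications already push the combined luck probability to one in a hundred million, well past the “one in several million” mark.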

So that’s effectively what the Randi test should be, and is. To get one-in-ten-million results only requires a day or so in a test involving 10 options with a reasonable turnaround time. And if, after an hour or two, you’re still hovering around only the 20% mark (ie. the probability of your results being pure luck is an enormous 1 in 5), you could bow out with dignity, whatever scientific paradigm you’re up against would remain unimpugned, and everyone would be satisfied that everything was above board.
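The arithmetic behind this is just the binomial tail probability. A minimal sketch (the 50-trial numbers here are illustrative, not Randi’s actual protocol):

```python
from math import comb

def p_value(hits, trials, chance):
    """Probability of scoring at least `hits` in `trials` attempts
    if each guess succeeds only by luck with probability `chance`."""
    return sum(
        comb(trials, k) * chance**k * (1 - chance)**(trials - k)
        for k in range(hits, trials + 1)
    )

# A 10-option forced-choice test: chance is 1 in 10 per trial.
print(p_value(5, 50, 0.1))   # chance-level performance: no evidence at all
print(p_value(20, 50, 0.1))  # a sustained 40% hit rate: wildly unlikely by luck
```

Note that 20 hits out of 50 is still “wrong most of the time”, yet its luck probability is below one in a million.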

Deal?

What is Randi’s hitlist? Who is or has been on Randi’s hitlist? What are the consequences of being on Randi’s hitlist?

At least in my field, the usual statistical standards are the .05 level (5%, or one chance in 20 that the results are due to chance) to be “significant” (and thus publishable) and the .01 level (1%, one in 100) to be “highly significant.”

So what are you even doing in this debate? You’ve admitted you don’t know much about academics, and you evidently don’t know much about research in the paranormal either. Your opinions seem to be highly uninformed.

Oh, please. If they actually are doing legitimate scientific research on the paranormal, they shouldn’t have anything to fear. And I’m pretty certain that your naming them in this thread isn’t going to make them candidates for Randi’s revised challenge if they aren’t already. Give us a frikkin’ break.

Put up or shut up. Name them.

I’m not sure that’s correct. My understanding is somewhat different.

Just suppose that someone claims that a particular food additive causes cancer. Some scientists test this hypothesis: they give one group of rats feed containing the additive and another group feed without it, then see how many in each group get cancer.

At the end of the test, they analyse the results, and it turns out to be pretty much the same as chance. The test fails to show any statistically significant evidence to support the original claim. Surely this too is a result that would be published in a medical journal.
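As a concrete sketch of the kind of analysis such a study runs, here is a simple two-proportion z-test (the rat counts are made up for illustration, and the normal approximation is the crudest reasonable choice):

```python
from math import erf, sqrt

def two_proportion_test(cases_a, n_a, cases_b, n_b):
    """Two-sided z-test for a difference between two incidence rates."""
    p_a, p_b = cases_a / n_a, cases_b / n_b
    pooled = (cases_a + cases_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the normal CDF (approximation).
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# 12 tumours among 100 additive-fed rats vs 10 among 100 controls:
z, p = two_proportion_test(12, 100, 10, 100)
print(p)  # well above 0.05: indistinguishable from chance
```

A p-value that large fails to reach even the 5% significance level, which is exactly the “pretty much the same as chance” outcome described above.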

But the peer review process is still needed. Perhaps the additive really does cause cancer, after all, and the test failed only because it was badly done. Maybe there were flaws in the methodology. Maybe the statistical analysis is dodgy. Maybe the people conducting the test cheated to get the results they wanted. You can’t just accept the results uncritically. There has to be an independent review before it can be accepted.

There is no (significant) analysis required in a Randi challenge test. He’s adamant that anyone can see whether an applicant passed with no more analysis than simply counting. Yes, the test must be designed properly, of course.

There was quite a bit of flak given to them for calling a press conference to announce the results, before the publication of their results.

Yes, sorry, I should have specified: an article which suggests that the null hypothesis be overturned, not just any old article, review or letter to the editor. Our null hypothesis is that food doesn’t give us cancer (otherwise we’d all starve), and so an interesting article is one suggesting that, actually, this or that food does. A journal whose articles consisted of: Bread? Nope. Water? Nope. Cress? Nope. Next month: beans, olives and more for just $19.95! would not survive in the ultra-competitive journal marketplace. Perhaps medical journals are a little out of the ordinary in that respect - null results aren’t commonly published in other scientific fields.

Of course, but if in this case someone genuinely can overturn the null hypothesis (be it “people can’t communicate with the dead”, “the Standard Model forbids a lump of inert gold to project a field detectable by a pair of sticks” or whatever), then they need only carry out the simple 10-option tests with someone they do trust and they’ll become a millionaire with or without Randi’s money. It’s pretty straightforward to achieve 0.000001% significance in a few hours if you actually do have some ability. Heck, you can even get it wrong most of the time and still buck the odds so massively that both scientific journals and Randi himself would have to take notice.

Yes, I’m in physics so my summary is skewed towards more statistically rigorous journals. I guess that means that if your journal contains 20 papers, one of them will likely be pure bollocks!

Assuming that the test is conducted properly, yes.

If the test is conducted improperly, then no.

For example, most of Randi’s tests are done in a few minutes, not hours. They usually consist of just 5 or 10 trials. One mistake, or one lucky guess, will cause a 20% swing in the results. A 0.000001% result cannot show up in Randi’s test.
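Whether that is fatal depends on the per-trial odds. A quick check of the arithmetic, assuming independent one-in-ten trials (trial counts here are illustrative):

```python
# Luck probability of a PERFECT run of n trials, each a 1-in-k choice.
def perfect_run_luck(n_trials, k_options):
    return (1 / k_options) ** n_trials

print(perfect_run_luck(5, 10))   # five perfect trials
print(perfect_run_luck(10, 10))  # ten perfect trials
```

Five perfect one-in-ten trials give luck odds of 1 in 100,000, short of a 0.000001% (1 in 100,000,000) standard; ten perfect trials, at 1 in 10,000,000,000, exceed it. The shorter the run, the more a single miss or lucky hit moves the figure, which is the swing being described.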

That’s why all parties have to agree to it. The whole point of the methodology is that the test carefully isolates the exact claim being made in order to see if the distinct effect claimed can in fact be performed. There isn’t any special or complicated analysis required to judge the things I noted. If a girl can “read” without using her eyes, then she either can or she can’t.

This doesn’t affect my main point.

I’m in Ecology, and I agree with you 100%. :smiley:

Actually, since most papers contain more than one statistical test, the paper usually won’t be pure bollocks, but some of the particular tests will likely give a false result.

It doesn’t matter if all parties agree to it. It could still be wrong.

Independent peer review might spot flaws that Randi and the applicant missed.

Yes, they might. But James Randi is willing to bet a million dollars that they won’t.