“Have Scientists Finally Discovered Evidence for Psychic Phenomena?!”

That’s from a description of early EEG experiments made by Alfred Loomis in the late 1930s, from the book Tuxedo Park by Jennet Conant.

Is it possible that Bem simply forgot about or didn’t adequately account for this effect? (I’m not plowing through a 62-page PDF, but I hope somebody has.)

Can you explain what exactly you mean by a positive and a negative control, and what the difference is?

Just as an example, suppose there is a claim that food additive XY123 causes cancer. You wish to test this claim.

You run a test with 5000 lab rats, giving them food with XY123 in it. You also have a control group of 5000 rats kept under exactly the same conditions, except that their food does not contain XY123. The lab assistants who hand out the food are “blind” in the sense that they don’t know which group is which.
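To make the setup concrete, here is a minimal sketch in Python of how that treated-versus-control comparison might be analyzed. The cancer counts are hypothetical, and Fisher’s exact test is just one reasonable choice of test:

```python
# Minimal sketch of analyzing the rat experiment above.
# Only the group sizes come from the example; the cancer counts are
# hypothetical numbers for illustration.
from scipy.stats import fisher_exact

n_per_group = 5000
cancer_treated = 60   # hypothetical: rats fed XY123 that developed cancer
cancer_control = 45   # hypothetical: control rats that developed cancer

# 2x2 contingency table: [with cancer, without cancer] for each group
table = [
    [cancer_treated, n_per_group - cancer_treated],
    [cancer_control, n_per_group - cancer_control],
]

odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")
```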

Would that control be a positive, or a negative one? And what is the opposite control that you would also need?

Well, our experiments are usually a bit more prosaic than that.

We are performing a test to see if a particular sequence of DNA is in our sample. If it is there, it shows up as a band on the gel. The test is prone to contamination and false positives, so the negative control is a test sample with all the reagents but no DNA. If a band appears here, we know we had contamination and throw out the results.

However, if bands appear, how do we really know it is what we think it is? We run a positive control that is spiked with the DNA. If this doesn’t come up as positive then we know something is wrong, even if some of the samples come up positive as well.
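In code, the decision logic looks something like this (a sketch with invented names, not any particular lab’s software):

```python
# Sketch of the control logic described above: check both controls
# before reading any of the sample bands.

def interpret_assay(negative_control_band, positive_control_band, sample_bands):
    """Return per-sample calls, or None if the whole run must be discarded."""
    if negative_control_band:
        # A band with no DNA added means contamination: throw out the run.
        return None
    if not positive_control_band:
        # The spiked DNA failed to show a band: the assay itself didn't
        # work, so negative samples can't be trusted as true negatives.
        return None
    return ["positive" if band else "negative" for band in sample_bands]

# Example: clean controls, two samples
print(interpret_assay(False, True, [True, False]))  # ['positive', 'negative']
```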

But the band appeared at the expected size, right? I don’t really see the importance of the positive control either, when the experiment worked.

If you click the link to the follow-up blog post at the bottom (PSI research: What do these numbers really mean), you get what I find to be a convincing put-down of the research.

Because if the one sample (the positive control) that is supposed to come up positive doesn’t, we can’t trust the assay. If some samples come up negative, how are we to say whether they are true negatives or failures of the assay?

In the larger sense, we want our assay to be consistent every time we run it. If the controls don’t work, then the assay isn’t consistent.

This is how good science is done.

That’s it? He didn’t correct for multiple testing? How ridiculous.

Hmm. But the sample is positive, so it’s not an assay that produces only negatives.

I think the positive control guards against a very specific error: that the whole assay didn’t work, and a different factor by chance happened to produce the same band you had expected. Depending on the setup, this doesn’t seem very likely.

Even with the positive control, other errors are still possible. Something other than your expected factor could have caused the band in your sample, even while the positive control shows its positive band for the right reason. If you want to be more certain of the result, you could also test it a different way instead.

The samples can be positive or negative. Therefore, if the positive control is negative, we can’t trust any samples that test negative.

And we do test and confirm by multiple different methods.

Including that one.

There is so much wrong with this sentence it makes my brain hurt. Not that this discredits the research reported in the article, but it suggests that the author lacks familiarity with general science. This may help to explain why he believes that the correlations found in the experiment are comparable with the correlation between using condoms and not contracting HIV.

I postcognated the precognition of that paper’s publication. Postcognition is much more difficult than precognition. It requires the ability to postdict the prediction of something before its occurrence. Those researchers have nothing until they explain the mystery of postcognitive prediction. Or maybe they just have nothing to start with.

She.

This article pisses me off. If you test for 6 things at the 5% significance level, there is a good chance (1 - 0.95^6 ≈ 26%, if the tests are independent) that at least one of them will come up nominally significant by pure chance. To guard against this, you should use the Bonferroni correction:

per-test significance level = overall significance level / number of tests

As I understand it, the article just proceeds to ignore this and talks about stupid psychic babble.
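To illustrate the arithmetic (the p-values below are invented; only the six-tests-at-5% setup comes from my point above):

```python
# Illustration of the multiple-testing point above. The p-values are
# invented; the Bonferroni rule itself is standard.
p_values = [0.03, 0.20, 0.45, 0.61, 0.08, 0.74]   # six hypothetical tests
alpha_overall = 0.05
alpha_per_test = alpha_overall / len(p_values)     # Bonferroni: 0.05 / 6

nominal = [p for p in p_values if p < alpha_overall]
corrected = [p for p in p_values if p < alpha_per_test]
print("nominally significant at 5%:", nominal)     # [0.03] looks like a hit
print("significant after Bonferroni:", corrected)  # [] -- nothing survives

# Chance of at least one nominal hit among 6 independent null tests:
print(f"false-alarm probability: {1 - 0.95**6:.2f}")  # ~0.26
```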