Possibly, but the point is, he didn’t apply the same standards to the ones he accepted. Had he interviewed all of them, he might have found equally compelling reasons for rejecting them.
But he only looked for reasons to reject one group, and not the other. He should have applied his methodology consistently.
Do you perhaps imagine that I believe in the starer effect? I don’t.
My point is, paranormal testing is plagued by sloppy methodology and by people who adjust the results to fit their beliefs. Both sides do it when the result doesn’t match their expectations, and both sides deserve criticism for it.
I simply listed the first test as an example of this.
**“Double standards in the application of criticism”**

He looks for reasons to dismiss the people who say yes, but doesn’t do the same for the people who say no.

**“The tendency to discredit rather than investigate”**

Which is the entire point of his test. He starts with the assumption that it doesn’t work, looks for data that fits his own expectations, and is only interested in showing “it doesn’t work” rather than in testing the claim.

**“Presenting insufficient evidence or proof”**
And I am not ignoring the second test. Even if the second test were flawless, that would not fix the problems in the first one.
And frankly, the second test has its own problems, as follows.
When testing any claim there are *at least* three possibilities to consider:

1. That it does not work at all.
2. That it works through normal means.
3. That it works through as-yet unidentified means.
The test design specifically ignores possibility 2 and only tests 1 against 3. Even if we accept that it has disproved possibility 3, that does not prove possibility 1; possibility 2 still remains open.
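To make that gap concrete, here is a toy simulation of the logic, a minimal sketch of my own rather than a description of either actual test. It assumes a subject with no paranormal sense at all who picks up ordinary cues (peripheral vision, sounds, reflections) some of the time; the 30% cue rate, the trial count, and the function names are all invented for illustration.

```python
# Toy sketch (not the actual staring test): a "detector" that works purely
# through normal sensory cues. Removing those cues in a controlled test
# drops performance to chance, which addresses possibility 3 but cannot
# distinguish possibility 1 from possibility 2.

import random

TRIALS = 10_000

def subject_guess(being_stared_at: bool, cues_available: bool) -> bool:
    """Guess whether someone is staring.

    Assumption for this sketch: the subject has no paranormal sense,
    only ordinary cues that leak the right answer 30% of the time
    when they are available.
    """
    if cues_available and random.random() < 0.30:
        return being_stared_at          # a cue gives the correct answer
    return random.random() < 0.5        # otherwise a pure guess

def hit_rate(cues_available: bool) -> float:
    hits = 0
    for _ in range(TRIALS):
        stared = random.random() < 0.5
        if subject_guess(stared, cues_available) == stared:
            hits += 1
    return hits / TRIALS

random.seed(0)
print(f"Everyday conditions (cues present): {hit_rate(True):.1%}")   # ~65%
print(f"Controlled test (cues removed):     {hit_rate(False):.1%}")  # ~50%

# The controlled result (~50%) is consistent with BOTH possibility 1
# ("does not work at all") and possibility 2 ("works, but by normal
# means"). Treating it as proof of possibility 1 skips a step.
```

The only point of the sketch is the logical one: a chance-level score in the cue-free condition looks identical whether possibility 1 or possibility 2 is true, which is exactly the distinction the test design never makes.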