Or other sports ones - shooting a basket from half-court, say.
I’m not sure I follow what it is that you are doing at slightly better than chance, here.
What is it that you are suggesting the people in this study are doing at slightly better than chance rates?
Really? You think that a half-court shot by a human will hit the basket just slightly more frequently than one generated at random?
There’s a distinction between tasks that are really hard, and so are very likely to fail, and tasks that are done at only slightly better than chance.
So shooting a basket from half-court is hard to do, but a player’s ability to shoot a basket from half-court isn’t “slightly better than chance”, because we’re not comparing the shot to a random shot. If we set up a basketball cannon at half-court that fires off random shots, the odds of hitting the basket are near zero. A human shooter performs vastly better than that kind of “chance”; there’s no random control that the shooter is just barely edging out.
A better comparison would be something like picking stocks, where a skilled person could outperform the market, and we could look at whether some people can pick stocks at better than chance. However, if we have ten thousand people trying to pick stocks, we can’t declare that a 99% confidence level is good enough and then conclude that anyone with a record of beating chance at that level has some genuine ability, because out of ten thousand applicants we expect about 100 of them to beat the market at that confidence level by luck alone. So even picking at random, if we run a large number of tests we will see some people score high, and this is expected.
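To put a number on that, here’s a quick simulation sketch; the picker count, the 100 up/down calls, and the cutoff are assumptions of mine, not figures from any real test. Everyone guesses completely at random, and we count how many still clear a one-sided 99% confidence bar.

```python
# Assumed numbers: 10,000 pickers, 100 random up/down calls each.
# Everyone guesses blindly, yet dozens still clear a 99% confidence threshold.
from math import comb
import numpy as np

rng = np.random.default_rng(0)

n_pickers = 10_000
n_calls = 100          # up/down calls per picker
p_chance = 0.5

# Critical value: smallest k with P(Binomial(n_calls, 0.5) >= k) <= 1%.
# (The pmf below simplifies because p_chance is exactly 0.5.)
tail = 0.0
k_crit = n_calls + 1
for k in range(n_calls, -1, -1):
    tail += comb(n_calls, k) * p_chance ** n_calls
    if tail > 0.01:
        k_crit = k + 1
        break

hits = rng.binomial(n_calls, p_chance, size=n_pickers)
flagged = int(np.sum(hits >= k_crit))

print(f"99% threshold: {k_crit} correct calls out of {n_calls}")
print(f"Pure guessers flagged as 'skilled': {flagged} of {n_pickers}")
```

Set the bar at 99% and a big enough applicant pool will reliably hand you a crowd of “skilled” pickers who have no skill at all.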
Sensing what stage of a woman’s menstrual cycle she is at.
The ability to score “slightly better than chance” is pretty unimpressive without rigorous controls to eliminate bias. Like I said, the “pick one bucket out of ten” test could have dozens of subtle cues that would allow some people to pick better than chance. If the test protocol is that the subject should guess correctly 90% of the time, subtle biases can be ignored. If the test protocol is that the subject should guess correctly 10.1% of the time, we need to spend loads of work tightening the protocol. And tightening the protocol doesn’t just mean running lots and lots of tests, because if there is an unrecognized systematic source of error, running more tests just allows the error to accumulate. And if you test 100 people, you should expect about one of them to score better than chance at a 99% confidence level.
There’s a well-known scam where the conman sends out a newsletter wherein he claims he can pick stocks, and will do so for free for his audience to prove he can do it. He sends out 1024 copies. In half, he predicts the stock will go up; in half, he predicts it will go down. Then when the stock goes up or down, he sends out a newsletter to the 512 where his prediction matched reality, and does the same thing. Then 256 letters, then 128, then 64, then 32, then 16, then 8, then 4, then 2, then 1. Now he’s got a mark who is convinced he can pick stocks like a champ; after all, he could only get results like this one time in a thousand! Now he gets the mark to invest a huge amount of money, based on that record. And then toddles off to the Cayman Islands with the cash.
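If you want to see the arithmetic of that scam laid out, here’s a toy sketch; the 1024 recipients and ten rounds come straight from the story above, and the “market” is just a literal coin flip.

```python
# The halving scam as a toy simulation: 1024 recipients, opposite predictions
# to each half, and only the half that saw a correct call gets the next letter.
import random

random.seed(42)
recipients = list(range(1024))
correct_calls_seen = {r: 0 for r in recipients}

for week in range(10):                       # 1024 -> 512 -> ... -> 1
    half = len(recipients) // 2
    told_up, told_down = recipients[:half], recipients[half:]
    market_went_up = random.random() < 0.5   # the conman has no insight at all
    survivors = told_up if market_went_up else told_down
    for r in survivors:
        correct_calls_seen[r] += 1
    recipients = survivors

mark = recipients[0]
print(f"Marks left after 10 rounds: {len(recipients)}")                      # 1
print(f"Correct predictions that mark has seen: {correct_calls_seen[mark]}")  # 10
```

From the mark’s seat that looks like a one-in-1024 track record; from the conman’s seat, it was guaranteed to happen to somebody.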
This is why it’s ridiculous to test hundreds of people for psychic abilities and declare genuine any result that beats chance at the 99% confidence level. If you test thousands of people, at 99% confidence you expect dozens of positive results by luck alone. This is simple statistics.
And yet … these types of tests are run all the time.
What you do is to replicate it. IOW, if you find a guy who performed at that level in one test, let’s see the same guy replicate it a couple of times.
And as it happens, the Randi test protocols specifically allow them to demand replication, for just this reason.
[But this is mostly beside the point anyway. I’ve noted repeatedly that if there are practical problems with testing for paranormal phenomena, that doesn’t make such tests as you have run any more meaningful.]
There’s a pretty clear sensory explanation for this: humans have a sense of smell. If we want to postulate some ESP explanation, we’d have to eliminate the possibility of smell. And women frequently report that they act differently at different stages of their cycles; my wife, for instance, gets headaches. So when she starts acting crabby, I can predict the stage of her menstrual cycle at better than chance. And of course, to do a double-blind study you’d have to conceal the woman’s cycle from herself, so she doesn’t inadvertently convey information to the tester.
Again, it’s not enough to conduct lots of trials to get higher confidence that the results are different from chance. The results could be different from chance due to some unrecognized systematic effect, and if they are, conducting more trials doesn’t help. It just shows that the results differ from chance, and you don’t know why. Recognizing and eliminating these systematic errors is a key part of experimental design, and the less the effect differs from chance, the more important it is.
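To make that concrete, here’s a hedged sketch; the 10.5% figure is purely an assumption standing in for some subtle, unnoticed cue in a pick-one-bucket-of-ten test where chance alone would give 10%.

```python
# More trials don't fix a systematic error; they just sharpen it.
# Assumed numbers: chance is 10%, an unrecognized cue gives a true 10.5% rate.
from math import sqrt, erf
import numpy as np

def one_sided_p(hits, n, p0=0.10):
    """Normal-approximation p-value for scoring above the chance rate p0."""
    z = (hits - n * p0) / sqrt(n * p0 * (1 - p0))
    return 0.5 * (1 - erf(z / sqrt(2)))

rng = np.random.default_rng(1)
true_rate = 0.105   # chance plus a small, entirely non-psychic cue

for n in (1_000, 10_000, 100_000):
    hits = rng.binomial(n, true_rate)
    print(f"{n:>7} trials: hit rate {hits / n:.3f}, p-value vs. chance {one_sided_p(hits, n):.2g}")
# The p-value tends toward zero as trials pile up, yet nothing paranormal is
# happening; the experiment is just measuring the unrecognized cue more precisely.
```

The statistics get more and more impressive while the explanation stays completely mundane.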
That’s irrelevant to the question I was answering (as is the rest of your post).
I didn’t see that data in the article. Tell me, how often did the subjects correctly guess at where in the menstrual cycle a woman was, and how often were they wrong?
They weren’t being asked to consciously guess it. They were tested as to whether they sensed it. This was measured by averages, in things like attractiveness ratings, testosterone levels, lap dance tips etc.
Not every time a guy thinks a girl is attractive is it an indication she’s at a high fertility point. But on average, there is a correlation. And so on.
It does make the tests meaningful. A dowser who can only detect water at a rate slightly better than chance is useless.
Drilling for water ain’t like going to Vegas, where if you put your chip on 17 and the wheel comes up 15, you lose the chip. In real life, in most places if you drill for water you’ll eventually get water; it’s just that sometimes you have to drill deeper, and some wells have greater or lesser flow. So an effect slightly greater than chance is practically useless, because in the real world it just means you don’t have to drill quite so deep, or you get a slightly greater flow than if you’d drilled a few meters to the left. And just looking at the ground you can predict which places are more likely to have water than others: more vegetation equals more water.
And, without those rigorous controls, how the fuck does the dowser know he has this slightly-greater-than-chance ability? Think about how many coins you’ve flipped in your life: if you had the ability to guess right 52% of the time, how would you know?
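For a sense of scale, here’s a back-of-the-envelope power calculation using the standard normal approximation; the 5% significance level and 80% power are choices of mine, not anything from the thread.

```python
# Roughly how many coin flips to tell a 52% guesser from plain chance?
# Standard sample-size formula for one proportion (normal approximation).
from math import sqrt

p0, p1 = 0.50, 0.52        # chance vs. the claimed "slight" edge
z_alpha = 1.645            # one-sided 5% significance
z_beta = 0.84              # 80% power

n = ((z_alpha * sqrt(p0 * (1 - p0)) + z_beta * sqrt(p1 * (1 - p1))) / (p1 - p0)) ** 2
print(f"Flips needed: roughly {round(n)}")   # on the order of 4,000
```

Nobody has kept a careful tally of four thousand of their own coin flips, which is exactly the point.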
How is it irrelevant?
My point is if someone can detect water in a bucket at a rate slightly better than chance, that could be for reasons other than dowsing.
Just like someone who is blindfolded, yet is able to detect printed letters at a rate greater than chance, might have that ability due to some effect other than ESP. Like, you know, peeking through the blindfold.
Hey, if you think there might be something to dowsing, you go right the fuck ahead and do the research yourself. Nobody is going to stand in your way.
You offered this as an example of something that people can do just better than chance. So, what is the rate of doing this correctly, and how much different from chance is it?
How do you know that they were “sensing” her ovulation, by the way?
Hi Hentor. It’s a non-supernatural equivalent to mentally causing a coin flip to come up heads 50.1% of the time.
You’d expect the bullet to hit the target a certain number of times out of 100,000 compared to some control set-ups. My ability to interfere with the path of the bullet using only a perpendicularly thrown softball produces results slightly different than the control set-ups.
You could quibble with the methodology (throwing a softball 100,000 times), but the concept is that a human can use skill to cause a small change in the results of an experiment.
A less time-consuming example:
500 coins fall from random slots in the ceiling of a large room. They should fall to the floor 50% heads and 50% tails. I’m under a glass dome in the center of the room with my arms sticking out. If I can catch a falling penny I’ll put it on the floor heads up. It is, again, a non-supernatural equivalent to exercising a tiny amount of control over an otherwise random process.
To demonstrate a human doing something “slightly better than chance” there needs to be a set-up where pure chance would dictate one result, and that has a slightly different result when the human participates.
I don’t think there’s a voluntary human activity that is “slightly better than chance” in and of itself. I’m not sure what that could mean.
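Here’s a rough simulation of the coin-dome setup above; how many coins are actually within reach is purely an assumption on my part (I used 10 out of the 500).

```python
# The coin-dome example: 500 coins land 50/50 on their own; the person under
# the dome catches a handful and places those heads up, nudging the average.
import numpy as np

rng = np.random.default_rng(2)

n_coins = 500
n_caught = 10        # assumed: coins close enough to catch on each drop
n_drops = 10_000     # repeat the whole 500-coin drop many times

# Uncaught coins land heads with probability 0.5; caught coins are always heads.
uncaught_heads = rng.binomial(n_coins - n_caught, 0.5, size=n_drops)
heads_rate = (uncaught_heads + n_caught) / n_coins

print(f"Heads rate with the catcher interfering: {heads_rate.mean():.3f}")  # ~0.510
print("Heads rate left entirely to chance:       0.500")
```

Slightly better than chance, produced by an ordinary skill applied to a small fraction of the trials.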
I’ve never seen that variation of the trick and would be interested in knowing the technique behind it, too. I’ve seen (and been involved in) tricks where a wad of money shoved into my hands turned out to be worthless paper (misdirection, switch), but not sure how a bent key could end up in somebody’s hand without them feeling it.
What I’m wondering about is what cues you are responding to. If you are trying to throw a softball when you hear the sound of the gun, wouldn’t you hit the bullet 0 times out of 100,000? If you are throwing the softball before the sound of the gun, you ought to be no better than chance.
Ooops, I guess I just quibbled with the methodology!
If we exclude the ones that you are not physically able to catch (i.e., due to distance or speed), then you should perform significantly better than chance, no? For all the pennies that you are able to catch and evaluate as heads or tails, nearly 100% will be placed heads up, no?
I agree with you, which is really why I’m questioning Fotheringay-Phipps’ point.
As I wrote, the study wasn’t structured in that manner.
But according to that article, if a man guesses which women are in their fertile period based on how attractive he finds them, he will pick a higher percentage accurately than he would by chance. Obviously, he would still be very far from perfect.
Again, I think we’re mixing up two things here. There is the claim that dowsing in general doesn’t exist. I’m sure we all agree that James Randi believes this, but I don’t think there is any general claim surrounding the MDC that the challenge by itself is definitive proof that dowsing doesn’t exist.
The purpose of the MDC is to debunk the claims of individuals who are claiming some kind of power, in the same way that Randi exposed Uri Geller and Peter Popoff.
Furthermore, my understanding is that the tests are set up so that the claimant gets a chance to do it his way first, which tends to show a high degree of success, and then repeat the test under strictly observed circumstances. This format tends to show that under non-strictly-observed circumstances, the claimant is either relying on subtle cues of some sort or is cheating.
As far as the fertility test goes, I’m sure that besides the test itself there is some kind of theory about how the fact of fertility is coming through, whether through blushing, or pheromones, or smell, or behavioural hints, or whatever. The men are picking up some explainable sensory data triggered by fertility, and their minds are unconsciously translating it as enhanced attractiveness.
And it seems to me slightly off to describe this as “an ability to detect fertility.” Because what the men are saying is “this woman is more attractive.” Not “this woman is fertile.” That’s not analogous to dowsing.
I don’t remember which book it was - I have a copy of “Flim-Flam!” and it’s not that one.
To whet your appetite, the keys are real, solid, spectator-supplied, ungimmicked keys. You probably can’t bend one with your fingers. The spectator has got those exact keys in their own hand - there’s no switching keys (they could sign the keys or something beforehand) and to quote Uri Geller “I don’t use magnets in my belt or lasers in my hair!!!”
I guess part of the fun is that people think there is no room for trickery - real keys, no stage apparatus, they hold the keys themselves, it’s all right in front of their face, etc.
In that video, Randi showed how, while he was talking to you, he could bend a key by jamming it against the edge of the table in front of him.