The mandatory "Million Dollar Challenge is a fraud" thread

I was actually thinking of multiple large sample-size tests in different geographical areas, age brackets and other variables, all of which are submitted to the final meta-analysis.

Yes, you’re quite right: that was a schoolboy error on my part.

However, if significance at 5% is all one needs to get a drug approved, I would actually be rather horrified (if a pharmacy holds 1000 different drugs, 50 are there on blind luck alone??). And one can certainly understand that if the JREF required only p=.05, then roughly every 20th applicant would get $1M!

This is why the $1M is only winnable by passing different stages. On the first day of testing, one could reasonably attain p=.05 even by perceiving very vague information paranormally in multiple tests with a 10% chance of success. Anybody who achieved this (and nobody has even achieved this yet) would go forward to further tests until, by the end of several days’ or weeks’ testing, the probability of a fluke really would be down to one in a million or some similarly small figure. Even in that case, it would mean that if everyone in the US had a go, 240 people would become paranormal millionaires by luck alone.
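For illustration, here is that arithmetic in a quick Python sketch (the per-stage thresholds are invented for the example – they are not JREF’s actual figures – but they show how independent stages multiply down to one in a million, and where that “240 lucky millionaires” figure comes from):

[code]
from math import prod

# Hypothetical per-stage chance thresholds (invented for illustration only).
stage_p = [0.05, 0.01, 0.002]

overall_p = prod(stage_p)          # independent stages multiply: 1e-06 here
print(f"Chance of a fluke passing every stage: {overall_p:.0e}")

population = 240_000_000           # rough US population figure used above
print(f"Expected lucky winners: {population * overall_p:.0f}")
[/code]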

So do we agree that 1 in 10[sup]6[/sup] is both sufficiently astronomical (while still representing a slight risk to JREF) to avoid chance, and eminently achievable over multiple stages of testing, to represent an acceptable mechanism for proving one’s paranormal ability?

It was quite related to the points you have made so far, Aeschines, vague and sloppy as those points were – which is perhaps what confused you. Should you choose to ignore large sections of my post again, they will simply have to stand there mocking your lack of thesis.

But peer review is not exactly opinion, expert or otherwise. Peer review (notice the second word?) is the procedure by which academic work or research is objectively reviewed (not judged) by other researchers and experts before being accepted for publication or considered for funding. Peer review looks for problems and flaws; it does not judge a work to be “correct” or confer legitimacy beyond question. Typically, if a work passes peer review, a publisher is somewhat encouraged that the work is likely to contain no major problems, and the work may thus be considered for publication (or funding). Need we bring up yet again Pons and Fleischmann and cold fusion?

Tedious non-argument. The Challenge is not under an obligation or even recommendation to undergo peer review. The challenge is run in a scientific manner and in accordance with the scientific method, yes, but the design of the experiments – agreed to by all parties and intended to produce a yes/no result – renders peer review unnecessary. Remember, the testers and the testee work together to establish what will constitute sufficient proof before the challenge.

I don’t need to cite anything so far, unless of course you decide to step up the quality of your discussion. You said this: “it also follows from the nature of the Challenge that the odds against chance for successful completion (of the test as a whole) are at most 1 in 1,000,000.”

Where do you get that figure from? Doesn’t it seem wrong to generalize without even knowing the details of the challenges, or, for that matter, how many challenges there have been and what nature they were? I responded that “I would imagine the odds of success by chance vary with each challenge”, which seems (of course) rather sensible. Perhaps you disagree?

So, either explain where you get your claim from, or do me the favour of not making ridiculously misplaced demands for cites.

And remember, the chances of success by chance are extremely low, yes. That is as it should be. The chances of success by genuine ability should, by experimental design, be much closer to one since the claimant contributes to the design of each challenge. I suspect you are arguing that even with the direct cooperation and input of the claimant, his chances of legitimate success remain abysmal, but I still haven’t seen a coherent argument in favour of such a thesis, if that is indeed what you are attempting to argue.

The great Abe (I like that!) is wondering why you are avoiding all of his rebuttals in order to focus on a sentence that he wrote to summarize the Challenge and its purpose. Perhaps it is time for you to read the great Abe’s response more carefully and note that the great Abe did in fact quote several relevant parts of posts and did in fact address them in a systematic manner. Which is more than the great Abe can say for Aeschines.

Whoa: a vague and meaningless comment that directs me to a line of empty posturing? Is this still GD? You appear to favour self-assertive posturing in this discussion, which I guarantee will get you nowhere fast.

Is it too much to ask, after 6 or 7 pages of the usual paranormalist nonsense, for you to actually state your thesis in a straightforward manner instead of nitpicking minutiae? The dozens of objections aimed at your arguments in the last 3 pages should be of significant help in assisting you with that task. And please find something more original to respond with than “read the posts”, because I’ve already done that twice and all I got out of it so far is a poor opinion of your approach to this problem.

By the way, since you are so fiercely interested in the Randi Challenge and its gross unfairness, you do realize that JREF’s is not the sole instance of a cash prize for paranormal abilities? Are you familiar with the Zetetic challenge? For 15 years, until 2002, Zetetics Laboratories offered a cash prize (it grew to 200,000 Euros) to anyone who could prove any paranormal phenomenon in front of the program organizers. Like the JREF prize and every other reward of this kind, the Zetetic money went unclaimed. One wonders why. It might be a really bizarre conspiracy among physicists, sceptics, and paranormal researchers (all of whom would love to discover something legitimately paranormal*), or it could just be that (gasp!) there is no psi to be tested.

* = not just for the fame and recognition, but also for the intellectual pursuit, and, additionally, imagine the research grants and opportunities.

You said it wrong; good for you that you weren’t actually mistaken. Many who have argued similar points have been.

Being “closer to the odds predicted by chance” has nothing to do with being “qualitative.”

I slammed you for using “oodles” not because the word itself is goofy, but because your post was rife with errors. If the errors were in your phrasing and not your understanding, then good for you.

If by “commonsensical” you mean “clear and correct,” then, yes, you’ll be understood better.

I don’t think there will be that many bad drugs in the pharmacy–at any one time. The reason is that the marketplace (= massive testing on the public) will weed out the losers eventually.

Also, the safety and efficacy issues are separate. The chemical formula of a compound will usually tell you whether the pill will kill you flat dead when you take it, so drug makers aren’t going to market Cyano-Plus[sup]TM[/sup] by accident. Still, mistakes are made: As you know, some drugs interact with others (they can’t test them all), and some drugs end up causing very subtle problems: eye problems (one antipsychotic I studied required a slit-lamp exam at intervals because it could screw up your lenses or something like that) or things in the cardiogram that shouldn’t be there. The whole thing with Vioxx lately is another example; another drug weeded out. Keep in mind, however, that if a drug is useful enough, even if it kills some people it might be kept on the market with a ton of warnings attached to it. Some people don’t realize that even Thalidomide (one of many brand names; chemical name: (±)-N-(2,6-Dioxo-3-piperidyl)phthalimide) is still on the market as a Pregnancy Class X drug (along with Accutane and others): meaning, don’t you ever take it while pregnant, because it will do damage.

That’s just safety, the higher of the two hurdles. Efficacy is even trickier. There are certain classes of drugs that just don’t work all that well. Nasal and ocular steroids for allergies and swelling come to mind. For most people, they just aren’t going to blow your allergies out of the water by themselves. But if they seem to do better than a placebo, then they get the go. I don’t know what kinds of p values would go along with that, but I highly doubt we’re talking p = 0.0001 and that kind of thing, since allergy drugs often get slammed later on for being “barely better than a placebo” (but that could also be about the magnitude of the differential effect, not the presence/absence of the effect).

I need a cite on that: What tests have Randi et al. run in the first stage that allowed for 1/20 odds of a chance success?

Let’s not imagine what Randi would do; let’s cite what he has in fact done. If you’re saying, “Nobody has even achieved this,” then you’re implying that people in fact have been offered tests they could have passed through the power of chance; that’s misleading.

It’s certainly acceptable for certain tasks. Those in which judging is required, and those that aren’t easily quantifiable, won’t get a chance.

Further, it’s not just that Randi demands astronomical odds–he also demands such odds within the first stage: 10 out of 10 with 1 in 10 billion odds. Since psychics, etc., don’t claim such acute accuracy to begin with, the requirement is unreasonable.

Well, this is just it: the actual tests he has run to date are based on what the applicant has agreed to. Clearly, it is quicker and clearer for all concerned if the applicant agrees to as tiny a p-threshold as one can achieve in as short a time as possible: if a dowser claims he could get 10 out of 10, why waste an entire day or week looking for some tiny effect just peeping above the statistical noise? What I am asking you is: would such a statistical basis be acceptable, in your opinion? The $1M Challenge is based on stages, and nobody has to date passed the first stage even though they agreed in advance what that first stage would be.

Given what psychics and remote viewers say they can see/hear/feel, tests can quantifiably be cast in terms of 10 different options like in post #4 of this thread, agreed?
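To make that concrete, here is a tiny Python sketch of the arithmetic (purely illustrative – it isn’t any specific JREF protocol): with 10 options per trial, each consecutive hit divides the chance odds by 10, so you can see how many hits it takes to cross various thresholds.

[code]
def hits_needed(odds_against: int, options_per_trial: int = 10) -> int:
    """Smallest number of consecutive hits whose chance odds reach `odds_against`."""
    hits, chance_denominator = 0, 1
    while chance_denominator < odds_against:
        chance_denominator *= options_per_trial
        hits += 1
    return hits

for label, odds in [("1 in 20 (p = .05)", 20),
                    ("1 in a million", 1_000_000),
                    ("1 in 10 billion", 10_000_000_000)]:
    print(f"{label}: {hits_needed(odds)} consecutive hits")
[/code]

Two hits already beats p=.05, six hits gets you to one in a million, and ten out of ten is the 1-in-10-billion figure mentioned above.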

Incidentally, would you not say that releasing a drug does in fact eventually give rise to statistical significance having a p-value as tiny as that required to win the $1M? I guess we’re just quibbling over the semantics of the sentence “winning the JREF challenge requires evidence far stronger even than scientific medical trials”, which I would suggest is true only under rather misleading interpretations.

Sorry, Abe, but I don’t fancy your method: restating your opponent’s views in such a deliciously twisted and distorted fashion that they must spend much time untwisting them, while their original points sit there, ignored. If I want to take my points and positions to the funhouse, there I will go; I have no desire to rent your set of mirrors.

Yes, peer review means review (= analysis, comment, etc.) by one’s peers (= fellow experts in the field). It means subjecting one’s work to the scrutiny of experts. I should think this would be obvious; it’s practically a priori true.

Who said it did? It’s still a type of judging.

Messy. I don’t think you understand what “peer review” really means. Here’s a Wikipedia article to get you started.

To quote the above article,

Notice the word award. I’m sure there’s an Award for Excellence in Creationist Research out there, but, as it is not peer reviewed (i.e., judged by scientists respected by other real scientists), it’s not respected by other real scientists (here the social system comes into play).

Hey, you’re the one who brought this up, not I, but it’s a nice cache of ammo. Why isn’t the Challenge run under the scrutiny and advice of respected scientists? Why is one of the main, or the main, functionary involved, Kramer, a person that even the skeptics in this thread have recognized as a statistical dunce?

Thank you, Abe, you raise a good point.

So, if dunces on both sides agree, then it’s good science? Maybe such a process could be called “fool review.”

No joke. We’ve been over this 10[sup]10[/sup] times by now.

Oh, don’t worry, you’re doing fine with cites.

It’s pretty damn simple. Since $1M is at stake, offering people anything easier than 1/10[sup]6[/sup] odds means that Randi stands to lose at least a dollar. In principle, Randi shouldn’t offer anyone easier odds if he wishes to keep the contest going.
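A back-of-the-envelope Python sketch of that expected-payout point (hypothetical odds, and assuming an applicant with no genuine ability):

[code]
prize = 1_000_000

# Odds against chance offered to a (hypothetical) no-ability applicant.
for odds in (20, 10_000, 1_000_000, 10_000_000_000):
    expected_payout = prize / odds
    print(f"1 in {odds:,} odds: expected payout ${expected_payout:,.4f} per applicant")
[/code]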

The above is not a “generalization”; it’s speaking to the math that underlies the Challenge. But the odds offered to one dowser were, in fact, 1 in 10 billion.

As for what other odds apply in the various challenges, I’d love to see a database. At the same time, if you wish to go through the rather skimpy resources available on Randi’s website and see what other odds have been in play, that’s fine. The principle still stands that Randi would be a fool to offer anyone anything easier than astronomically high odds.

I don’t. But they will be (or should be) astronomically high in every case, per the reasons cited above.

“Contribute” is a funny word in this context. I think Peter has covered this angle, and I don’t wish to get into it.

I’ve covered this several times in previous posts. Here it is again: The level of acute accuracy (i.e., 1 in 10 billion odds against chance) that Randi requires is not what psychics claim to be able to do in the first place. There are some (in principle) who, daunted by odds like 1 in 10 billion, will never attempt the challenge at all. There are some (in principle) who will apply for the challenge, request lower odds, and be refused. Randi would have to refuse them, since even with odds of 1/10,000, he stands to lose $100. And then there are the wackjobs who actually take the challenge, accept those high odds, and lose.

Insofar as I found anything you wrote to be at all pertinent, I responded.

I think you’ve already won that race.

My first post in the thread lays it all out nice ‘n’ neat. You responded to it, not very well I think, and I responded to you. I suppose we’re finished.

There are more prizes like this? This is interesting information.

Baldercrap. Psi has been lab-tested for nearly a hundred and fifty years, with increasingly stringent controls (i.e., skeptic reviewed and monitored). Now, all those experiments may, at the end of the day, not prove psi exists; I’m not going to argue the point. But if psi is to be proved at all, that’s how it’s going to happen. Your implication that Randi’s Challenge and “reward[s] of this kind” are the standard by which to judge the status of psi research indicates you are either ignorant or dishonest.

As you are well aware, Randi doesn’t accept research after the fact, no matter how good the controls are. He tests individuals, and he tests them under his own principles.

[QUOTE=SentientMeat]
Well, this is just it: the actual tests he has run to date are based on what the applicant has agreed to.
[/QUOTE]

Right, and those that don’t agree, don’t get tested. It’s pretty much Randi’s way or the highway. But Peter has covered that; it’s not my argument.

[QUOTE=SentientMeat]
Clearly, it is quicker and clearer for all concerned if the applicant agrees to as tiny a p-threshold as one can achieve in as short a time as possible: if a dowser claims he could get 10 out of 10, why waste an entire day or week looking for some tiny effect just peeping above the statistical noise?
[/QUOTE]

It’s a subjective matter as to what odds against chance would “really prove” that someone has significant psi power. Randi might genuinely feel that unless someone can perform a task in which 1 in 10 billion odds apply, no psi has been demonstrated. I might believe that 1 in 10 thousand odds are good enough. But if I aver the latter (hypothetically), then there is still the problem that Randi is going to lose $100 on that bet.

The fact that there are stages really doesn’t mean much as far as the statistics are concerned. I think it’s a good idea so that, on the second stage, Randi can anticipate cheating based on what he saw in the first.

No, it doesn’t work that way. The scoring of the tests always has a subjective part to it (what is a hit, what isn’t?), so that no matter how well someone does, the skeptics always claim that there is a design flaw. And they can, since it’s a matter of opinion to some degree.

Aeschines, if I had a loaded die that gave me 6s 40% of the time, I’d feel pretty confident putting it through any statistical test, at any desired degree of accuracy, to show that its results are not what chance predicts – and for significantly less than one million dollars. The more trials, the better I do. The smaller the uncertainty, the better off I am.

Psychics are, in effect, claiming that they have such a loaded die, that their results are non-random. A larger number of trials, and thus a smaller statistical variance, would, if this claim were true, tend to confirm it, not destroy it. The more tests I do with my hypothetical loaded die, the smaller my variance is. The more tests I do with my loaded die, the closer to 40% I can expect my results to be.
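Here is a small Python sketch of that point – an exact one-sided binomial tail, standard library only, with a hypothetical die that delivers sixes 40% of the time. Notice how quickly the probability that a fair die could match that hit count collapses as the trial count grows:

[code]
from math import comb

def fair_die_tail(n_rolls: int, sixes_seen: int, p_fair: float = 1/6) -> float:
    """P(a fair die shows at least `sixes_seen` sixes in `n_rolls` rolls)."""
    return sum(comb(n_rolls, k) * p_fair**k * (1 - p_fair)**(n_rolls - k)
               for k in range(sixes_seen, n_rolls + 1))

for n in (30, 60, 120, 300):
    observed = int(0.4 * n)      # what the hypothetical 40%-sixes die delivers
    print(f"{n:4d} rolls, {observed:3d} sixes: p(fair die) = {fair_die_tail(n, observed):.2e}")
[/code]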

I’m not claiming that the tests are always fair (I’ve not read up on every single one). I certainly wouldn’t claim that Randi and Kramer are always level-headed and cool-tempered. Moreover, I’ve repeatedly said that the Challenge doesn’t and cannot by itself be used to justify dismissing every strange claim that could win the prize.

I will, however, dispute the claim that the tiny chance of the prize being won by statistical randomness invalidates the Challenge. If my loaded die were eligible, it could be tested to the point where the odds of a regular die getting 6s 40% of the time were below any imaginable statistical threshold. Moreover, the larger the number of trials, the better chance I have of showing the non-randomness of my die.

Lowering experimental statistical variance (noise) is always a good thing for an experiment. Less variance does not throw the results into the pit of randomness and doubt; rather, it does the opposite.

If I gather 20,000 people in a stadium, start flipping a coin, and ask everyone to call each flip, with those who guess wrong sitting down after each one, then after 13 flips I can expect somewhere around 1-2 people (possibly none, with more than 3 being statistically unlikely) still standing at the end. If I accept 1 in 10,000 odds as being evidence of paranormal abilities, I would have to conclude that I cannot tell the difference between the gentleman left standing at the end and someone with genuine psychic powers.

Are you comfortable with such an assumption? What if I gather a million people and ask them to call 20 flips? (1,048,576:1). What if I somehow had the attention of everyone in the world and asked for a call for 32 flips (4,294,967,296:1)?
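Here is the survivor arithmetic for those three scenarios in a short Python sketch (the world-population figure is a rough assumption on my part):

[code]
scenarios = [(20_000, 13),            # the stadium example above
             (1_000_000, 20),         # a million people, 20 flips
             (6_400_000_000, 32)]     # rough world population, 32 flips (assumption)

for n_people, flips in scenarios:
    p_survive = 0.5 ** flips                        # chance of calling every flip
    expected = n_people * p_survive                 # expected number still standing
    at_least_one = 1 - (1 - p_survive) ** n_people  # chance somebody survives
    print(f"{n_people:,} people, {flips} flips: "
          f"expected survivors {expected:.2f}, "
          f"P(at least one) = {at_least_one:.2f}")
[/code]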

If I’m comfortable with less than astronomical odds against chance being proof of a power to predict coin flips, then I might as well just assume that psychic powers exist, since odds are that someone will perform as well as I have described in each of the situations described above.

Yeah. As I said in a post or two above, that kind of testing has been done in the lab for a long time. Experimenters claim they have gotten meaningful results; skeptics disagree.

Randi doesn’t do that kind of testing (i.e., long series of trials to verify a significant effect that is not mind-blowingly massive). He’s not equipped to do it, and he just doesn’t.

All true.

But Randi doesn’t do that kind of testing. Ergo…

You know, for someone who has said something along the lines of “You’re showing your ignorance” a number of times in this thread, you’re not doing a good job of hiding your ignorance.

Here are some chemical formulas. Pick out the ones that will “kill you flat dead”:
C[sub]6[/sub]H[sub]8[/sub]O[sub]6[/sub]
C[sub]21[/sub]H[sub]22[/sub]N[sub]2[/sub]O[sub]2[/sub]
C[sub]9[/sub]H[sub]8[/sub]O[sub]4[/sub]
C[sub]8[/sub]H[sub]10[/sub]N[sub]4[/sub]O[sub]2[/sub]
C[sub]18[/sub]H[sub]21[/sub]NO[sub]3[/sub]

Just tweaking your nose, Aes. Take a step back and realize that not everyone here is an idiot.

Trained drug chemists will be able to tell which chemicals are prima facie killers; I didn’t say that I knew. Your chemical quiz is pretty easy to beat with Google, but I’ll just admit ignorance instead.

You know, bro, this is how it is. The skeptics here label you a “believer” and then assume because they have so labeled you that you know nothing whatsoever about mathematics and empirical science. Then, when you are actually correct, they work under the assumption that you must be wrong and then jeer you until you finally shove the truth down their throats.

In this never-ending battle, it’s hardball or nothing.

No, the example is ludicrous. Your test practically requires a winner (yes, when few people remain, you could have all of them sitting down). Hence, the more people you have, the longer you can make the winning series of guesses/predictions.

You might as well claim that the Lotto creates a psychic every time there is a drawing that produces a winner.

We have another entry for the “Bad Statistics” column. But it seems that you have taken my point that the odds at which someone is deemed a success are very much a matter of opinion.

Nope. I am a trained chemist (although, admittedly, I’ve only dabbled in drug discovery), but we’re talking basic chemistry here. A chemical formula tells you nothing about a chemical’s structure, which is inherently linked to its reactivity. For example, the third chemical formula on my list was that of aspirin, but if you look here, you’ll see that there are at least 12 other structures (I say at least because this list is nowhere near comprehensive). Even if a chemist is given a structure, they can very seldom offer more insight than “That methylenedioxy group, such as in Structures 7 and 8, isn’t very stable.” Then again, maybe you need that reactivity. I mean, look at the structure of Viagra. As a chemist, I know that chemicals with a whole bunch of nitrogen atoms are often explosive. But there it is, loaded with six of ’em – two of them are even singly bonded to each other, which is notoriously unstable.

And we haven’t even gotten into stereochemistry, which touches on what you said earlier about Thalidomide.

I’ve hijacked this thread enough. The point is that if what you said is true, then the entire field of “drug discovery” would not be the booming enterprise it is today (3.26 million Google hits for this phrase).

That’s not really what I’ve observed. I’ve seen a lot of people (not just you, obviously) talking past each other, being patently offensive, being obtuse and then refusing to clarify, being ignorant… in other words, just like most GD threads :slight_smile:

With energetic posturing like that, you need a good mirror.

Peer review is just not judging. A judge rules on a matter, declares or determines it based on his opinion or estimation. The Challenge doesn’t employ judges because it does not require a ruling, there being in its stead an agreed test framework, and an agreed range of results that determines whether the exercise is a hit or a miss. Additionally, the judges, if there are any, are the organizers of each particular Challenge, being Randi/JREF and any sceptics or experts collaborating on the test.

Thanks for the link to the Wikipedia article on peer review, and I note that it is precisely what I was talking about. Look:

There’s no judging involved in the process; it is a set of methodological analyses. Peer review combs through a work in order to look for flaws, something I already mentioned a few times.

This doesn’t mean the Challenge is not respected by “other real scientists”, but, as the article indicates, it might be “regarded with suspicion” under certain circumstances. You don’t need peer review to perform a predetermined test and calculation, because the work is very straightforward; the claimant has an agreed target to hit, and he either does or doesn’t. There’s little to debate; if you want to argue that the guidelines of the Challenge are necessarily unfair to claimants with genuine paranormal abilities, and that is why no one has won the million, please go ahead and do that, but I would expect to see some systematic evidence for this.

Partially for the reasons already given in previous posts and repeated immediately above. For the rest, I don’t really know for certain and probably only JREF can settle the matter, but had you actually read your article on peer review, you would have seen the following:

No idea. But I’ve seen only a couple comments about Kramer’s statistics in this thread, and I haven’t seen the material being criticized, nor whether (if there was indeed an error) it is part of a trend or an isolated incident.

Maybe, except that there’s no review, because the results ought, by design, to speak for themselves. Sure, both sides could agree to a flawed experiment knowingly or unknowingly; that’s a possibility (except that Randi does obtain professional advice when designing experiments, though I’m not sure he does so regularly). What’s more important, though, is that Randi is himself the expert here. He’s a sceptic and a professional magician, so he knows how to reduce to an absolute minimum the two factors that account for why psi remains such an issue for some people: trickery and chance.

All that remains after that is to test for an effect. It’s not necessarily worthy of peer review.

Why does this have any bearing on the discussion? The Challenge tests for paranormal effects or abilities, and only rarely takes place in Vegas.

Help me out here, the odds to do what?

I wonder then, how you can on the basis of one datum honestly generalize about the Challenge’s entire body of work.

Why?

Why is that? I still don’t see any reasons. This isn’t about JREF’s bank account balance or the odds of winning, this is about testing claims. Who wins or loses is irrelevant, at least in principle.

And yet they agree to take the test under such circumstances. I also cry “cite” on your depiction of the level of acute accuracy required. In the few tests I have seen, the claimant was asked to perform consistently better than chance, not at a superhuman level of infallibility. Here, by the way, is the framework of “mutual agreement”:

If a claimant feels the required level of accuracy is too stringent (and we have seen no evidence that this happens systematically) then he doesn’t have to go through with it, or can try his luck by requesting a modification to the design of the trial.

Maybe. This is not, however, of great significance. Nor is that 1 in 10 billion figure one I accept as a standard of the Challenge. Please demonstrate that it is.

It sure would be easy to win the challenge if you could define your own guidelines to your heart’s content. But that wouldn’t be a test.

Whether Randi loses or not is irrelevant, and the gambler’s motivation you ascribe to him appears to be conjured out of thin air. For all we know Randi genuinely wants someone to win the Challenge, as he has stated numerous times over the decades.

Demonstrate that those odds are unreasonable please. Perhaps they have been in one or a few tests, but that is a far cry from saying that the tests are routinely unfair (especially since there hasn’t been that serious a dearth of people still willing to take the challenge after these supposedly excessive guidelines were established).

Not that I can see. For example, you didn’t meaningfully address my objections to your original assertions in this thread (the A,B,C list), which I wrote in an attempt to clarify your beef with the Challenge.

You spat out a total of 7 (!) lines that consisted mostly of assertive posturing and virtually no arguments. Glance back and have a closer look (posts 258 and 259 – let me know if you still think those few lines of assertiveness you posted were in any way a serious reply).

Yes, Randi isn’t anywhere near the only one doing this, although his prize is by far the most substantial. Here is a list of several such “sceptic awards”:
http://www.aske.clara.co.uk/AwardList.html

The above page has tallied an aggregate prize for anyone demonstrating paranormal abilities of up to $2,335,000 (and I think there are more awards not listed). It’s easy to pick on JREF because that prize, being the largest, attracts a lot of greedy people and a lot of negative publicity when claimants fail to demonstrate anything.

Sigh… you really didn’t read my posts in any detail, did you? I find it particularly amusing that you should respond with an either-or fallacy in an attempt to insult me. I already addressed these points before you addressed my jocular comment, read up on this:

Being clairvoyant helps my sceptical outlook.

Yes, mutually agreed principles. And why should Randi have to accept research after the fact? The Challenge is very simple: demonstrate paranormal abilities under these conditions at this time. If you don’t, you fail. There is nothing else to it.

Yep, you’re right. I meant that the overall chemical structure (which the drug chemists would understand, and which they put a little picture of in the PDF) and the understanding thereof, not just the chemical formula, would prevent researchers from testing cyanide on healthy volunteers. But I wrote it wrong and got the wrist-slap I deserved.

Yeah. The funny thing is, if you look at “real” scientific boards with “real” scientists “discussing” and “debating,” you see a lot of the same things. It really tells you something about the limits of human reason. My personal policy is never to say anything I don’t mean (i.e., rhetorical zingers that are not accurate) and to play as fair as possible. Yet I am sure, as I am human, that I make many errors in that area.

Not so ludicrous as you’d think. I’ve heard that similar methods have been used by some paranormal researchers to quickly select people with possible psychic powers from other volunteer subjects. The researcher stands in front of the room, asks people to call 5-6 flips, those that remain standing are possible test subjects. I’ll attempt to find a cite for this when I’m on a decent computer again (tonight or tomorrow).

I don’t know how many people the JREF has helped to test, and I wouldn’t even try to guess how many people they are going to test during their lifetime as a foundation. But we can both agree, for instance, that if they last for the next 200 years (unlikely) and test, say, 200,000 people (also unlikely), a test with a chance success rate of, say, 1 in 100,000 could conceivably produce false positives.
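Here is that back-of-the-envelope arithmetic in Python, using those same hypothetical numbers (200,000 applicants and a 1-in-100,000 chance-success rate per test):

[code]
applicants = 200_000          # hypothetical number tested over the JREF's lifetime
p_chance = 1 / 100_000        # hypothetical per-test chance-success rate

expected_false_positives = applicants * p_chance            # = 2.0
p_at_least_one = 1 - (1 - p_chance) ** applicants           # roughly 0.865

print(f"Expected chance winners: {expected_false_positives:.1f}")
print(f"P(at least one chance winner): {p_at_least_one:.3f}")
[/code]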

Yes, it’s absolutely a matter of opinion, in the same way that the X Prize declared a certain height above the earth’s surface to be the goal for their prize. I might (hypothetically) feel that such a suborbital flight as was made shouldn’t qualify for what is intended to be a prize for getting people out into space, but that’s really not my decision to make. But, then, I didn’t claim before the prize that consumer space travel was impractical, and after the prize, I’m not claiming that consumer space travel is practical. Both of those remain to be seen.

The JREF Challenge is what it is and no more. It’s not a litmus test that all things paranormal must pass before being accepted into a scientific community. However, against the charlatans out there that steal people’s hard-earned money, promise the moon, and deliver nothing but platitudes, it’s a useful barb in a skeptical arsenal.

The last two times we discussed the paranormal, I spent entire days addressing every single argument and study you and SnakeSpirit threw at me. I was attempting to show why the above presentation of paranormal inquiry --one often cited by people who appear to have a problem with the sceptical method-- is deliberately misleading. Heck, you even grudgingly conceded the point back then, what’s happened since?

Experimenters (and several other people) claim lots of things. No paranormal claim I am aware of in any scientific circle has thus far endured to a meaningful degree the rigorous erosion of peer review, replicability, and time.

I have no problem labeling such a method BS, at least at first glance. Even though a (hypothetical) good psychic might do better than average in such an environment, the chance that a good psychic is among the people being tested would seem to be so low as to make the attempt worthless. Then there’s the simple fact that even a good psychic could be knocked out of the running at any time, simply because s/he got one wrong answer earlier in the game.

That’s right, it certainly could.

Correct. Hence, my problem is not so much with the Challenge itself but with the conclusions people tend to make about it and its results.

And one that is misused in nearly every thread it can be here on SDMB.