Bible as proof? you kid right…
To some, the Bible is proof that we should handle serpents, especially poisonous ones, in church. It is subject to interpretation and therefore cannot represent truth. One sect after another argues that it is the way and that the Bible proves it. What that actually proves is that the Bible is proof of nothing.
And one that I’m surprised you find acceptable. Especially since it’s useless. If Newton had resolved doubt about gravity, there’d have been no need for Einstein’s work. If science could prove a thing is true, then once you’re done with your experimenting, the issue would be resolved. Scientific inquiry and experiments continue precisely because nothing is proved true by them.
That argument is beneath you. The state of the coin, if it is unfair, will modify the assumptions (axioms) of the analysis, yielding a different result. Plus, the point remains that only analytic argument can tell you about a standard truth. In fact, exactly what you do when your experiments are finished is formulate an analytic argument based on what you’ve observed.
As I said before, scientists in the old days knew this. Einstein’s Relativity was proved the moment he drew his conclusion, because he formulated it deductively. The only thing science could do was test whether his conclusion was false. (But not whether it was true. It was true, is true, and will always be true, scientific inquiry notwithstanding.)
Doubt is never resolved permanently, of course.
The argument would be beneath me, only you’re not understanding it. The point I was making is that sometimes an analytic argument in the real world fails, because it leads you to miss obvious truths that are obscured by the underlying axioms.
Daniel
Check your own posts. Nobody presented the Bible as proof of anything.
You used the word evidence in the post responded to, not proof. Perhaps if you responded to points actually being made it would save some time.
What, you think that, within fundamentalist Christianity, the Bible is not proof? you kid right…
I’m not claiming atheism is science.
What I’m attacking is the fallacy of selectively treating those inspired by what they understand as science as somehow fundamentally different from those inspired by what they understand as their religion.
Take Christianity and the Spanish Inquisition. There is no doubt that the Spanish Inquisition was inspired by religion - that’s what those running the Inquisition themselves claimed! Now, I’m not a Christian, but it seems to me that if I were, I could make a darned good case that the Spanish Inquisition was a perversion of everything Jesus ever stood for.
Take Communism. It stands in the exact same relationship to “science” as the Spanish Inquisition does to “Christianity”. Those who advocated Communism clearly claimed it was wholly scientific, and indeed “Marxist theory” is still viewed as a “scientific” way of analysing human history by many.
Now, you are welcome to argue that Communism isn’t real science. I’d even agree. But by the same token, the Spanish Inquisition isn’t real Christianity, either!
Same goes for a whole host of “science inspired” evils - Eugenics, Scientific Racism, the lot.
It strikes me that those who advocate science as wholly good and religion as wholly bad have a “No True Scotsman” way of looking at the issue - they simply define the “evil acts” as, by definition, the work of those perverting science. Yet the exact same can be said of “evil acts” pursued in the name of religion.
Science permanently resolves doubt with respect to what is false. For example, there can never be any doubt that pre-Copernican models of the solar system are false.
Maybe we’re not understanding one another.
I have no problem if you use science to decide what your axioms are. All axioms are based on observation and experience. But axioms don’t prove anything. “Axiomatic” and “true” are not synonyms. An analytic argument that fails in the real world is called “unsound”. But when its axioms are true and its formulation is valid, its truth is undeniable. The real world will always jibe with a sound argument. If ever it does not, then you need to figure out what went wrong with your observation. Optical illusions are one example. Your senses can fool you, but logic can explain to you why you were fooled.
Not even that. It is conceivable in some circumstances that the experiment falsifying a theory or hypothesis wasn’t done correctly, and that another attempt might succeed. For certain things, like pre-Copernican models of the solar system, we’re very close to absolute falsification. But there are plenty of cases where the level of evidence for falsification is not nearly so good.
I just took a little refresher statistics class, so I’m up on this stuff. One problem here is that you are using probability, which forecasts, instead of statistics, which looks at the results of an experiment.
What you really need to do here is to have a null hypothesis that the coin is fair, and, along with that, an assumed distribution of heads and tails. Given that, you can find the probability that a fair coin would yield 1,000 heads. (The software I have to do this is at work.) Getting 500 heads and 500 tails would yield a very high (but not 1) probability the coin was fair, as would 499 heads and 501 tails. The probability of the null hypothesis being true tails off, and there is no absolute place where you can “prove” anything about it.
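For concreteness, here is a minimal sketch in Python of that kind of fair-coin test. The function names, and the choice of an exact two-sided binomial test, are my own illustration, not the software mentioned above:

```python
from math import comb

def binom_pmf(k, n, p=0.5):
    """Probability of exactly k heads in n flips when P(heads) = p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def two_sided_p_value(k, n):
    """Exact two-sided p-value under the null hypothesis of a fair coin:
    the total probability of every outcome at least as extreme
    (i.e., no more probable) than the k heads actually observed."""
    observed = binom_pmf(k, n)
    return sum(binom_pmf(i, n) for i in range(n + 1)
               if binom_pmf(i, n) <= observed)

# 500 heads in 1000 flips: as unsurprising as it gets, p is essentially 1
print(two_sided_p_value(500, 1000))
# 1000 heads in 1000 flips: p is astronomically small, reject the null
print(two_sided_p_value(1000, 1000))
```

Note that 499 heads and 501 tails would give nearly the same p-value as the 500/500 split, matching the point above: the p-value tails off smoothly, with no absolute cutoff.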
As for alternating heads and tails, the hypothesis says nothing about order. You can create models of coin flips that will tell you what an expected ordering is from a fair coin, and give you the probability that this comes from a fair coin, but that is a lot more complicated than just measuring the count of heads and tails.
Hey there. We butted heads on this before, where I somewhat explained my problems with null hypothesis significance testing, but I’m always up for more butting of heads. I’m not familiar with the particular distinction you are drawing between probability and statistics, but in the following, you seem to refer to probability often enough.
What do you mean by “an assumed distribution of heads and tails”? Do you just mean a probability distribution, where each flip is independent and equally likely to go heads or tails?
Assuming you mean the probability of the first 1000 flips being heads, given a fair coin, isn’t this just 1/2^1000?
As I explained in the previous thread, null hypothesis significance testing cannot, in itself, give us any information about the probability of the null hypothesis; in particular, p-values are not the same thing as the probability of the null hypothesis.
Well, I still don’t see how you can determine anything about the probability of the coin being fair from the observed data without some implicit assumed a priori probability for the coin being fair, some implicit assumed probability distribution for an unfair coin giving various outputs, etc. Suppose I flip a coin and it comes up heads 50 times. What’s the probability the coin was fair? Suppose I flip a coin and it comes up heads and tails in alternation 50 times. What’s the probability the coin was fair? Suppose I flip a coin and it comes up HTHHHHTHTTTHTH…TH, of length 50. What’s the probability the coin was fair? If you say it depends on what we take to be the null hypothesis, are you saying that the probability of the coin being fair depends on the words I muttered to myself before the observed data?
Perhaps I shouldn’t use the word “implicit” here, since my whole point is that one needs to be explicit about these in order for one’s reasoning to be seen as mathematically sound. Though I don’t think you are even thinking about these at all.
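To be concrete about what being explicit would look like, here is a toy Bayes calculation. Everything in it - the two-coin model, the 0.9 bias, the 50-50 prior - is an assumption invented for illustration; with those assumptions stated, "the probability the coin is fair given the data" becomes well defined:

```python
def posterior_fair(flips, prior_fair=0.5, biased_p=0.9):
    """Posterior probability that the coin is fair, given a flip string.
    Toy model: only two candidate coins exist, a fair one and one with
    P(heads) = biased_p; prior_fair is the prior weight on the fair coin."""
    heads = flips.count('H')
    tails = len(flips) - heads
    like_fair = 0.5 ** len(flips)
    like_biased = biased_p ** heads * (1 - biased_p) ** tails
    evidence = prior_fair * like_fair + (1 - prior_fair) * like_biased
    return prior_fair * like_fair / evidence

print(posterior_fair('H' * 50))   # 50 straight heads: posterior near 0
print(posterior_fair('HT' * 25))  # alternating: posterior near 1
```

Under this particular model only the head count matters, so getting a different answer for the alternating sequence would require building order-sensitivity into the alternative hypothesis - which is exactly the point: the answer depends on the assumptions you write down.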
In that thread, I gave a link to Jacob Cohen’s article on the problems and misapplications of null hypothesis significance testing, but that link no longer works. This link does, though. The most important thing to pull from that is what’s mentioned in the opening abstract, the need to avoid “the near universal misinterpretation of p as the probability that H0 is false, the misinterpretation that its complement is the probability of successful replication, and the mistaken assumption that if one rejects H0, one thereby affirms the theory that led to the test”.
I think all this talk about the statistical models is kinda missing the point. Imagine a real-world setting in which someone offers you tasty odds–say, 2:1–if you bet that the next time they flip a coin, it’ll come up tails. You take the bet, and it comes up heads. So you look at the coin, and it sure looks fair to you. You take the bet again, and it comes up heads again. And again. And again. And again. And again.
At what point do you stop taking the bet, despite every examination of the coin indicating that it’s a fair coin?
Personally, I don’t take the bet the first time: anyone that offers me those odds is a Shady Character. But if we set that aside, there still comes a point where it’s insane to continue taking the bet, because despite what the mathematical model tells you, it’s obvious something’s screwy, and I can guarantee you that coin is going to come up heads the next time you take the bet, 50/50 odds be damned.
I remember some saying from a college lecture along the lines of, “When theory and data fight, data wins.” That kinda gets at what I’m saying.
Daniel
I agree with you about what to do regarding the bet, because if I were to be explicit about my prior probability distributions regarding coins in the real world, I assign a higher probability to a coin coming up heads 1001 times than to it coming up heads 1000 times followed by a tail, and that sort of thing. I’m just saying that one should understand that one is making this assumption.
Incidentally, I wouldn’t call this a case of theory losing to data; I’d call this a case of having to understand that theory only works when it’s the right theory; i.e., the theory whose premises match the data. If you take as a premise of your theory that the coin is fair, no wonder it will give you wrong results when up against an unfair coin. But if you just take as a premise of your theory that the coin follows whatever inductive laws, then you can apply your theory perfectly well in whatever inductive-law-following universe. (Of course, there remains the assumption that we live in an inductive-law-following universe, but most of us are happy to grant that, I suppose).
Indeed–what I’m suggesting is that your previous theory was that the coin was fair (based on whatever observations led you to that), but that when you encounter data contradicting that theory, it’s the theory, not the data, that has to change. The data wins.
Daniel
Fair enough. The probability I’m referring to is that of the results being due to chance - in other words the probability that the null hypothesis is correct, which is something we do not know for certain. It is very different from the number of heads and tails we get.
In this case, that is the distribution we are all assuming. In other cases, the distribution may be different. There are ways of testing to see which distribution fits the data the best. If the null hypothesis was that the coin was fixed, the distribution might be very different, for example.
Now that usage of probability is okay. Before you toss, the probability of getting this result is 1/2^1000. However, the probability that the coin is fair given that result might be different. I’d have to type the results into this program to be sure.
And I agree totally. The p value merely gives the probability that the results obtained can be explained by the null hypothesis, and says nothing about the probability of the null hypothesis itself.
The first thing we need to do is to define fair. The usual description of fair, given the difficulty of making a coin take on a given pattern, is a distribution of heads and tails around 50% of each - not that we expect to get 50% of each except after an infinite number of trials (and probably not even then). Your result is just as probable as any other result, and the chance of the coin meeting the null hypothesis in this experiment is very good.
Now, if you got this result, and you were suspicious, you could define another experiment, where the null hypothesis was that the coin was crooked. You can define the probability of getting the exact same result (1/2^1000) or define some sequence of results. I mentioned in my last post how you go about this - to my logic designer brain, it is a lot like defining a state machine.
One big sin in interpreting experimental results is to latch onto a result you weren’t testing for and giving it as the conclusion. For instance, say you were giving 100 people ESP tests. No one got much better than chance, given the number of people, but someone got them all wrong. You can’t say ESP has now been proven. You have to retest that person, at least, and see if it will happen more than once.
So, the coin seems to be fair by the definition of fair you used before you started, but you don’t have to stop there, and can retest it for fairness under another definition.
This shows the importance of formally defining what the theory says. If the theory says the fair coin will give results that match an even distribution of heads and tails, with given variance, the results don’t contradict the theory at all. You’re working on an informal definition of fair as not surprising. I think Indistinguishable in the last thread correctly said that every result was just as surprising. Yes, this result would be surprising, but if a retest gave a totally different result, would you still consider the coin as not being fair? You don’t have enough information!
If you wanted to retest, you can find how many tosses you’d need to confirm the null hypothesis to a given level of confidence.
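As a rough sketch of that sample-size calculation - the usual normal approximation; the parameter names and default values here are my own choices, not anything from the thread:

```python
from statistics import NormalDist

def flips_needed(delta=0.05, alpha=0.05, power=0.8):
    """Approximate number of flips needed to detect a bias of `delta`
    away from P(heads) = 0.5 at two-sided significance `alpha` with the
    given power, using the normal approximation to the binomial."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_power = NormalDist().inv_cdf(power)
    # a single flip's variance is at most 0.25 (attained at p = 0.5)
    n = 0.25 * (z_alpha + z_power) ** 2 / delta ** 2
    return int(n) + 1

print(flips_needed())  # detecting a 0.55 coin takes on the order of 800 flips
```

The point the sketch makes plain: the tighter the bias you want to rule out, the number of tosses grows as 1/delta², so "confirming fairness" is only ever confirmation down to some stated resolution.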
Agreed–the problem, of course, is that in the real world, you’ll almost never do this adequately on the first go-round. And there’s no need to: for the most part, quick-and-dirty theoretical phrasing is good enough for real-world situations. When it’s not, then you rephrase the theory.
Daniel
That’s not true, though. When I flip a coin 10 times, I’m equally likely to get each of these two results:
HTHHTHHTHT
HHHHHHHHHH
Anyone older than five is going to be unsurprised at the first result and astonished (and probably suspicious) at the second result. Do you agree or disagree?
What makes the second result remarkable isn’t that it’s more unlikely than the first: it’s that it’s one of very few results that is meaningful to our human eyes. What’s unlikely is that a meaningful result would come up.
In other words, don’t compare HHHH… to HTHHT… : compare the meaningful results (all heads, all tails, and maybe a half-dozen others such as 5 heads followed by 5 tails) to all the other results. The chances of getting one of the meaningful results are less than 1% (I think it’s 1 in 128, given my generous definition of meaningful).
Daniel
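Daniel’s tally can be checked directly. The particular set of “meaningful” 10-flip patterns below is a guess at his “all heads, all tails, and maybe a half-dozen others”, chosen to illustrate how the 1-in-128 figure comes out:

```python
# A hypothetical set of "meaningful" 10-flip sequences (a guess at
# "all heads, all tails, and maybe a half-dozen others"):
meaningful = {
    'H' * 10, 'T' * 10,                      # all one face
    'HT' * 5, 'TH' * 5,                      # strict alternation
    'H' * 5 + 'T' * 5, 'T' * 5 + 'H' * 5,    # five then five
    'HHTTHHTTHH', 'TTHHTTHHTT',              # alternating pairs
}
total = 2 ** 10  # 1024 equally likely sequences of 10 flips
print(len(meaningful), '/', total, '=', len(meaningful) / total)  # 8/1024 = 1/128
```

Each individual sequence is equally likely; it is only the smallness of the "meaningful" set relative to the 1024 possibilities that makes landing in it surprising.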