No offense, and I don't give people credit based solely on their status, but until you do dig into the statistics, please don't rag on someone else's knowledge of science. It's ridiculous.
Yes, that is a brief take. The data set is public and analyzable. One means of valid scientific investigation is to look at something, say "huh, that's interesting," and then come up with an explanation for it. Man knew about gravity and its effects long before he had a theory to explain it. Further, they are in the business of making formal predictions, and they are very upfront about the fact that they aren't sure what exactly the data means. That does not mean that a) the data is inaccurate or b) they are simply subjecting it to a posteriori interpretation.
If you take any random segment of the data, chances are every single one of the 64 eggs around the world is averaging around .5, both individually and as a group. The significance of the data is that, when these globally impactful events occur, all or most of the eggs, all around the world, produce streams of data containing more 0s or more 1s than any other randomly selected segment of time. Further, the difference is often statistically significant.
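To make that concrete, here is a minimal sketch (my own illustration, not the project's actual analysis code) of how you might score a segment of a 0/1 stream against the fair-coin null of .5:

```python
from math import sqrt, erfc

def z_score(bits):
    """Z-score for the deviation of a 0/1 stream from a fair-coin mean of .5."""
    n = len(bits)
    ones = sum(bits)
    # Under the null hypothesis, ones ~ Binomial(n, 0.5):
    # mean n/2, standard deviation sqrt(n)/2.
    return (ones - n / 2) / (sqrt(n) / 2)

def p_value(bits):
    """Two-sided p-value for that deviation, via the normal approximation."""
    return erfc(abs(z_score(bits)) / sqrt(2))

# A perfectly balanced segment shows no deviation at all,
balanced = [0, 1] * 5000
# while a segment of the same length with a 2% excess of 1s
# already falls below the conventional .05 threshold.
skewed = [1] * 5100 + [0] * 4900
```

The point of the hedge "often a significant difference" is exactly this: any single egg wanders around .5 all the time, and only a deviation large relative to sqrt(n)/2, across many eggs at once, counts for anything.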
This is true - but it is useful for forming theories about what the data might mean, and if those theories can then predict future events, we have valid evidence. And this is exactly what we see: they register predictions when possible, and then analyze the data. Even when a prediction has not been formally registered, they continue collecting data and analyze it to see how it meshes with the current theory about what the data means. They then use this to register another prediction based on the updated theory. Much of science operates in this way.
This is not at all pragmatic; you don't just throw the data away. Any data is fair game for analysis when it is only being used to continue formulating hypotheses. The difference between your drug trial and this is that the drug trial isn't revealing its data for the entire world to see and analyze. You can use anything to formulate a hypothesis so long as you then use the scientific method to test it. That is what is occurring here.
Initially the hypothesis was that collective human thought and emotion had the ability to push these random number streams toward order. Then people started doing after-the-fact analysis of the data and realized that these deviations almost always coincided with events that were about to take place. That calls for an update to the hypothesis, which is what they have done. What must further be done to prove the hypothesis is to collect more data, which they are doing, and to have their studies validated by other scientists. Methods may include double-blind data sets and replication under many other circumstances to rule out environmental effects. Before this happens, however, we need a rock-solid hypothesis with which to make predictions for those studies.
There is nothing wrong with Dr. Roger Nelson's methods, and he is one of very few scientists to make his data completely public along with tools for analyzing it. One of the REGs uses quantum tunneling to generate truly random events; it's possible, but simply not likely, that the data being generated is not truly random. Skepticism is of course welcome and expected; however, nothing in this thread has been articulate enough to qualify as such.
On preview:
What is your purpose in this thread?