A Machine That Predicts The Future!(?)

No offense, and I don't give people credit based solely on their status, but until you dig into the statistics yourself, please don't rag on someone else's knowledge of science. It's ridiculous.

Yes, that is a brief take. The data set is public and analyzable. One valid means of scientific investigation is to look at something, say "huh, that's interesting," and then come up with an explanation for it. Man knew about gravity and its effects long before he had a theory to explain it. Further, they are in the business of making formal predictions. They are very upfront about the fact that they aren't sure what exactly the data means. That does not mean that a) the data is inaccurate or b) they are simply subjecting it to a posteriori interpretation.

If you take any random segment of the data, chances are that every single one of the 64 eggs around the world is averaging around 0.5, both individually and as a group. The significance of the data is that, when these globally impacting events occur, all or most of the eggs, all around the world, produce streams of data with either more 0s or more 1s than you would find in any other randomly selected segment of time. Further, the difference is often a significant one.
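For anyone who wants to poke at this themselves, here's a rough sketch in Python of the kind of comparison being described: take the window around an "event" and ask how often a randomly chosen window of the same length deviates from 0.5 by at least as much. The bits, the event position, and the window length here are all made up by me; this is just an illustration, not the project's actual statistic.

```python
import random

def window_mean(bits, start, length):
    """Mean of a slice of the bit stream (0.5 expected for fair bits)."""
    return sum(bits[start:start + length]) / length

random.seed(0)
bits = [random.randint(0, 1) for _ in range(200_000)]   # stand-in for one egg's output

EVENT_START, WINDOW = 100_000, 3_600    # hypothetical "event" position and window length
observed = window_mean(bits, EVENT_START, WINDOW)

# Compare the event window against 10,000 randomly chosen windows of the same length.
null = [window_mean(bits, random.randrange(len(bits) - WINDOW), WINDOW)
        for _ in range(10_000)]
as_extreme = sum(abs(m - 0.5) >= abs(observed - 0.5) for m in null)
print(f"event-window mean = {observed:.4f}, "
      f"empirical p = {as_extreme / len(null):.3f}")
```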

This is true - but it is useful for forming theories about what the data might mean, and if those theories can then predict future events, we have a valid line of inquiry. And that is exactly what we see: they make predictions when possible, and they analyze the data. Even when a prediction has not been formally registered, they continue collecting data and then analyze it to see how it meshes with the current state of the theory about what the data means. They then use this to register another prediction based on the updated theory. Much of science operates in this way.

This is not at all pragmatic; you don't just throw the data away. Any data is fair game for analysis when it is only being used to continue formulating hypotheses. The difference between your drug trial and this project is that the drug-trial folks aren't revealing their data for the entire world to see and analyze. You can use anything to formulate a hypothesis, so long as you then use the scientific method to prove it. That is what is occurring here.

Initially the hypothesis was that human thought and emotion, as a collective, had the ability to make these random number streams exhibit order. Then people started doing after-the-fact analysis of the data and realized that these deviations almost always coincided with events that were about to take place. That calls for an update to the hypothesis, which is what they have done. What must be done next to prove the hypothesis is to collect more data, which they are doing, and to have their studies validated by other scientists. Methods may include double-blind data sets and replications under many other circumstances to rule out environmental effects. Before that happens, however, we need a rock-solid hypothesis with which to make a prediction for those studies.

There is nothing wrong with Dr. Roger Nelson's method, and he is one of very few scientists to make his data completely public, along with tools to help analyze it. One of the REGs uses quantum tunneling to generate truly random events; it's possible, but simply not likely, that the data being generated is not truly random. Skepticism is of course welcome and expected; however, nothing in this thread has been articulate enough to qualify as that.

On preview:

What is your purpose in this thread?

Yeah, basically, trying to correlate the data with world events at this stage is premature. First they need to establish that they are in fact detecting something other than just random noise. This could easily be done using techniques similar to what I described. If they can do that (which I doubt) then they can start testing possible sources for the signal: sunspot activity, satellite transmissions, seismic activity, world events, whatever … .

If they could demonstrate that two independent sets of EGGs were delivering synchronized data, it would be very exciting news indeed, whatever the source of the signal.

I have to agree wholeheartedly with Humanist and Quercus. You can't mine the data after the fact to test for correlations when you've got such a huge data set. You'll find a pseudoscientist's heaven and a real statistician's hell. By the very nature of statistics and large sets of random numbers, you are almost guaranteed to find some significant correlations.
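To see just how guaranteed, you can fake it. Here's a quick Python sketch that generates pure noise, carves it into windows, and tests each one against a fair-coin null. The window size and count are arbitrary choices of mine; the point is only that roughly 5% of the windows come out "significant" at p < 0.05 even though there is nothing there by construction.

```python
import random
from statistics import NormalDist

random.seed(1)
bits = [random.randint(0, 1) for _ in range(1_000_000)]   # pure noise, no signal by construction

WINDOW = 10_000
z_crit = NormalDist().inv_cdf(0.975)                      # two-sided 5% cutoff, about 1.96

hits = total = 0
for start in range(0, len(bits) - WINDOW + 1, WINDOW):
    ones = sum(bits[start:start + WINDOW])
    z = (ones - WINDOW / 2) / (0.5 * WINDOW ** 0.5)       # z-score vs. a fair-coin null
    total += 1
    hits += abs(z) > z_crit

print(f"{hits} of {total} windows are 'significant' at p < 0.05 "
      f"({hits / total:.1%}, roughly the false-positive rate you'd expect)")
```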

Then you are free to select whatever happens to be on the news that day (there's something that could be described as a major event just about every day). This works especially well if you are allowed to select the length of the run: then you can find a significant pattern on any day you want, hence the odd selection of lengths of time for certain events, like the U.S. election.

In real science, you must define a priori what you would consider a major event, and you must allow for the fact that a certain number of your statistically significant results are actually due to random processes. Usually you would use the expectation of mechanism to help you sort out false positives from real positives, but here, the mechanism is unknown.
These folks have a hypothesis that some as yet undetermined process is providing these significant correlations. I propose an alternate hypothesis: that the process producing these significant correlations is the random number generators themselves. Until these folks can show that my hypothesis is unlikely, I can’t see why people should give this claim any credence.
To make it more clear, let’s examine this quote from the website:

It doesn't take much of an understanding of probability and statistics to know that the machine should produce unequal numbers of 'heads' and 'tails.' Only over extremely large sets of numbers should the ratio approach equality. Picking and choosing a limited period of time to examine would let you show that it was producing "dramatic shifts." These fluctuations in the ratio of heads to tails will continue as long as the machine is running.
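A quick sketch of that point, for anyone who wants to see it in numbers (plain Python, made-up coin flips): the ratio of heads settles toward 0.5, but the raw head/tail imbalance keeps wandering, typically growing like the square root of the number of flips, which is exactly what lets a hand-picked window look like a "dramatic shift."

```python
import random

random.seed(2)
heads = 0
for n in range(1, 1_000_001):
    heads += random.randint(0, 1)                      # one fair coin flip
    if n in (100, 10_000, 1_000_000):
        # Ratio converges toward 0.5; the raw excess keeps wandering.
        print(f"flips={n:>9,}  heads/flips={heads / n:.4f}  heads - flips/2 = {heads - n / 2:+.0f}")
```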

Analyze the data before the event occurs. No predictions.
Analyze the data after the event occurs. Fudge the results. Voila: "Predictions."

Make available for the one-time low price of $19.99.
Profit.

You guys seem to miss the entire point of the project. Dr. Nelson never published any findings in a journal. The project is in a very early stage of hypothesis forming and data gathering. Look at anything in its early stages and it likely looks like this. The difference is that this is "paranormal," and therefore people freak out unduly.

I will clarify (expect a long answer).

Let us suppose that I am rolling ten 100-sided dice. With me so far? Good. Each of them also has an identifying number on each face so I can tell them apart. Please assume, for the purposes of this example, that dice rolls are close enough to random that we can treat them as truly random.

Let us suppose that I roll them together 1,000 times. This is a much larger data set than the number of trials done in the one-second observation window listed on the website for many events, with many, many more possible outcomes. With me so far? Good.

Here's where it gets complicated. Let's suppose that for each roll I record not only the number on each die but many other factors: X-axis position (order from left to right); Z-axis position (which die rolls furthest away from me); the factors of each of the numbers showing; whether I've rolled more evens or odds in a roll; whether each individual die is even or odd; the number of prime factors in a roll; the prime factors of each die's number, added together; the number of faces showing a prime; the total of each roll; each die's individual score; each die's number ignoring the first digit; each die's number ignoring the second digit; each die's number modulo 2; modulo 4; modulo 25; modulo 50; and countless other ways to interpret the same data.

I am testing for the purpose of finding whether or not I get more 7s than expected (that is, a roll of 07 on any of the 100-sided dice). Let's suppose that the number of 7s is about what I'd expect from these 1,000 trials (100,000 individual die rolls, it should be noted). Let's say that I then start putting the numbers together in other ways and, after years of complex analysis, I find that I have a much, much larger number of even totals per roll (that is, the sum of all ten dice) than odd totals. Something significantly outside the standard deviation. Say I get 678 even totals and only 322 odd totals. Wait, you say, this is a statistically significant difference. I should look into why this happened.

This is an incorrect interpretation of the data, because I have given myself a practically unlimited number of ways to sort it. Given a near-infinite number of ways to sort the data, I am bound to find at least one way of looking at it that gives an unexpected result. After all, my own individual results are extremely unlikely. If I could go back in time to before the experiment and hand myself a sheet of paper detailing the exact rolls of each die in each of the 1,000 rolls, I would say that such a result was extremely unlikely: each die has 100 possible results, so with 10 dice and 1,000 rolls there are (100^10)^1000 = 100^10000 = 10^20000 possible outcomes. Yet it is that one unlikely result that occurred, and if I start searching that result for patterns, I'm likely to find a few.
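If anyone wants to see this happen rather than take my word for it, here's a rough Python version of the dice example. Every "interpretation" in the battery is honest (its chance probability is known exactly) and the dice contain no signal at all, yet a handful of interpretations still come out "significant" at the 5% level, which is just what the multiple-comparisons arithmetic says should happen. The particular battery of tests is my own invention, of course.

```python
import random

random.seed(3)
N_ROLLS, N_DICE, SIDES = 1000, 10, 100
rolls = [[random.randint(1, SIDES) for _ in range(N_DICE)] for _ in range(N_ROLLS)]

# A battery of "interpretations": (label, per-roll predicate, its exact probability by chance).
tests = [("total of all ten dice is even", lambda r: sum(r) % 2 == 0, 0.50)]
for d in range(N_DICE):
    tests += [
        (f"die {d} is even",         lambda r, d=d: r[d] % 2 == 0,  0.50),
        (f"die {d} is 50 or less",   lambda r, d=d: r[d] <= 50,     0.50),
        (f"die {d} divisible by 4",  lambda r, d=d: r[d] % 4 == 0,  0.25),
        (f"die {d} divisible by 25", lambda r, d=d: r[d] % 25 == 0, 0.04),
        (f"die {d} shows exactly 7", lambda r, d=d: r[d] == 7,      0.01),
    ]

flagged = []
for label, pred, p in tests:
    count = sum(pred(r) for r in rolls)
    z = (count - N_ROLLS * p) / (N_ROLLS * p * (1 - p)) ** 0.5   # normal approx. to the binomial
    if abs(z) > 1.96:
        flagged.append((label, count, round(z, 2)))

print(f"{len(tests)} interpretations tested, {len(flagged)} 'significant' at p < 0.05:")
for label, count, z in flagged:
    print(f"  {label}: count = {count}, z = {z}")
```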

I started with a test of one specific, unlikely occurrence and got a clear result that was within statistical guidelines. However, given a large number of ways of viewing the data, the chance of finding something that varies significantly from statistical norms becomes much larger. Given a large enough number of ways of interpreting the data, in fact, finding no interpretation that falls outside the standard deviation would itself be statistically unlikely.

This is the reason we do not get to reinterpret data after the fact. If we do find something odd in our reinterpretations, yes, it may bear another look, with another study designed to test that specific occurrence. But given data that we can interpret in a large number of ways, we should not be surprised when a particular interpretation (out of all possible interpretations) seems to show something that is statistically very unlikely. Running a new trial in which our new (tentative, unproven) finding is what we're looking for will yield more reliable results. In the case above, repeating the experiment with an eye toward odd/even totals would most likely eliminate the noted discrepancy (but, again, if we sliced this new data enough ways, we would eventually find an anomaly in this trial as well).

Are you stating that no hypothesis has been formed yet? If they aren’t even to the point of forming a hypothesis (something as simple as “random people off the street thinking can influence a REG”, or “a REG can be influenced by global events”), then why are they contacting news outlets? Is funding running out? If Nelson can’t even say for certain that something’s going on, why is he quoted as saying, “It’s Earth-shattering stuff”? Either they’ve found something truly strange and earth-shattering, or they haven’t. If they’re collecting large amounts of data and sorting it in the most favorable manner to see what hypothesis they eventually plan to test, why are they acting as though they’ve already found something, and reporting their findings as “earth-shattering”?

Sorry for the long, boring, dry post, and I hope I’ve explained myself well enough.

One other thing: if they've been collecting data since 08-07-1998 and still have found no methodology that accurately separates a predicted event from a current event from a non-event, I don't think they will find one. Admittedly, this is just an opinion, but I know of no "experiment" that has been running for over six years without so much as a guess as to what, exactly, the experiment is about.

How many serious researchers in the early stage of hypothesis and data gathering post their stuff on the internet in a manner that makes it look like confirmed fact?

You forgot step 1: Collect underwear.

I think what the majority of us are reacting to here is the leap this article takes from what has been done to what it might mean. I respect that we are not dealing with a published study here and that this is in its "premature stages". Many, many hypotheses have been in this stage over the years…and most go the way of the Dodo. But this article claims these little black boxes "predict the future" (insert dramatic music here).

So fine, blame the article. Bad reporting. So be it.

But what of the EGGs? Well, I can't repeat what's been said above any better than it's been said by Humanist, Wevets, et al. But Vern Winterbottom has a good point. Something important will happen tomorrow. And I believe him. I do. Because he has any of a million million events to choose from tomorrow, any one of which may prove revolutionary in hindsight. (Or…at least as important as the death of Bob Morris, Og bless.)

I would love nothing more than to believe that all of humankind is linked by a single unconsciousness…sort of poetic. But if we are, I don't think perturbations in random number sequences are going to be how we come to understand it. :dubious:

This may or may not be true (I don't know). Even if it is true, it tells us nothing about whether there is a genuine effect here for which we need a working hypothesis. To repeat a point I made before: the more examples of this kind of thing (junk journalism about junk science) one has seen before, the more measured one's response to this particular instance is going to be. Suppose there are lots of scientists working on it. So what? There were a darn sight more paid to work on cold fusion, but it turned out to be moonshine. If something is nonsense, it remains nonsense even if lots of scientists work on it.

When the cold fusion story broke in papers like the Daily Mail, which exhibits a consistent contempt for science and intelligence, the skeptical position was to suggest waiting until the Pons/Fleischmann experimental data had been through the peer-review and independent-replication process before making too much of a fuss about cold fusion. This was good advice. Scientists and research bodies who ignored it wasted a lot of time and money (which could have been diverted toward something more useful) on rubbish.

Likewise remote viewing, metal-bending, self-sharpening razor blades, non-rusting iron pillars… rubbish is still rubbish, even if scientists are said to be working on it.

Again, so what? Lots of things get published on the internet, including 'proofs' that evolution is a big lie, that this or that race is superior to all the others, that the British royal family are alien lizards, and so on. Publishing on the internet doesn't mean a darn thing. When the experimental data is published in a respectable scientific journal, subject to peer review and independent replication, then we might have a phenomenon to discuss. Until then, it's on no firmer a footing than invisible pink unicorns.

Excuse me if I decline to take advice about fighting ignorance from someone who cannot understand the distinction between a possessive pronoun (“your”) and a contraction (“you’re”). The word you meant to use is “Your”. I would not have pointed this out had you not started pontificating to the rest of us about how to fight ignorance.

There is no correlation between the points I made and this bizarre attempt to paraphrase or summarise what I said. I made no reference to anyone being famous. I cannot see what relevance fame has to the question at hand. I never suggested that anything must or must not be true.

No specifics? No publications?

http://www.princeton.edu/~pear/publist.html

Sorry, but these guys just don’t seem like your run-of-the-mill wackos.

And we can all enumerate famous hoaxes and bogus scientific theories until we’re blue in the face, and while it might make us feel better about ourselves as true skeptics rising above the ignorant masses, it has nothing, nothing, nothing to do with whether there’s the slightest bit of merit in any of these claims. No, it wouldn’t be the first time. Really. Wow. You don’t say.

So dial back all the stuff about predictions, black boxes telling the future, and consider the basic idea the people at PEAR are investigating: given two random event sources, one deterministic (seed-based, pseudo-random) and the other non-deterministic, both evaluated against the most stringent randomness tests available, operator intention can be shown to have a statistically significant effect on the latter source but not the former.

I’m not even close to being able to judge one way or the other; the math here is just way over my head (I had a hard time just making it through Humanist’s post). But at some point, it’s either statistically significant or it isn’t, yes? I mean, it’s not a big stretch to assume they (PEAR) are right, because they’re only pointing to a less-than-one percent deviation. You’d expect about 50-50; they’re getting something on the order of 51-49. No big deal, right? But they claim that given the number of trials, it is. So is there a point at which the results would be significant for everybody? 52%? 54%? Because if it’s 100%, we might as well keep talking about cold fusion.
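Since I asked, here's my back-of-the-envelope attempt at an answer, using nothing but the normal approximation to a fair-coin binomial (so this is a toy model, not PEAR's actual analysis). The upshot: significance isn't a matter of reaching 52% or 54%; it's a matter of how many trials back up the deviation. A 51% hit rate becomes "significant" at p < 0.05 somewhere around ten thousand trials.

```python
from math import ceil, sqrt
from statistics import NormalDist

norm = NormalDist()

def p_value(hit_rate, n_trials):
    """Two-sided p-value for a hit rate against a fair 50/50 null (normal approximation)."""
    z = (hit_rate - 0.5) / sqrt(0.25 / n_trials)
    return 2 * (1 - norm.cdf(abs(z)))

def trials_needed(hit_rate, alpha=0.05):
    """Smallest number of trials at which that hit rate reaches two-sided significance alpha."""
    z_crit = norm.inv_cdf(1 - alpha / 2)
    return ceil((z_crit * 0.5 / abs(hit_rate - 0.5)) ** 2)

for rate in (0.51, 0.52, 0.54):
    print(f"{rate:.0%} hits: significant (p < 0.05) once n >= {trials_needed(rate):,} trials")

# For example, the roughly 51-49 split mentioned above, over ten thousand trials:
print(f"p-value for 51% of 10,000 trials: {p_value(0.51, 10_000):.3f}")
```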

One more question: isn't there some principle from quantum physics that tells us quantum behavior is affected, or even partially determined, by observation? A quantum bit is in both the 1 and 0 state until we measure it, at which point it becomes one or the other. I'm no physicist, but it really doesn't seem like a big leap from that to psychic power to me, particularly if we're talking about quantum random number sources, and particularly if we're talking about such a small measure of effect (< 1%).

I mean the whole idea of quantum superposition isn’t something that’s ever been adequately explained to me; in physics class, we were told to just take it on faith and get over it. Because even though it doesn’t make sense, even though no one can really tell you how something can be two things at the same time (without starting to talk about other universes), we have mountains and mountains of data and replicable experiments that tell us it’s so…now. But I assume it wasn’t always that way. Again, not being a physicist, it seems to me that at some point, some brave soul had to suspend a big chunk of disbelief and listen to what some trickle of improbable data was saying…

I stand by my original comment that up to that point the thread was largely populated by comments that did not belong in GQ. I’d also like to point out that I type 120 words per minute, after you subtract for errors. When you can type as quickly as you can think, and each one pushes the other to go a bit faster, get back to me about grammar errors. Back OT?

Grammar flames and typing speeds aside, I’ve been reading his site more and I’m not impressed. Here are some of the things that bother me:

Their most common type of detector, the Mindsong, has an unexplained bias toward positive correlation that the other detectors don't. The other detectors XOR their data with an alternating pattern of 0s and 1s. The Mindsong XORs its random stream with a longer bit pattern that has more repeated 0s and 1s next to each other.

This bias suggests to me that there's a degree of temporal coherence to the raw bit stream in all the detectors. It manifests as a higher degree of correlation when the data is XOR'd against a pattern with longer runs of similar digits, as in the Mindsong.
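Here's a toy Python illustration of what I mean. The "coherent" stream and the run-containing mask are both made up by me (I don't know the Mindsong's actual mask), so this is only a sketch of the general effect: XORing a temporally coherent stream with a strictly alternating pattern flips its lag-1 correlation, while XORing it with a mask that has runs of repeated digits lets some of that positive correlation survive.

```python
import random

def lag1_autocorr(bits):
    """Lag-1 autocorrelation of a bit stream, computed in +/-1 form."""
    s = [2 * b - 1 for b in bits]
    return sum(a * b for a, b in zip(s, s[1:])) / (len(s) - 1)

def coherent_bits(n, p_stay=0.6, seed=4):
    """A not-quite-random stream: each bit repeats the previous one with probability p_stay."""
    rng = random.Random(seed)
    out = [rng.randint(0, 1)]
    for _ in range(n - 1):
        out.append(out[-1] if rng.random() < p_stay else 1 - out[-1])
    return out

def xor_with_mask(bits, mask):
    """XOR the stream against a repeating fixed mask."""
    return [b ^ mask[i % len(mask)] for i, b in enumerate(bits)]

raw = coherent_bits(200_000)
alternating = [0, 1]                 # the alternating pattern the other detectors use
with_runs   = [0, 0, 0, 1, 1, 1]     # a made-up mask with runs of repeated digits

print("raw stream          :", round(lag1_autocorr(raw), 3))
print("XOR alternating mask:", round(lag1_autocorr(xor_with_mask(raw, alternating)), 3))
print("XOR mask with runs  :", round(lag1_autocorr(xor_with_mask(raw, with_runs)), 3))
```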

This alone is enough to throw the entire project into question. If the random number generators aren't really random, then the whole thing falls apart.

I'm also troubled by his method of analyzing the data. To interpret the results he mostly seems to rely on graphs where he's summing the cumulative variances over time. This is a questionable methodology: the displacement of the trend line will naturally get more extreme over time no matter what happens. That's a Very Bad Thing, because it makes it seem as though bigger and bigger effects are happening after the event, when this is really just an artifact of the graphing scheme.

It would be far better if he would pick a set of fixed intervals to sum over, instead of taking a cumulative sum from some arbitrary starting point. I'd be very interested to see what these graphs would look like if he plotted, say, the sums of all the variances over discrete five-minute intervals. I suspect his results would be far less interesting.
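To illustrate with made-up, signal-free data (this is a sketch of the two plotting schemes, not of Dr. Nelson's actual statistic): the cumulative sum from an arbitrary starting point is a random walk whose excursions grow over time no matter what, while fixed five-minute block sums, each judged against their own expected spread, stay boring at about the chance rate.

```python
import random

random.seed(5)
SECONDS, BITS_PER_SECOND = 36_000, 200          # ten hours of made-up, signal-free data

# Per-second deviation from the expected count of ones (pure noise by construction).
dev = [sum(random.randint(0, 1) for _ in range(BITS_PER_SECOND)) - BITS_PER_SECOND / 2
       for _ in range(SECONDS)]

# Scheme 1: cumulative sum from an arbitrary "event" start. This is a random walk,
# so its excursions grow over time even though no effect is present.
cum = peak = 0.0
for d in dev:
    cum += d
    peak = max(peak, abs(cum))
print(f"cumulative-sum plot: final value {cum:+.0f}, largest excursion {peak:.0f}")

# Scheme 2: sums over fixed five-minute blocks, each judged against its own expected
# spread. Excursions do not accumulate; about 5% of blocks exceed 1.96 sigma by chance.
BLOCK = 300
sigma = (BLOCK * BITS_PER_SECOND / 4) ** 0.5    # std. dev. of one block sum under the null
block_sums = [sum(dev[i:i + BLOCK]) for i in range(0, SECONDS, BLOCK)]
extreme = sum(abs(s) > 1.96 * sigma for s in block_sums)
print(f"fixed blocks: {extreme} of {len(block_sums)} beyond 1.96 sigma")
```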

This would also allow the data analysis to be decoupled from arbitrarily selected world events. Because his current methodology requires a starting point to begin the sums, he’s only looking at the data at moments in time when he thinks it should jump. If it jumps, he assumes a correlation between the world event and the jump. If it doesn’t jump, he assumes that the event wasn’t significant enough to trigger a shift in the variance.

What he’s NOT doing is looking at the data to see if it still jumps even when he thinks it shouldn’t. That’s the real acid test, but one that he’s made very difficult to do with his method of cumulative sums.

Yes, it's not a paper submitted to a peer-reviewed journal, but the fact that someone like me, with a B.S. in Electrical Engineering he hasn't used in almost 20 years, can spot such simple methodological flaws in the research suggests a lack of rigor that calls the whole endeavour into question … .

Pochacco and others are making some very interesting posts to question this thing, though I confess I don’t totally understand them. (The questions about the basic interpretation seem pretty obvious, though.) This would be an excellent and very timely question for Cecil to deal with.

Journal of Scientific Exploration?
Journal of Parapsychology?
Foundations of Physics?
Alternative Therapies??

[Inigo]
I am not sure that cite means what you think it means.
[/Inigo]

Do any physics types out there even know of these journals?

You’re right, though, their publication list doesn’t look like run-of-the-mill wackos. It looks like wackos who are too well-funded and have too much time on their hands.

I posted this in the GD thread, but this one seems a bit more active.

An evening with Dean Radin.

Shows some major post hoc work at large.

“Eggs” generating random 1’s and 0’s ad infinitum.

Until and unless a rigorous double-blind study is conducted and confirms the results with predictions made before the fact, not confirmations afterward, it is an exercise in futility.

"Much ado about nothing (1’s & 0’s) reminds me of:

“”… I was sent for to London, to be ready to explain to the Queen why Otto van Guericke [of ‘Madeburgh hemispheres’ fame] devoted himself to the discovery of nothing, and to show her the two hemispheres in which he kept it, and the picture of the 16 horses who could not separate the hemispheres, and how after 200 years W. Crookes had come much nearer to nothing and had sealed it up in a glass globe for public inspection. Her majesty however let us off very easily and did not make much ado about nothing, as she has much heavy work cut out for her all the rest of the day…"

One is reminded of a remark that King Charles II had made about two centuries earlier:
"These gentlemen spend their days debating nothing." His majesty was complaining of the fact that Robert Boyle, Robert Hooke and others were spending much time working with the vacuum pump that Hooke had invented, and seemed to be wasting their time on nothing. Queen Victoria had no such concerns; she was fascinated by scientific "toys" such as the radiometer and the kaleidoscope, which Sir David Brewster (1781-1868) had invented in 1816.
Source: The Genius of James Clerk Maxwell

There is a vast difference between real and pseudo science.

You know, just this alone, if replicable, is enough to completely shake the foundations of physics and get the Nobel prize, plus a million dollars from James Randi. If this is true, I wonder why, in the ensuing thirty years, nobody has published these results, or gone out and gotten the million dollars to continue their research.

This sounds like a theory worthy of looking into. It certainly seems to correlate with the spikes.