After reading this article I must say it’s very interesting. I don’t know whether I buy into it or not, but I am curious what my dear Dopers think about it.
Well, there are two points to make. First, the ‘unequal’ number of heads or tails is actually expected by probability theory. The overall proportion of heads and tails will even out to fifty-fifty over time, but the odds never shift along the way: after ten 1s in a row, the next number is no more likely to be a zero.
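To illustrate that no-memory point, here’s a throwaway Python script (my own toy, nothing from the study): flip a simulated fair coin a couple of million times and compare the overall fraction of 1s with the fraction of 1s that come immediately after a run of ten 1s.

import random

# Generate two million fair coin flips (1 = heads, 0 = tails).
flips = [random.randint(0, 1) for _ in range(2_000_000)]

# Collect every flip that immediately follows a run of at least ten 1s.
after_ten_ones = []
run = 0
for bit in flips:
    if run >= 10:
        after_ten_ones.append(bit)
    run = run + 1 if bit == 1 else 0

print("fraction of 1s overall:      ", sum(flips) / len(flips))
print("fraction of 1s after ten 1s: ", sum(after_ten_ones) / len(after_ten_ones))

Both numbers come out at roughly 0.5; the streak tells you nothing about the next flip.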
Secondly, if they’re doing it with proper controls, more power to 'em. Got to replicate it a few times, though, before we’re sure there’s anything there.
The studies described in your link sound like they have a great deal of merit.
Incidentally, I have observed that my computers tend to freeze up (and prior to Win XP, give me the beesod) far more frequently around the times when I am pissed off. As a matter of fact, I have noticed that they anticipate my anger by starting to malfunction as much as several minutes before I start to utter uncharacteristically vulgar expressions. Therefore, I must be able to influence my computers subconsciously.
I got the link from Slashdot, and the discussion there started out as surprise that it wasn’t a joke article; people did check up on the professors’ backgrounds. But one thing they did find was that the source was The Daily Mail (UK), which is apparently quick on the uptake of extraordinary scientific finds, so that cast a shadow of doubt on it. I think it’s very interesting.
An interesting story: I’ve always been a big believer in naming your computer. I did a lot of computer help my freshman year of college and I told everyone to name their computer. A lot of them thought I was nuts. I wasn’t doing it to encourage positive thoughts toward the computer, more to encourage them to be reasonable with it. When they’re running twenty things at once and it locks up, it isn’t the computer’s fault that it finally choked under the workload.
But perhaps naming and thinking positive is even better than expected…
That’s the funniest thing I’ve read all day.
Though my 'puter never seems to break. It’s only when I attempt to install something that it sometimes gets belligerent. So in this example, an EGG would go screwy when I decided “I’m going to get a webcam today.”
Do you think the soul of a computer lies in its CPU or its hard drive?
The heart or the sum of its knowledge, so to speak. A valid question, I think, if it were sentient.
While Roger Nelson is a real guy researching some odd things, sometimes he lets some folks take his data and run a little too far with it. Look at this article, which seems to be based on the GCP, called “An Evening with Dean Radin.”
Seems there is a lot of ‘post hoc’ analysis involved in this stuff.
I have had the same computer since 1985. I’ve changed every part on it since then. I’ve moved data from hard drive to hard drive, I’ve changed motherboards, but kept the case, power supply, video card, modem, CPU, floppy, and other drives. It’s been struck by lightning once (hard drives survived), and it’s been upgraded from DOS 3.1 to WinXP Pro.
Still the same computer. So you tell me.
As far as where the data lies, that quote does make me wonder a bit more. But, well, we’ll have to see how it goes. It’s still an experiment in progress. It’ll either be something, be nothing, or be a Puthoff-Targ.
Hold on a second, let me see if I’m reading this right.
If the machine shows an odd-ish pattern (we are left guessing as to what that is. A high number of ones? A high number of zeroes?) while an event is happening, it’s “recording” that event. If the machine shows an odd pattern while an event is not happening, it’s “predicting” an event.
If people’s electrical skin resistance changes with a change in mood (cite, anyone?), and they’re told that they’re about to see some disturbing images, wouldn’t they be preparing themselves for that? Or, alternatively, if their change in resistance happens at a different time than their change in mood, wouldn’t that suggest that the two are, in fact, unrelated?
Now, if they can prove that the thoughts of some guy off the street can influence a machine to increase the proportion of 1s to 0s, then they may have a Nobel prize on their hands. However, given the other statements listed in the article, which strongly suggest that they are reinterpreting results after major events happen, or interpreting them in light of ongoing events, I’d suggest that there’s probably nothing going on here. What’s going on is probably more akin to this:
“I asked a guy at 10:55 to try to influence it to print out more ones. At 10:55, the machine was clearly printing out more ones, so it’s working.”
“I asked a guy at 1:02 to try to influence it to print out more ones. At 12:59, it printed out more ones, so it was clearly responding in advance to his thoughts.”
“I asked a guy at 2:34 to try to influence it to print out more ones. At 2:36, it printed out more ones, so its response was clearly slowed by the ripple effect of the Prime Minister’s speech.”
Looking at the hard data on their website, I notice that their “resolution” seems to vary wildly from event to event. I notice many, many recorded intervals of “1 sec”. According to their introductory material, one of their devices does 12 repetitions per second, another does 65.
“Most specimens specify contiguous data blocks.” A contiguous data block of 1 second is not very many trials at all. Let’s assume a 4-minute speech by a president (not unreasonable, since several “accepted” events (what would a non-accepted event be, I wonder?) are in fact speeches by certain famous people). A 4-minute speech will have 240 one-second intervals, each with 12 (or 65, depending on which machine measured it) trials.
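To put rough numbers on those block sizes, here’s a quick Python sketch. The cutoff for what counts as an “off” second is my own invention (the site doesn’t publish one); it’s only there to show the scale of the problem.

from math import comb

TRIALS_PER_SEC = 12    # the slower device; the 65-trial one tells the same story
BLOCKS = 240           # a 4-minute speech, sliced into 1-second blocks

def p_block_off(k):
    # Probability that a single 12-trial block shows k or more ones,
    # or (by symmetry) k or more zeroes.
    one_tail = sum(comb(TRIALS_PER_SEC, j) for j in range(k, TRIALS_PER_SEC + 1)) / 2**TRIALS_PER_SEC
    return 2 * one_tail

for k in (10, 11):
    p = p_block_off(k)
    p_any = 1 - (1 - p) ** BLOCKS
    print(f"cutoff {k}+ of 12: per-block p = {p:.4f}, "
          f"chance of at least one such block in 240 = {p_any:.3f}")

Even with a fairly strict cutoff, the odds of at least one “weird” second turning up somewhere in a 4-minute speech range from likely to practically certain, by chance alone.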
Out of 240 intervals, it’s not unreasonable to expect that at least one of them will be off slightly, just by chance. Looking at the part of their website where they talk about optimum block (observation) times, it seems like they are choosing their block size based on what makes the event seem significant. Add in the fact that even when nothing major is going on globally at the moment the machine registers an anomaly, it can still be interpreted as predicting an event as much as 4 hours in advance, and you’ve got a recipe for finding a correlation for everything the machine spits out. Here’s a pertinent quote:
So, the focus of their effort is making it easier to identify these events. They’re poring over the data, trying to find correlations with real events. While the original data is collected blind, they are removing the blinders to draw conclusions. This is very unscientific. If I run a trial of a new drug that is supposed to increase the rate of cancer remission, and I find that it does not do this, I should not then pore over the data to see if the drug does anything else. With the many other ailments that people with cancer sometimes suffer from, I might well find a correlation with one of them (say, vomiting) that has no relation to what the drug actually does. In scientific study, all correlations found after the blinders are lifted are immediately suspect.
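To make that drug analogy concrete, here’s a toy Python simulation. Everything in it is made-up noise (no real drug, no real patients, outcome rates pulled out of thin air); the point is only how easily scanning many outcomes after the fact turns up a “significant” gap.

import random

N_PATIENTS = 200
N_OUTCOMES = 20     # vomiting, headaches, fatigue, ... none related to the drug

# Both groups are drawn from the exact same distribution,
# so any difference between them is pure chance.
drug    = [[random.random() < 0.30 for _ in range(N_OUTCOMES)] for _ in range(N_PATIENTS)]
placebo = [[random.random() < 0.30 for _ in range(N_OUTCOMES)] for _ in range(N_PATIENTS)]

for outcome in range(N_OUTCOMES):
    drug_rate    = sum(p[outcome] for p in drug) / N_PATIENTS
    placebo_rate = sum(p[outcome] for p in placebo) / N_PATIENTS
    if abs(drug_rate - placebo_rate) > 0.09:   # roughly a two-standard-error gap
        print(f"outcome {outcome}: drug {drug_rate:.2f} vs placebo {placebo_rate:.2f} -- looks 'significant'")

With 20 outcomes to scan, chance alone gives you roughly a 60% shot at finding at least one gap that large in a given run.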
In this case, the blinder is the fact that the future is unknown. If they can look at their machines’ output and say “Oh, look, an event of epic proportions will happen in 4 hours,” I will stand up and take notice. But if all they’re doing is reinterpreting these curves of 1s and 0s after the fact, well, anyone can do that.
And they’re looking for “any non-random structure”. They can claim any pattern they choose as non-random. To put it another way, they can look at any major historical event, find a pattern that happened any amount of time earlier that they choose, declare it a “non-random” pattern, and then go looking for the same pattern around other historical events.
Nowhere on their site do I see a list of how many “meanshifts” (the vast majority of their “non-random patterns” are listed as these) happen on days they consider non-historical. (I should note that the Republican National Convention and the Clinton impeachment acquittal are listed amongst their “historical” dates, so I would be hard pressed to find anything that didn’t qualify.) Nor do I see how large a meanshift has to be before they list it. If that information were published, a bit of basic probability would show how likely it is for at least 1 second out of the 4 hours (14,400 seconds) before the 9/11 attacks to be out of line. I suspect it’s pretty likely.
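For what it’s worth, here’s that arithmetic in Python for a few guessed-at thresholds (the per-second probabilities are mine, precisely because the site doesn’t say how big a meanshift has to be to count):

# Chance of at least one "out of line" second in the 4-hour window,
# for a few assumed per-second fluke probabilities.
WINDOW_SECONDS = 4 * 60 * 60   # 14,400 seconds

for p_per_second in (1e-3, 1e-4, 1e-5):
    p_at_least_one = 1 - (1 - p_per_second) ** WINDOW_SECONDS
    print(f"per-second fluke probability {p_per_second:g}: "
          f"chance of at least one in 4 hours = {p_at_least_one:.3f}")

Even a 1-in-10,000-per-second fluke is more likely than not to show up somewhere in the window.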
This experiment is a perfect example of why scientists reject unblinding the data before a conclusion is reached.