I know, there is a small (but finite) probability that 100 monkeys pounding away on keyboards will eventually come up with a copy of “HAMLET”. So, suppose we set out to learn the future instead: use a supercomputer that will print all possible combinations of English words. We can refine it further and have the computer accept only certain words, like “nuclear”, “computer”, etc. Then we sit back and let the machine print away. It will print out the future: inventions, news, and trends! We can analyze the patents it prints out and come up with the inventions ahead of time!
With a fast enough computer, we can generate the news from the future. Of course, the machine will print out a lot of garbage, but we will have algorithms to weed the crap out!
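Something like this, as a toy sketch in Python (the word list is just a placeholder for whatever words the machine accepts):

    from itertools import product

    # Toy vocabulary; a real run would use the full accepted word list.
    vocabulary = ["nuclear", "computer", "cubs", "win", "pennant"]

    # Print every possible "sentence" of one to three words. Even this
    # toy list yields 5 + 25 + 125 = 155 strings; a real vocabulary of
    # 100,000 words gives 10^15 three-word strings alone.
    for n in (1, 2, 3):
        for words in product(vocabulary, repeat=n):
            print(" ".join(words))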
Could this work?
Could an infinitely long string of random words contain 100% accurate predictions about the future?
Yes. But how would you know what they are? If it prints out CUBS WIN PENNANT TWO THOUSAND NINE, does that mean it’s time to go to Vegas? If you had an algorithm that could tell you that it was an accurate prediction, then why do you need the list of words to begin with?
No.
Really, you don’t need it to print out anything at all. Just have your computer run through limitless numbers of simulations, each with different initial conditions and biases. Then, just have it pick the ones you like.
Of course, deciding which ones to pick is the hard part.
:smack:
Even assuming that you could make this scheme workable (in terms of parsing any intelligible descriptions of future inventions or events from it) and you can create a sufficient number of permutations to cover a wide range of credible possibilities, how would this be in any way a prediction of future events? I can walk into a casino and sit down at a poker game with a probability table taped inside my sleeve, but it’s not going to let me predict how the hand is going to play out, only the odds that any particular combination of cards is likely to appear.
In addition, any particular invention or event is going to be the result of previous concepts or actions, without which it has no context in which to be evaluated. For instance, who in 1800 would have predicted that the United States would be Britain’s strongest ally in fighting a war against Germany and Italy for the liberation of France? The whole notion would be utterly absurd to someone of that era, and prediction thereof would be accorded no weight unless you could also demonstrate the causal chain that would lead up to such an unlikely alliance. Similarly, the invention of an incandescent lightbulb would be of no use to a pre-industrial society with no concept of the generation and distribution of electricity; it would be nothing more than an artistic curiosity.
To place this in our context, let’s say, for example, that you are a science fiction writer and you suddenly come up with a concept for transporting people across long distances by dicing them up into tiny molecular fragments, converting these into some kind of electrical signal, beaming it to a receiver thousands of kilometers away, and reconstituting the passenger at the far end. It sounds entirely reasonable, assuming you already have some essentially magical technology for scanning the position and orientation of several billion billion atoms, pulling them apart, communicating this colossal amount of information with error correction in any reasonable amount of time, and (here’s the tricky bit) putting it all back together without getting one’s foot literally stuck in one’s mouth. Otherwise, it becomes a literary conceit and/or plot cheat that is technically inconceivable even as a general concept.
So for your scheme to work, you’d have to be able to identify and place in logical order the development of the prerequisite technologies, increasing the complexity of coming up with any useful information by several more exponents. An attempt to arbitrage the future like this would produce a mass of information in which anything useful, even if you managed to come up with it, would be hidden in a forest of utter gibberish. Consider solving a jigsaw puzzle in which you not only have no idea what the completed puzzle looks like, but the pieces are also mixed in with random pieces from other puzzles. Even if you found an individual piece of the puzzle you were actually trying to solve, you wouldn’t have any context to know that it was a valid piece.
You could also, as with Eddington’s infinite monkey theorem, raise a thermodynamic objection based on the probability of the operations needed to come up with a credible prediction versus the amount of time or the entropy increase produced by each operation, contrasted with universal expansion. Even if an event has a finite probability, if it is so small as to be effectively infinitesimal on any reasonable time scale, it can be treated as negligible. Purely statistical (i.e. Copenhagen) interpretations of quantum mechanics tell us that the Sun could just totally disappear one day and blink over to the other side of the galaxy, leaving us in the cold and dark. The likelihood of this, however, is insignificant on any credible time frame for the existence of the Universe.
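To put a rough number on “finite but negligible”, here is a back-of-the-envelope sketch in Python; the key count and text length are ballpark assumptions, not exact figures:

    import math

    # Rough odds of random keystrokes producing one specific text.
    # Assumptions: a 30-key typewriter and a text of ~130,000 characters
    # (ballpark for Hamlet; the exact figures don't change the conclusion).
    keys, length = 30, 130_000
    log10_p = -length * math.log10(keys)
    print(f"P is about 10^{log10_p:,.0f}")  # roughly 10^-192,000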
So in short, it won’t work, and for the same reason that insurance companies occasionally lose money on natural disasters or other unexpected claim increases despite application of the black art of actuarial science.
Stranger
Something a bit like this has been explored in works of fiction - the notion of a library containing volumes comprising all possible arrangements of letters in a string of length N - by definition, containing every written work (past and future), every possible variation on every written work, the details of every invention (past and future), the answer to every question that will ever be asked, etc.
For works longer than N characters in length, the full text will be found by concatenating two or more volumes.
The only trouble is that all possible combinations of letters in a string of length N represent an impossibly vast number of volumes for any significant N, so not only would it be impossible to construct such a library, it would also be impossible to access or index it.
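A quick sketch in Python shows the scale, assuming a bare 26-letter alphabet with no spaces or punctuation:

    # Number of distinct volumes of length N over a 26-letter alphabet.
    def library_size(n, alphabet=26):
        return alphabet ** n

    print(library_size(10))              # 141167095653376 ten-letter "volumes"
    print(len(str(library_size(3000))))  # 4245: even one-page volumes number ~10^4245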
Unless you shorten the volumes: the most compact form of the library is indeed quite portable and accessible. It consists of only twenty-six volumes, collectively containing every possible one-letter combination of the Roman alphabet.
All you have to do to retrieve any written work - any of them - is just to concatenate the volumes in the right order - that is, write it yourself.
I’m pretty sure there’s going to be some information theory here that insists you cannot possibly get more out than you put in.
Mathematically, getting the correct prediction of the future from this corpus would be similar to decrypting a message that was encrypted with a one-time pad. All the information is in there, you just need the correct key to get it - and the ciphertext itself gives you no information about what the plaintext was. Likewise, the correct future prediction is in the corpus - you just have to figure out where it is.
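A toy demonstration in Python (the messages here are made up, borrowing the pennant example from upthread): with the right “key”, the very same ciphertext decrypts to any message of the same length, so the ciphertext alone tells you nothing.

    import os

    def xor_bytes(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    plaintext = b"CUBS WIN PENNANT"
    key = os.urandom(len(plaintext))        # one-time pad
    ciphertext = xor_bytes(plaintext, key)

    # The right key recovers the real message...
    print(xor_bytes(ciphertext, key))       # b'CUBS WIN PENNANT'

    # ...but a different "key" recovers any message of the same length,
    # so the ciphertext carries no information about the plaintext.
    fake = b"CUBS LOSE IT ALL"
    fake_key = xor_bytes(ciphertext, fake)
    print(xor_bytes(ciphertext, fake_key))  # b'CUBS LOSE IT ALL'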
Sure, but I’m not clear on what that has to do with your library example.
Crescend beat me to the one-time-pad comparison.
The problem is that the computer will spit out every single possibility there is. Imagine a smaller-scale experiment. Assume that a dictionary contains every English word and use it to predict your day tomorrow. There are a lot of words that may apply, such as “rich”, “happy”, “famous”, “laid”, “dead”, but you have no way of knowing which one will be correct. Once the day is over, you can look through the dictionary and find a lot of accurate words. I bet at the end of the day, the one that is most accurate is “routine”.
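A sketch of that experiment in Python, with a made-up miniature word list standing in for the dictionary:

    # A miniature stand-in for the dictionary.
    dictionary = {"rich", "happy", "famous", "laid", "dead", "routine", "tired"}

    # Before the day: every word is "predicted", because the corpus
    # contains everything, so nothing is actually predicted.
    predictions = dictionary

    # After the day (filled in with hindsight).
    what_happened = {"routine", "tired"}

    # Plenty of "accurate predictions" fall out, but only after the fact.
    print(predictions & what_happened)   # {'routine', 'tired'}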
Admit it, ralph124c: “You” are a collective of a half-dozen trained simians Justhink’s ex-Ph.D. advisor set to pound out ideas for GQ, and this is one of those times when a couple of the monkeys got into the banana schnapps and posted before anyone could stop them.
The funny thing about monkeys and typewriters is they don’t tend to use a lot of letters. They just hit a few letters over and over again. So, you may have a rather bleak future.
I think your idea was anticipated 67 years ago by Borges.
No.
Your random ordering of words will contain somewhere in its infinite sequence the prediction CUBS WIN PENNANT TWO THOUSAND NINE. However, it will also include every variation: CUBS LOSE PENNANT TWO THOUSAND NINE, and CUBS WIN PENNANT TWO THOUSAND TEN, and so on, for every team, year, and outcome.
No conceivable algorithm could possibly tell you which of these predictions is accurate until it’s already happened.
What I meant (I think) is that shorter volumes make for a more compact and indexable library, but require more work in deliberately concatenating them to make sensible information appear, whereas longer volumes will natively contain that sensible information, but require more work in searching for it - so either way, you can’t win.
It was the Borges story I was recalling, BTW… and it’s interesting that Borges also notes that any such library must also contain, within itself, a perfect index of itself. Except that it also contains every possible imperfect index too…
The information theory comes in when you want to build the text you’re looking for. If you’re looking for a book that contains the text, “See Dick run”, you need to know to look up ‘S’, then ‘e’, then ‘e’ again, etc, and concatenate all these together. In other words, using this scheme, the amount of information you need in order to use it is exactly the same as the amount of information you’re retrieving.
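A sketch in Python makes the point visible; the lookup key you supply is, character for character, the text you get back:

    import string

    # The entire twenty-six-volume library: one one-letter volume per
    # letter (plus a space "volume" so we can spell sentences).
    library = {ch: ch for ch in string.ascii_letters + " "}

    def retrieve(index):
        # The "index" is the sequence of volume names to concatenate,
        # which is exactly the text being retrieved, character by character.
        return "".join(library[volume] for volume in index)

    print(retrieve("See Dick run"))   # the lookup key IS the message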
Am I wrong in thinking that the OP actually meant an idea generator instead of trying to predict random events in the future? I could possibly see some merit in having a computer generate combinations of technologies that we haven’t thought of ourselves, then pick the ones that look interesting and investigate if they will work out.
I wondered that. The notion of using randomness to provoke ideas is a good one - I think it’s been done before, and perhaps to good effect (didn’t Salvador Dali use something like this process?).
It appears in several of Edward de Bono’s ‘thinking tools’ too. I can’t remember which specific ones, but there’s one where you describe a process or object in detailed, essential steps, then replace one of the essential steps with something random, such as a snippet from a magazine or a word from the dictionary. This breaks the process, and your job is to fix it (but without simply reverting the change). So it’s not really the randomness that’s doing the work; it’s your creativity, provoked by the randomness…
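A rough sketch of that provocation step in Python (the process steps and word list are placeholders):

    import random

    # Placeholder description of a process, broken into essential steps.
    steps = [
        "boil water",
        "add tea leaves",
        "steep for four minutes",
        "pour into cup",
    ]

    # Placeholder stand-in for a word plucked from a magazine or dictionary.
    random_words = ["volcano", "umbrella", "whisper", "magnet"]

    # Replace one essential step with a random provocation; the human's
    # job is to repair the broken process without just reverting it.
    i = random.randrange(len(steps))
    steps[i] = "PROVOCATION: " + random.choice(random_words)
    print("\n".join(steps))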
This thread would not be complete without the web economy bullshit generator. An unparalleled e-business model that some time ago reinvented real-time paradigms and successfully leveraged user-centric e-business.
The current technological problem with a random idea generator is that the manpower you would need to parse any sensible output would be better spent generating random ideas with pencil and paper.