What are your favorite thought experiments?

What are your favorite thought experiments and why?

My example is the experience machine by Robert Nozick. It serves to show whether the person being asked values hedonism over anything else, whether they value what’s real over what’s not, and to what degree they are satisfied with their current life. I personally would choose to enter the machine, though my answer would change depending on what my life is like at the time.

I don’t know if this counts, but someone here introduced me to the Veil of Ignorance and I haven’t been able to stop thinking about it. If I had no control over the state I was born into, what kind of society would I want to be born into?

So the first thing that comes to mind for me is the set of “thought experiments” involved in game theory, the most famous being the Prisoner’s Dilemma, which accurately describes a bunch of situations in human society and non-human biology (though in most cases when you encounter it IRL, the result is not the “rational” outcome predicted by game theory). I’m not 100% sure it counts as a thought experiment, since you can carry it out IRL, and people do (the former president of the US is finding himself on the wrong end of the Prisoner’s Dilemma).
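The structure of the dilemma is easy to sketch in code. The payoff numbers below are the standard textbook ones (years in prison), picked purely for illustration:

```python
# Classic Prisoner's Dilemma payoffs (years in prison, lower is better).
# Each entry maps (my move, opponent's move) -> (my sentence, their sentence).
PAYOFFS = {
    ("cooperate", "cooperate"): (1, 1),
    ("cooperate", "defect"):    (3, 0),
    ("defect",    "cooperate"): (0, 3),
    ("defect",    "defect"):    (2, 2),
}

def best_response(opponent_move):
    """Return the move that minimizes my sentence against a fixed opponent move."""
    return min(["cooperate", "defect"],
               key=lambda my: PAYOFFS[(my, opponent_move)][0])

# Whatever the other prisoner does, defecting is individually better...
print(best_response("cooperate"))  # defect
print(best_response("defect"))     # defect
# ...yet mutual defection (2, 2) leaves both worse off than mutual cooperation (1, 1).
```

That gap between the individually "rational" move and the better joint outcome is the whole dilemma.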

Similar (and less well known) is the dollar auction, which IMO is more accurate in predicting behavior in situations like WW1, where the belligerents continued to bear costs far higher than any benefit they could receive by winning the conflict.
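A rough sketch of why the escalation never stops, assuming a $1.00 prize, 5-cent raises, and myopic bidders who raise whenever raising beats conceding (all of those numbers are just illustrative):

```python
PRIZE, STEP = 100, 5  # cents: a $1.00 prize and a 5-cent minimum raise

def escalate(max_rounds=100):
    """Two myopic bidders take turns; each raises while raising beats conceding."""
    bids = [0, 0]            # each player's current committed bid
    turn = 0
    for _ in range(max_rounds):
        me, other = bids[turn], bids[1 - turn]
        payoff_if_concede = -me                        # walk away, forfeit my bid
        payoff_if_raise_and_win = PRIZE - (other + STEP)
        if payoff_if_raise_and_win <= payoff_if_concede:
            break                                      # conceding is no worse: stop
        bids[turn] = other + STEP
        turn = 1 - turn
    return bids

# Because the two bids are always within one raise of each other, the break
# condition is never met, and both players blow far past the $1.00 prize:
print(escalate())  # [495, 500] after 100 rounds
```

Once you've sunk a bid, dropping out means losing it all, so one more small raise always looks cheaper than conceding. Same trap as the trenches.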

Also a couple of more classical (or rather non-classical :wink: ) examples:

Schrödinger’s Kittens: similar to Schrödinger’s cat, but instead of just one cat you have two kittens. Same non-PETA-approved setup, but you use two entangled particles and arrange things so one kitten is poisoned if the spin is in one direction, and spared if it’s in the other. You send the (presumably very long-lived) kittens to opposite ends of the galaxy, then open one box. As with the original Schrödinger’s cat, you have now collapsed the wave function for that kitten, but you also instantly collapse the wave function of the other kitten, even though it’s now many light-years away.

The Train/Tunnel Paradox: You drive a 1000 m long train into an 800 m long tunnel. As the train is travelling at 0.8c, to a stationary observer the train shrinks due to relativity, so the entire train fits inside the tunnel. Which is fine, except that the stationary observer now presses a button to close the gates at either end of the tunnel once the train is entirely inside, and presses it again to open them before the train leaves. What about the train’s frame of reference, where the tunnel is the thing that contracts and the train is never entirely inside it? It turns out that in that frame the gates at the entrance and exit still close and open, but not at the same time. As the train sees it, the gate at the exit closes and reopens before the front of the train reaches it, and the gate at the entrance closes and reopens after the back of the train has entered the tunnel. I find this super trippy.
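The numbers check out if you run them through the standard Lorentz formulas — a quick sketch, using nothing beyond the 1000 m / 800 m / 0.8c setup above:

```python
import math

# Length contraction and relativity of simultaneity for the train/tunnel setup.
c = 299_792_458.0            # speed of light, m/s
v = 0.8 * c
gamma = 1 / math.sqrt(1 - (v / c) ** 2)

train_rest, tunnel_rest = 1000.0, 800.0
print(gamma)                  # ~1.667
print(train_rest / gamma)     # ~600 m: in the tunnel frame the train fits
print(tunnel_rest / gamma)    # ~480 m: in the train frame the tunnel is too short

# Simultaneity: the two gate closings happen at t = 0 in the tunnel frame,
# separated by dx = 800 m. Transforming to the train frame (t' = gamma*(t - v*x/c^2)):
dx = tunnel_rest
dt_train = gamma * (0 - v * dx / c**2)
print(dt_train)               # negative, ~ -3.6 microseconds: the exit gate's event comes first
```

So both observers agree the gates closed and opened; they just disagree about whether it happened simultaneously, which is exactly what rescues the "paradox".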

While I was a student at Defense Language Institute, I was introduced to this one:

A man murders a wino on Skid Row and then cuts off the wino’s arms. The man mails each arm to a different address. The man then goes to the train station. Another man sees him, becomes enraged, and kills the first man. Why did the last man become enraged and kill the first man?

IIRC, the person posing the question can only answer Yes/No questions. In one variant the poser (heh) can answer “Yes”, “No”, “Irrelevant”, or a couple of other choices that don’t really detract from the exercise.

This might not quite be what you mean by thought experiment, but I often think of the concept of the Library of Babel, which contains books with every possible sequence of letters. So within it are sets of books that describe with 100% accuracy, for instance, every day of your life from your birth to your death. And also sets of books that are 100% accurate except for getting what you had for breakfast on one day wrong (with versions covering every breakfast of your life).

I think of it more lately in terms of generative AI, as a “photo album” of Babel. A library of every possible combination of 1024×1024 pixels with 16 million colors per pixel is beyond vast, and contains accurate images of every moment in the history of the universe, plus images that are accurate except that everyone is wearing funny hats.

Those are what we’re calling “lateral thinking puzzles”, over in Thread Games. I don’t think we’ve done that one, so if you remember the solution, go ahead and pose it over there.

Happy to do so. Can you give me a link so I head to the right place?

The two envelope paradox.

You have two sealed envelopes. Each envelope contains an amount of money. One envelope contains exactly twice as much money as the other. You cannot tell by examining the unopened envelopes how much money is in them. You are allowed to pick one envelope, open it, and keep the money in it. Your goal is to obtain as much money as possible.

So you pick one of the envelopes. But before you open it, you’re offered an opportunity to swap envelopes. Should you swap?

Math says you should swap. Let’s say the amount of money in the envelope you’re holding is X. That means the other envelope contains either 0.5X or 2X, which makes the expected value of the other envelope 0.5(0.5X) + 0.5(2X) = 1.25X.

Plug in a specific amount. Let’s say the first envelope you chose holds one thousand dollars. That means the other envelope holds either five hundred dollars or two thousand dollars. If you swap, you have a fifty percent chance of gaining one thousand dollars and a fifty percent chance of losing five hundred dollars. Your potential gain is greater than your potential loss, so you should swap.

But when you swap and pick up the other envelope, you’re offered another chance to swap back. And now the same math which told you to swap the first time says you should swap a second time. No matter which envelope you’re holding in your hands, the other envelope has an expected value that is twenty-five percent greater.
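You can actually watch the 1.25X argument fail in a simulation: once the pair of amounts is fixed, “always swap” does exactly as well as “always keep”, because X isn’t a single fixed quantity in the expected-value argument. A quick sketch (the $1000 amount and the trial count are arbitrary):

```python
import random

# Two-envelope simulation: fix a smaller amount, put it and double it in two
# envelopes, pick one at random, and compare "always keep" vs "always swap".
def play(n_trials=100_000, small=1000, seed=42):
    rng = random.Random(seed)
    keep_total = swap_total = 0
    for _ in range(n_trials):
        envelopes = [small, 2 * small]
        rng.shuffle(envelopes)
        keep_total += envelopes[0]   # keep the envelope you picked
        swap_total += envelopes[1]   # take the other one instead
    return keep_total / n_trials, swap_total / n_trials

keep_avg, swap_avg = play()
print(round(keep_avg), round(swap_avg))  # both hover around 1500
```

Both strategies average 1.5 times the smaller amount, so the "swap forever" conclusion is an artifact of treating X as fixed while the other envelope varies.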

Found the thread and posted it.

Also known as the Two Envelopes Problem. The wiki entry on it is fascinating.

I came here to post about the Veil of Ignorance. I learned about it years and years ago and like you I still think about it very often.

Years ago on a comedy website I saw an article that talked about the old idea that a monkey at a typewriter, pounding randomly on the keys, would eventually manage to type out Hamlet. The author posited that there were probably many other things the monkey would produce before managing to achieve that feat, and then listed numerous examples:

  • Several perfect cryptograms of Hamlet.

  • A long Usenet argument over whether Boba Fett is alive, complete with spam-blocked e-mail addresses.

  • The phrase “Jesus Christ my ass is chafed” repeated for the length of two letter-sized pages.

  • The text of Hamlet, except everyone dies of food poisoning in Act II.

  • A brief but accurate write-up of the most embarrassing thing you ever did, with full names, dates, and places.

  • Hop on Pop

  • The Denny’s Kids Menu.

  • This article, including HTML mark-up.

  • A short story entitled “Babysitter’s Passion.”

  • “Iii#jd89 pp98&(*(^9 879j; FF”

  • The text of Hamlet, except that Horatio is named “Elvis.”

This sounds like a potential information hazard. Am I better off not looking this thing up?

Consider a world with a veil of ignorance in it, but you don’t know if you will be in the group that looks it up or the group that doesn’t look it up…

The world in which everyone is familiar with and applies the Veil of Ignorance is clearly better regardless of who you end up being :stuck_out_tongue:

Here’s the short version. If you want to figure out what the more moral outcome to a situation is, imagine that the task of deciding which world is better is up to ethereal beings who exist outside the universe. They know that once they pick the best world, they will be born into that world as a random member of it.

So a system like Feudalism would be rejected by this framework because even though kings get to live like… kings, most people are poor peasants with incredibly shitty lives, and these ethereal beings wouldn’t want to risk being a peasant just for a tiny chance at being a king.

Are you going to submit that to the lateral puzzles thread?

I’ve had a similar idea about computer monitors. A 4K monitor is 3840 × 2160 pixels (8,294,400 in total), and each pixel is capable of 256³ (about 16.7 million) unique colors. So a 4K display could show:

(256³)^8,294,400 unique images.

Wolfram Alpha says that in standard base-10 notation, this is a number about 60 million digits long. One year of 4K video footage at 30 frames per second is about a billion frames - a one followed by nine zeros - so the above gigantic number represents X years of footage, where X is itself a number about 60 million digits long (dividing by a billion only knocks nine digits off). To be clear, that’s not 60 million years; 60 million is a number only eight digits long. This is much, much bigger. I’m pretty sure my monitor could show every page of every 410-page book in the Library of Babel, and still have enough unused images left over to show every moment of every human being’s life from every angle. And lots of other stuff.
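For anyone who wants to check the digit counts, the arithmetic is easy to do with logarithms (just stdlib math, nothing exotic):

```python
import math

# Sanity-checking the monitor math with logarithms, since the numbers
# themselves are far too large to compute directly.
pixels = 3840 * 2160        # 8,294,400 pixels on a 4K display
colors = 256 ** 3           # 16,777,216 possible colors per pixel

# The display can show colors ** pixels distinct images; count its decimal digits:
digits = math.floor(pixels * math.log10(colors)) + 1
print(f"{digits:,} digits")              # roughly 60 million digits

# A year of 30 fps footage is about a billion frames, so expressing the
# image count as "years of footage" removes only 9 of those digits:
frames_per_year = 30 * 60 * 60 * 24 * 365
print(f"{frames_per_year:,} frames per year")
```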

I guess the question is where a logical paradox becomes a thought experiment. E.g.:

Alan Turing’s proof of the halting problem. Namely: is it possible to tell whether a Turing machine (a generic hypothetical computer, capable of computing anything any real computer can compute) will finish in finite time, or “halt”? If this is possible, you could encode that logic (whatever it is) as a Turing machine (because anything computable can be encoded as a Turing machine), so you’d have a Turing machine that could tell whether another Turing machine will halt. It would then be trivial to create a Turing machine that loops forever (i.e. never halts) if the input machine halts, and otherwise halts. You then point this machine at itself. What will happen? If it halts, it will loop forever, but that means it won’t halt, but then it won’t loop forever, but now it will halt, so it will loop forever, and so on and so on. This is a logical contradiction, and hence a proof that no machine can decide, in general, whether a Turing machine will halt.
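The diagonal trick is short enough to write out in code. This is only a sketch — the halts() stub below stands in for the hypothetical oracle, which is exactly the thing the argument proves cannot exist:

```python
# A Python rendering of the diagonal argument behind the halting problem.
def halts(program, argument):
    """Hypothetical oracle: True iff program(argument) eventually halts.
    No correct implementation can exist -- that's the point of the proof."""
    raise NotImplementedError("no such function can exist")

def troublemaker(program):
    # Do the opposite of whatever the oracle predicts about program(program):
    if halts(program, program):
        while True:      # oracle says it halts, so loop forever
            pass
    else:
        return True      # oracle says it loops, so halt immediately

# Now ask: does troublemaker(troublemaker) halt? If halts() answers yes,
# troublemaker loops forever; if it answers no, troublemaker returns at once.
# Either answer is wrong, so no correct halts() is possible.
```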

Check a couple of posts up. I already did. :slight_smile:

A cousin, a best friend, and I discussed this last night. Some think that the Universe ends and there’s just ‘nothing’ beyond it. Well, if it’s defined as nothing, and has a definition, that means it’s something.

A complete and total vacuum, if it could be made to exist with our mortal powers, would itself be something. Ergo, the ‘nothing’ is something.

Have a nice day.