What are your favorite lateral thinking puzzles?

Who cares?

Suppose I can eyeball the location of my car with certainty to within one foot. Suppose I look and estimate that it’s 100 feet from the wall.

If I estimate that it’s 100 feet from the wall, might it actually be 99? Yes.
If it was actually 99, might I estimate that it was 98 feet from the wall? Yes.
If I estimate that it’s 98 feet from the wall, might it actually be 97? Yes.
If it was actually 97, might I estimate that it was 97 feet from the wall? Yes.
If I estimate that it’s 97 feet from the wall, might it actually be 96? Yes.
If it was actually 96, might I estimate that it was 96 feet from the wall? Yes.
If I estimate that it’s 96 feet from the wall, might it actually be 95? Yes.
If it was actually 95, might I estimate that it was 95 feet from the wall? Yes.

If it was actually 3, might I estimate that it was 3 feet from the wall? Yes.
If I estimate that it’s 3 feet from the wall, might it actually be 2? Yes.
If it was actually 2, might I estimate that it was 1 foot from the wall? Yes.
If I estimate that it’s 1 foot from the wall, might it actually be crashed into it? Yes.

All these statements are true. Does this mean that if I look and estimate my car to be 100 feet from the wall, I should put in an insurance claim right away?

Except as I’ve demonstrated, you’re wrong; they all know that everyone knows that everyone knows…etc…that there is at least one blue-eyed person. Again:

Each islander A looks out and sees N blues. As his first calculation, because he sees N blue islanders, and they all see each other, he knows for a solid fact that each blue (and indeed, everyone in the tribe) sees at least N-1 blues.

As his second calculation, A deduces that everyone in the tribe will have performed the same first calculation as he did, and thus A knows that everyone knows that everyone sees at least N-2 blues.

If everyone knows that everyone sees at least N-2 blues, then everyone knows that everyone knows that everyone knows that … that everyone knows that everyone sees at least N-2 blues - without the visitor telling them anything, because the fact that everyone sees at least N-2 blues is common knowledge.

A can’t deduce that everyone in the tribe will be able to perform all the same calculations as he does, because A doesn’t know that everyone sees the same thing he sees. If A sees N blues, then A can be sure that they each see at least N - 1 blues, but he can’t be sure that they each see N blues because he doesn’t know whether or not he himself is blue.
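
If it helps to see that bookkeeping run out, here’s a tiny sketch (the function name and encoding are mine, nothing official): it just tracks the lower bound an islander can still guarantee after each level of “everyone knows that…”, and the bound drops by exactly one per level.

```python
def min_blues_everyone_must_see(blues_i_see, nesting_depth):
    """Lower bound on how many blues '(everyone knows that) * nesting_depth
    ... everyone sees', derivable by someone who personally sees
    `blues_i_see` blues.  Each level of nesting reasons about a person who
    might themselves be one of the blues being counted on, so the bound
    drops by exactly one per level."""
    if nesting_depth == 0:
        return blues_i_see
    # The person being reasoned about sees every blue I'm counting on,
    # except possibly themselves: at least blues_i_see - 1 blues.
    return max(min_blues_everyone_must_see(blues_i_see - 1, nesting_depth - 1), 0)

# An islander who sees 3 other blues (a four-blue island, as in the
# A/B/C/D example elsewhere in the thread):
for depth in (1, 2, 3):
    print(depth, min_blues_everyone_must_see(3, depth))
# Prints 2, 1, 0 -- he can still vouch for two levels of "everyone knows"
# on top of his own knowledge (three "knows" in all), but not for a fourth.
```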

Please answer the specific questions in post #153.

Why not? Multiple people on this board have shown you why it is. Have you ever taken a mathematical proof class and learned proof by induction? As has been said before, everyone knows there are blue eyes. The thing is, not everyone knows that everyone knows there are blue eyes.

Since enough of us have explained in various ways why it works, why don’t you explain why it doesn’t?

Well, it is true that everyone knows that everyone knows that there are blue eyes, so long as there are at least 3 blue-eyed people. But not everyone knows that everyone knows that everyone knows that everyone knows that … that everyone knows that there are blue eyes, where the number of “everyone knows that” is equal to the number of blue-eyed people. As you note, this has been explained in numerous ways; there’s no point continuing to try to hammer it home. People either will respond to the actual argument or continue ignoring it.

A can pretend he’s B - with the added knowledge that everyone does see the same thing that B sees. As B-with-the-added-knowledge-that-everyone-does-see-the-same-thing-that-B-sees, he can assert confidently that nobody sees no blues, if (as B) he sees any blues at all.

Case 3 explicitly includes the possibility that one is in case 2. A person in case 4 can rule out that they’re in case 2, thus they can be certain they’re not in case 3 (which explicitly includes that possibility).

A doesn’t know that B sees the same thing that A does. This is because what B sees is of course the result of taking what A sees and swapping A in for B; but A doesn’t know that A and B have the same eye color.

You’re not actually answering the questions I asked you. Please answer each individual question yes or no; then we can discuss whether the inference I would like to draw from them is valid.

The actual argument is wrong.

Here’s a similar argument:

You have a rock tied to a ten-foot rope. Hold the end of the rope and the rock at the same level, and drop the rock. For convenience, gravity is magic and objects fall at a set rate here: 1 foot/second.

Case 0: the rock has fallen 0 feet.
Case 1: the rock has fallen 1 foot.
Case N: the rock falls 1 foot further than its position in case N-1.

Holding tightly to the end of the rope, use induction and tell me whether the rock ever stops falling.

I never said A thinks B sees the same thing A sees; read again.

Only after you look at post 161 and tell me that you think the car has crashed into the wall, and at post 167 and tell me that you think the rock falls forever.

You’re skipping people! You’re saying that A-B is the same as A-C. We know that already. But what you’re not getting is that A-C isn’t the same thing as A-B-C. You keep trying to figure out what A thinks about what’s in D’s head. That’s not the case. A isn’t trying to figure out what’s in the last guy’s head. He’s trying to figure out what’s in B’s head about what’s in C’s head about what’s in D’s head. And you keep jumping straight to A-D instead of A-B-C-D. Yes, A knows that B is wrong about certain permutations being in C’s head. SO WHAT?! He still doesn’t know if they’re in B’s head or not. One more time:

Reality - “There is 1 possibility”
D - “Of the 16 possible permutations, 2 are possible.”
C - “Of the 16 possible permutations, D has it narrowed to 2, one of which I know he’s wrong about. I don’t know which 2 he’s thinking of, but I’ve narrowed it down to 4. It could be this or it could be that pair, even though I know he’s wrong about one member of each pair.”
B - “Of the 16 possible permutations, C’s narrowed D’s thoughts down to 4 permutations. I don’t know which 4 though - the set where I, myself, am blue, or the set where I’m brown. It’s definitely a set of 4, but I don’t know which one it is. So C is thinking of 4 out of a possible 8 permutations.”
A - “Of the 16 possible permutations, B has C’s thoughts about D narrowed down to 8. But I have no idea which 8 it is. It’s either this set of 8 or that set of 8. But it’s definitely 8 out of those 16. And one of those 16 is the pattern 1111, or brown-brown-brown-brown.”

Is A saying that D thinks 1111 is possible? No, he’s not.
Is A saying that C thinks 1111 is possible? No.
Is A saying that B thinks 1111 is possible? No.
Is A saying that C thinks that D could be thinking 1111 is possible? No.
Is A saying that B thinks that C thinks that D thinks 1111 is possible? No.

He’s saying “It is possible that B thinks - erroneously - that C could be thinking that D could be thinking 1111 is possible.”
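
To put numbers on this, here’s a quick Python sketch of that chain (encoding and names are mine: a permutation is a tuple of eye colors for (A, B, C, D) with 0 = blue and 1 = brown, the real island is all blue, and nothing has been eliminated yet):

```python
def flip(perm, person):
    """The same permutation, but with `person`'s eye color swapped."""
    return perm[:person] + (1 - perm[person],) + perm[person + 1:]

def possibilities(perm, chain):
    """Every permutation that might be under consideration at the far end of
    `chain` (a list of person indices), starting from the true permutation
    `perm`.  Each person in the chain is unsure of their own color, so every
    link in the chain doubles the candidate set."""
    if not chain:
        return {perm}
    first, rest = chain[0], chain[1:]
    out = set()
    for p in (perm, flip(perm, first)):   # `first` might be blue or brown
        out |= possibilities(p, rest)
    return out

A, B, C, D = range(4)
actual = (0, 0, 0, 0)                     # 0 = blue, 1 = brown: all four are blue

for chain, label in [([D], "D's possibilities"),
                     ([C, D], "C's view of D's possibilities"),
                     ([B, C, D], "B's view of C's view of D's possibilities"),
                     ([A, B, C, D], "A's view of B's view of C's view of D's possibilities")]:
    worlds = possibilities(actual, chain)
    print(f"{label}: {len(worlds)} permutations; includes 1111? {(1, 1, 1, 1) in worlds}")
# The sizes come out 2, 4, 8, 16, and 1111 (brown-brown-brown-brown) only shows
# up in the last, fully nested set -- i.e. only once A's uncertainty about his
# own eyes is stacked on top of B's, C's and D's, matching the yes/no answers above.
```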

(Just a minor point, but I would prefer the “is not able to rule out” terminology to the “thinks erroneously” terminology, since the former is closer to what actually goes on; that is, these are perfectly logical people who never infer things that are false, but they may fail to infer things that are true)

It’s not demonstrable that for all N, if the rock ever falls to N feet, then the rock will at some point fall to N+1 feet. Accordingly, one cannot use induction to demonstrate that the rock never stops falling.
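
Concretely (names mine; with the magic 1 foot/second gravity and the ten-foot rope, the distance fallen is just min(t, 10)), here’s a quick check of the would-be inductive step:

```python
def feet_fallen(seconds):
    # Magic gravity: 1 foot per second, but the ten-foot rope caps the fall.
    return min(seconds, 10)

# The would-be inductive step: "if the rock ever falls to N feet,
# then at some point it falls to N+1 feet".
for n in range(11):
    reaches_n = any(feet_fallen(t) >= n for t in range(60))
    reaches_next = any(feet_fallen(t) >= n + 1 for t in range(60))
    print(n, (not reaches_n) or reaches_next)
# True for N = 0 through 9, False at N = 10: the inductive step fails exactly
# where the rope runs out, so induction can't carry the rock past ten feet.
```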

That’s because you’re using a “might” modality; the actual distance of the car from the wall doesn’t entirely determine the yes/no answers to questions about whether or not you will estimate the distance of the car from the wall to be X. So, in this case, the analogous reasoning to the islanders will not go through.

Look at it this way: what D is and is not able to prove is entirely independent of what D’s eye color is. Knowing what D’s eye color is gives you no information at all about what D is and is not able to prove. None whatsoever. Any information you happen to have about D’s eye color is of no use to you in obtaining information about what D is and is not able to prove. Similarly, knowing what C’s eye color is gives you no information at all about what C is and is not able to prove. And so on and so on. So it turns out the truth value of a statement like “A is able to prove that B is able to prove that C is able to prove that D is able to prove that [whatever you like]” is entirely independent of the actual eye colors of A, B, C, and D; if this statement is true for one assignment of eye colors to A, B, C, and D, then it’s true for all of them, because changing A’s eye color doesn’t change what A is able to prove, changing B’s eye color doesn’t change what B is able to prove (nor, therefore, what A is able to prove about what B is able to prove), and so on.

“X is able to prove P in case C” is a fancy way of saying “P is true in case C and P is true in the case which is like C but with X’s eye color flipped”.
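
Since that definition is completely mechanical, you can just run it. Here’s a throwaway Python sketch of exactly that (encoding and function names are mine: a case is a tuple of eye colors for (A, B, C, D), 0 = blue and 1 = brown):

```python
from itertools import product

A, B, C, D = range(4)

def flipped(case, person):
    """The case which is like `case` but with `person`'s eye color flipped."""
    return case[:person] + (1 - case[person],) + case[person + 1:]

def able_to_prove(person, prop):
    """'X is able to prove P in case c' == P holds in c and in c with X flipped."""
    return lambda case: prop(case) and prop(flipped(case, person))

def at_least_one_blue(case):
    return 0 in case   # 0 = blue

# "A is able to prove that B is able to prove that C is able to prove that
#  D is able to prove that there is at least one blue-eyed person":
depth4 = able_to_prove(A,
         able_to_prove(B,
         able_to_prove(C,
         able_to_prove(D, at_least_one_blue))))
# The same statement minus the outer "A is able to prove that":
depth3 = able_to_prove(B,
         able_to_prove(C,
         able_to_prove(D, at_least_one_blue)))

for case in product((0, 1), repeat=4):
    print(case, depth4(case), depth3(case))
# depth4 comes out False at every one of the 16 cases: its truth value doesn't
# depend on anybody's actual eye color, exactly as claimed.  depth3 doesn't
# depend on B's, C's or D's color either, but it does depend on A's: it is True
# precisely in the cases where A is blue -- including the all-blue case, where
# depth4 already fails.
```

So at the all-blue case the three-deep statement holds and the four-deep one fails - the one-level-per-blue-eyed-islander point from earlier.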

WRT the question about juggling across the bridge, did we decide that Chronos et al. were correct and Chessic Sense had it wrong?

I don’t believe Chessic Sense acquiesced to the point. In fact, I don’t believe anyone has changed their minds about anything in this thread. :slight_smile:

I like my “erroneously” terminology. B says “It’s possible that X is true” when in fact it’s not possible that X. That’s not an error in judgment on B’s part; it’s an error of fact - she doesn’t know it’s impossible.

It’s the same if I hold up the king of spades facing me and ask you what the card is. You could say “It is possible that it’s the 2 of diamonds” when in fact it cannot possibly be the 2 of diamonds at all. That’s the erroneous part. It doesn’t mean your reasoning is flawed, just your conclusion.

begbert is getting hung up on this difference. A knows that some of B’s possibilities are wrong and he’s getting frustrated that A still considers them possible. What he doesn’t get yet is that we’ve never said A considers it possible…just that A knows B is erroneously still holding on to that possibility just like you with the 2 of diamonds. Not that B reasoned wrong…she just doesn’t have his info.

Possibly. It depends on how hard Max tosses the ball initially, how fast he juggles, how long the bridge is, and how close to “the line” you consider him able to juggle. If you plot his weight over time, before he steps on the bridge, the curve shoots above the 171 line. This is the “positive” space we’ll use to cancel out the negative space that occurs while Max is on the bridge. There’s also a positive space for when he catches the falling ball at the end of the bridge when he’s safely on land again. The best strategy is, for his last “iteration”, to get all 3 balls airborne at the same time and catch 3 very heavy falling balls once he’s off the bridge. For the length of the bridge that I gave, it’s not humanly possible. For a shorter bridge, it is possible. How long is too long? I don’t know and I don’t want to do the calculus to figure it out. Essentially, the height of the balls gets closer and closer to Max’s hands as he walks, because he’s not putting as much energy into the system as he’s taking out. And he gets to throw the last ball extra hard because 2 are already airborne, so there’s some bonus time there. A specifically designed robot and bridge could probably do it, but no human could.

I suppose; of course, in this sense of “it’s not possible”, nothing is ever possible except for what is, in fact, the case. Regardless, my wording quibble was with “B thinks erroneously that …” giving the impression that B thinks something which is false, which never happens, given that B is a perfect logical machine. I don’t want begbert2 or other skeptics to think “No, B is logically perfect and can’t think anything false, so what nonsense are you spewing?” and reject the point being presented for this reason. The sense in which B thinks “It’s possible that…” is different from the sense in which “It’s possible that …” is not true. Specifically, the former is B thinking “It’s compatible with the information I have that…”, while the latter is just “But it’s not actually true that…”, which would not make B’s assertion erroneous.

I’ll agree with this, and I’ll tell you exactly how long is too long. The bridge is too long if Max is required to catch a ball while standing on the bridge.

I’m really confused about what the point of confusion is about the islanders. Though I’m not sure that I buy the notion that the visitor imparted some meta-knowledge (A knows that B knows that C knows that…) to the islanders. As far as I can tell, the visitor’s statement just provided a starting point for deductions, so that A, B, C, etc. are all thinking the same thoughts at the same time.

It’s not so much that the visitor’s statement directly imparted meta-knowledge as that it happens to effect a state of meta-knowledge which did not exist before. As I noted above, if the visitor happened to privately deliver his message in secret to each islander, the same effect would not occur.

(Before the statement, it was in fact false that everyone knew that everyone knew that everyone knew that… everyone knew that there was at least one blue eye, at suitably high levels. After the statement, this was true at all levels; it became “common knowledge”, in the technical sense of this term in epistemology. Not only do the islanders gain new knowledge about the epistemological facts, but intertwined with this, the epistemological facts themselves shift in truth value (as of course always happens whenever knowledge shifts). (Incidentally, even before the visitor, the blue-eyed islanders were all thinking the same thoughts at the same time, by virtue of being in isomorphic situations, so that’s not what the problem was). It’s perhaps easiest to see if the whole thing is spelt out abstractly in a suitable modal logic, though the only difference between that and any other way of presenting the argument is the cut-and-dry formality of it.)
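
For anyone who wants to see it spelt out that way, this is just the standard notation, sketched (K_i for “islander i knows”, nothing specific to this thread):

```latex
% K_i \varphi : islander i knows \varphi
% E \varphi   : everyone knows \varphi
% C \varphi   : \varphi is common knowledge
E\varphi \;:=\; \bigwedge_{i} K_i \varphi
\qquad\qquad
C\varphi \;:=\; \varphi \wedge E\varphi \wedge E^{2}\varphi \wedge E^{3}\varphi \wedge \cdots
```

With B blue-eyed islanders and φ = “somebody has blue eyes”, the pre-announcement situation is that E^k φ holds for every k < B but fails at k = B (so Cφ fails); after the public announcement, Cφ holds outright.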

I know we’re in agreement here. We’re just trying to find the right words. To be succinct (meaning, don’t pick on my words here):

Every day, D writes down on a paper what the possible permutations for the island are. C then records what she thinks D wrote. B then records what she thinks C (not D) wrote, and then A writes what he thinks is on B’s paper.

Day 0, pre-missionary: A has 16, B has 8, C has 4, D wrote 2, and reality has 1.
Day 0, post missionary: A has 15, B has 8, C has 4, D has 2.
Day 1, post noon: A has 11, B has 7, C has 4, D has 2.
Day 2: A has 5, B has 4, C has 3, D has 2.
Day 3: A has 4, B has 3, C has 2, D has 1.
Day 4: They kill themselves.

An anthropologist finds the papers from day 2’s afternoon and the islanders’ commentary on them. In each case, the first of the two alternatives is the part of the assumption that is actually correct.

D: Either [0000] or [0001].

C: D wrote either [0000 and 0001], or he wrote just [0010]. If the latter, he’ll know he’s a blue and kill himself! If he doesn’t kill himself, then it must be the former two, and…gulp…those both show me as blue. So if he’s alive tomorrow, I’ll know!

B: C either wrote down [0000, 0001, 0010] or she wrote [0100] and both she and D will end it tomorrow. If they’re still alive, then it must be that group of 3 and I’m blue in all those permutations, so I’ll know and have to die the next day.

A: B either wrote [0000, 0001, 0010, 0100] or he wrote just [1000]. If the latter, then B, C, and D will have to die tomorrow. If they don’t then it must have been the former group of 4. In all those scenarios, I’m blue and I’ll have to die with them all on Day 4.
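
FWIW, here’s a quick sanity check of that timeline: a little simulation sketch (my own code, same (A, B, C, D) ordering with 0 = blue). It tracks the set of permutations still compatible with everything that has publicly happened, rather than anyone’s individual paper, so the counts it prints aren’t the numbers on A’s paper; but it does confirm that with four blues and a Day 0 announcement, nothing happens until everyone works out their own color on Day 4.

```python
from itertools import product

NAMES = "ABCD"
actual = (0, 0, 0, 0)                 # 0 = blue, 1 = brown: all four are blue

# All 16 permutations; the missionary's announcement publicly rules out 1111.
worlds = {w for w in product((0, 1), repeat=4) if 0 in w}

def knows_own_color(w, i, model):
    """In permutation w, islander i can deduce their own color iff every
    still-possible permutation matching everything i can see agrees on i."""
    matching = [v for v in model
                if all(v[j] == w[j] for j in range(4) if j != i)]
    return all(v[i] == w[i] for v in matching)

day = 0
while True:
    day += 1
    # In each still-possible permutation, which blues would figure it out today?
    deduce = {w: [i for i in range(4)
                  if w[i] == 0 and knows_own_color(w, i, worlds)]
              for w in worlds}
    if deduce[actual]:
        names = ", ".join(NAMES[i] for i in deduce[actual])
        print(f"Day {day}: {names} know they are blue")
        break
    # Nobody acted, and everyone saw that, which publicly rules out every
    # permutation in which somebody *would* have acted today.
    worlds = {w for w in worlds if not deduce[w]}
    print(f"Day {day}: nothing happens; {len(worlds)} permutations remain possible")
# Prints: Day 1 (11 remain), Day 2 (5 remain), Day 3 (1 remains),
# then "Day 4: A, B, C, D know they are blue".
```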

Yes, we are fundamentally in agreement and just quibbling over the words/presentation. However, I’m a little confused by what your islanders are writing in this formulation. For example, when B claims that C either wrote down [0000, 0001, 0010] or [0100], what is it that C is attempting to write down?

The easiest way I find to visualize it is as follows: There’s a big multigraph with all the possibilities in it: 0000, 0001, 0010, and so on. We say two nodes are X-neighbors if they only differ on the color of X. For any predicate p on the possibilities, we can form a new predicate “X knows p” (or [X]p for short), with the “truth-table” rule that [X]p is true at a node just in case p is true both at that node and at its X-neighbor.

It’s then easy to see, just evaluating it via the truth-table rules, that [A][B][C][D](there is at least one blue) is false at every node. (Indeed, the truth value of [X][Y][Z]…[W]p at a node is always independent of the actual colors of X, Y, Z, …, W.)