Math question re: wandering randomly over a given area.

All right. For those of you who have been waiting patiently (and silently), I have calculated the expected number of transitions before the robot hits the given point.

For those who absolutely can’t wait, it’s 343 for the 20 x 20 grid in the OP. More generally, for an n x n grid, the expected number of transitions is 1 + (n-1)(n-2).
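For n = 20 the general formula does give the headline number; a one-liner to check the arithmetic:

```python
n = 20  # side length of the grid in the OP
print(1 + (n - 1) * (n - 2))  # 1 + 19 * 18 = 343
```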

“Well, that’s fine and dandy,” you say, “but what’s a transition? And where did you get these numbers?”

In order to explain, I need to first document my assumptions, then explain the Markov chain I created.

Here are my assumptions:[list=1][li] Both the initial point and the terminal point are on the interior of the grid.[/li][li] The line containing the initial point and the terminal point is not parallel to either side of the grid.[/li][li] The robot has a 1/4 chance of facing in any of the four cardinal directions at the beginning.[/li][li] The robot turns left with probability 1/2, and right with probability 1/2.[/li][li] The number of steps that the robot backs up is a uniform random variable over the set {1,…,19} (of course, this set has only integers).[/li][/list=1]I don’t know that this is exactly what the OP coded up, but these assumptions made my analysis a lot simpler.
So as I said earlier, I was using a 6-state Markov chain to model this behavior. The states are as follows:[list=1][li] The robot is at the initial point.[/li][li] The robot is on the west wall.[/li][li] The robot is on the north wall.[/li][li] The robot is on the east wall.[/li][li] The robot is on the south wall.[/li][li] The robot is at the terminal point (think of it as a pit into which the robot falls).[/li][/list=1]A transition is just a change of state. Not all transitions take the same amount of time, but that doesn’t matter for the Markov chain. If someone is bored, they might want to work out the expected time needed to go through 343 transitions.
I’m not going to try to code up the matrix of transition probabilities and make it look nice; rather, I’ll just give enough information to recreate it. I should add, for those of you who are unfamiliar, that the transition probabilities out of each state must add up to 1, so any unstated transition probabilities are equal to 0.

From state 1, the robot has a 1/4 chance of reaching each of states 2, 3, 4, and 5. From state 6, the robot has no chance of ever leaving; it stays in state 6 with probability 1 (in this case, we have certainty. ;)).

When the robot is on a wall, the situation is a bit more complicated. It has probability zero of staying on that wall or of returning to the initial point. It has probability 1/(n-2) of reaching the opposite wall, and probability 1/((n-1)(n-2)) of entering state 6. Its probability of reaching each of the two adjacent walls is 1/2 - n/(2(n-1)(n-2)). Deriving these is left as an exercise for the reader; one might be tempted to use conditional expectations.
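Since I’m not writing out the matrix, here’s one way a bored reader might reconstruct it and verify that every row sums to 1. This is a quick Python sketch of my own; the states are numbered 0–5 rather than 1–6, and n = 20 matches the grid in the OP:

```python
from fractions import Fraction

n = 20  # side length of the grid
opp = Fraction(1, n - 2)                      # reach the opposite wall
hit = Fraction(1, (n - 1) * (n - 2))          # fall into the terminal point
adj = Fraction(1, 2) - Fraction(n, 2 * (n - 1) * (n - 2))  # each adjacent wall

# States: 0 = initial point, 1 = W wall, 2 = N wall, 3 = E wall,
#         4 = S wall, 5 = terminal point (absorbing)
opposite = {1: 3, 2: 4, 3: 1, 4: 2}
P = [[Fraction(0)] * 6 for _ in range(6)]
P[0][1] = P[0][2] = P[0][3] = P[0][4] = Fraction(1, 4)
for w in (1, 2, 3, 4):
    P[w][opposite[w]] = opp
    P[w][5] = hit
    for a in (1, 2, 3, 4):
        if a != w and a != opposite[w]:
            P[w][a] = adj
P[5][5] = Fraction(1)  # the pit: the robot never leaves

for row in P:
    assert sum(row) == 1  # each row of a stochastic matrix sums to 1
```

Using exact rationals avoids any floating-point fuzz in the row-sum check; for n = 20 the wall-state probabilities come out to 1/18, 1/342, and 161/342.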

From here, it’s a simple matter to calculate the expectation given at the beginning. One need only know what the fundamental matrix of a Markov chain is, how to calculate it, and how to read its entries. Of course, the best resource for this is Google.
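For anyone who’d rather not Google: the fundamental matrix is N = (I - Q)[sup]-1[/sup], where Q is the transient-to-transient block of the transition matrix, and the row sums of N give the expected number of transitions before absorption. Equivalently, one can solve (I - Q)t = 1 directly, which is what this sketch of mine does with exact rationals (transient states 0–4 in the same order as before):

```python
from fractions import Fraction

def solve(A, b):
    """Gauss-Jordan elimination with exact rationals; returns x with A x = b."""
    m = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(m):
        piv = next(r for r in range(col, m) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        inv = Fraction(1) / M[col][col]
        M[col] = [v * inv for v in M[col]]
        for r in range(m):
            if r != col and M[r][col] != 0:
                f = M[r][col]
                M[r] = [v - f * w for v, w in zip(M[r], M[col])]
    return [M[r][m] for r in range(m)]

n = 20
opp = Fraction(1, n - 2)
hit = Fraction(1, (n - 1) * (n - 2))
adj = Fraction(1, 2) - Fraction(n, 2 * (n - 1) * (n - 2))

# Q: transitions among the transient states 0 (start) and 1-4 (walls)
Q = [[Fraction(0)] * 5 for _ in range(5)]
Q[0][1] = Q[0][2] = Q[0][3] = Q[0][4] = Fraction(1, 4)
opposite = {1: 3, 2: 4, 3: 1, 4: 2}
for w in (1, 2, 3, 4):
    Q[w][opposite[w]] = opp
    for a in (1, 2, 3, 4):
        if a not in (w, opposite[w]):
            Q[w][a] = adj

# Solve (I - Q) t = 1: t[i] is the expected number of transitions
# before absorption when starting from state i.
I_minus_Q = [[(1 if i == j else 0) - Q[i][j] for j in range(5)] for i in range(5)]
t = solve(I_minus_Q, [Fraction(1)] * 5)
print(t[0])  # expected transitions from the initial point
```

From the initial point this gives 343, and from any wall 342 = (n-1)(n-2), matching the formula at the top.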

So that’s my analysis. Questions?

Well, it’s the rare poster who knows what my name means. :slight_smile: Truth be told, I chose it mainly cause I figured it would be available. I’ll let you read my analysis and tell me what you think.

To tell you the truth, I wasn’t really looking to spend that much time on the problem ;). That said, I’ll give you what thoughts I did have.

I initially thought the natural way to model this with Markov chains would be to use 1600 states (:eek:): one for each position on the grid the robot could occupy (400), times 4 for the different directions the robot could face.

Then I thought this could be reduced, since there are really only two points where a decision is made (how many steps to back up from the edge, and which direction to face afterwards). So then I thought you could reduce it to 80 states: which wall you’re facing (one of four), times 20 (for which of the twenty spots along the wall you’re in). Each transition goes from one wall to the next, covering whatever squares lie in between (since you keep hitting a wall after each pair of decisions).

I’m not sure how you reduced it to 6 states; to me it seems like information is lost if “facing the west wall” is a single state, without knowing where on the wall you are. I’m certainly not saying it’s wrong offhand; it’s entirely possible you caught a shortcut I simply didn’t see.

Dammit, the computer screwed up my :eek: smiley.

In fact, thinking a little more about it, I believe I do now see the approach you took. It’s not really relevant where along the wall you are: the target is simply to the left or right of the robot, and since, once the direction has been decided upon, the robot continues in that direction until hitting the wall, position along the wall isn’t relevant after all. I didn’t bother to work out the algebra myself, but this certainly seems to be a valid approach to the problem.

I certainly hope it’s valid. I’m not doing a 1600-state Markov chain. And even if I were willing, it would be incorrect. The state would not only have to represent what point the robot is at, but also how it got there (because the robot can only turn once when moving from wall to wall).

Depends entirely on how one interprets the OP:

There’s a serious ambiguity here about “back up a randomly chosen number of steps.” For instance, one could argue that javaman is trying to say: back up a number of steps drawn from a uniform distribution on the integers 1 through 20. (As far as I can see, what follows goes through, with suitable modifications, for other reasonable interpretations of this point; this is the simplest.) So javaman’s procedure is equivalent to: pick a random cell/intersection on your current row, go there, and randomly turn direction. But this in turn is equivalent to: pick a number of steps uniformly from 1 to 20 and walk that far in the current direction, applying cyclic boundary conditions (i.e. if you walk off the grid, you reappear on the opposite side), then randomly change direction. Standard results about random walks are stunningly independent of the form of the step-size distribution and apply nicely. The probability of reaching any point is 1.
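One concrete sanity check on the “probability 1” claim, under the extra simplifying assumption (mine, for the sake of the sketch) that “reaching” a point means landing exactly on it, which is stricter than merely passing through it. Walking k ~ U{1, …, 20} steps with cyclic boundary conditions lands on a uniformly random cell of the current row or column, so each move just re-randomises one coordinate. The start and target cells below are arbitrary choices:

```python
import random

random.seed(1)  # make the run reproducible
N, CAP, TRIALS = 20, 10**6, 50
start, target = (3, 5), (7, 11)  # arbitrary cells; the claim holds for any pair

steps_taken = []
for _ in range(TRIALS):
    x, y = start
    for step in range(1, CAP + 1):
        # k ~ U{1, ..., 20} with wraparound puts the robot on a uniformly
        # random cell of its current row or column.
        if random.random() < 0.5:
            x = random.randrange(N)   # moving east/west
        else:
            y = random.randrange(N)   # moving north/south
        if (x, y) == target:
            steps_taken.append(step)
            break
assert len(steps_taken) == TRIALS  # every walk found the target within the cap
```

Every trial terminates long before the cap, which is what “reaches any point with probability 1” predicts; on an infinite grid the same question would need the standard recurrence theorems instead.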

Although this procedure (given a resolution to the ambiguity) is exactly equivalent to the procedure referring to reflections for this particular 20x20 case, the two can generalise in completely different ways. In particular, a random walk on an infinite plane grid with a step size chosen uniformly between 1 and 20 is entirely covered by the standard theorems.

Which is not to suggest that I shouldn’t have skimmed the OP quite so quickly.

Yeah, that’s right, I didn’t stop to think about that extra information needed for the 1600 case.

This has actually turned out to be one of my hotter threads.

To those of you who ask why I didn’t run a search on “Random Walk Problem”, or “Markov chains”, please bear in mind that I don’t have a lot of training in mathematics, and those types of sites tend to scare away those who don’t.

But I do wish I’d learned more, and am always willing to try now.