So suppose we’ve got a Cauchy random variable, with density function f(x) = 1/(pi*(x^2 + 1)). The expectation of this variable is undefined, as the integral of x/(pi*(x^2 + 1)) over the real line does not converge. How should this be interpreted? I’ve always taken it to mean that no matter how large a reading I get on this variable, I can expect to see a larger reading within a finite number of further trials. Is this correct, or even close to justifiable?
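To spell out why the integral fails (standard calculus, added here just for concreteness):

\[
\int_{0}^{\infty} \frac{x}{\pi(1+x^{2})}\,dx = \left[\frac{\ln(1+x^{2})}{2\pi}\right]_{0}^{\infty} = \infty,
\]

and by symmetry the integral over the negative half-line is \(-\infty\). Since each half is infinite, the two cannot be combined into a single value; the symmetric principal value happens to be 0, but that is not the expectation.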
Truth is that I’m not so up on technical math lingo. I don’t see how you’d get an initial “reading”. But I would say that the value is larger than any conceivable value that you might assign to it.
According to my probability textbook (Jim Pitman’s Probability), the quotient of two independent standard normal variables has a standard Cauchy distribution. So there’s how I get an initial reading, and all subsequent readings.
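If anyone wants to see those readings generated, here’s a quick sketch (my own, assuming NumPy; the variable names are mine):

```python
import numpy as np

rng = np.random.default_rng(0)

# A Cauchy "reading" as the ratio of two independent standard normals
# (per Pitman), alongside NumPy's built-in standard Cauchy sampler.
z1 = rng.standard_normal(100_000)
z2 = rng.standard_normal(100_000)
ratio = z1 / z2
builtin = rng.standard_cauchy(100_000)

# Quartiles of a standard Cauchy are -1, 0, 1; both samples should agree.
print(np.percentile(ratio, [25, 50, 75]))
print(np.percentile(builtin, [25, 50, 75]))
```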
The reason I ask is due to the St. Petersburg paradox, which I saw tonight on this page. There’s a good description of it there, so I’m just gonna give the gist: a fair coin is flipped until it first comes up heads, and if that happens on flip n, the player wins 2^n dollars. The expected payout is then (1/2)(2) + (1/4)(4) + (1/8)(8) + … = 1 + 1 + 1 + …, which diverges, so in theory any finite entry fee is worth paying, yet nobody would stake more than a few dollars.
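Just to see it happen, I ran a quick simulation (my own sketch of the game above, nothing from that page):

```python
import numpy as np

rng = np.random.default_rng(1)

def play_once(rng):
    """Flip a fair coin until heads; pay 2**n if the first heads is flip n."""
    n = 1
    while rng.random() < 0.5:  # tails: keep flipping
        n += 1
    return 2 ** n

payouts = np.array([play_once(rng) for _ in range(100_000)])
running_avg = np.cumsum(payouts) / np.arange(1, payouts.size + 1)

# The average payout never settles: a rare long run of tails
# periodically jolts it upward, echoing the divergent expectation.
print(running_avg[[99, 999, 9_999, 99_999]])
```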
I think that my interpretation fits there, so I wondered if it was more generally applicable.
Slight hijack: the resolution of the St. Petersburg “paradox” is based in reality, not math. In other words, there are always real-world constraints not mentioned in the problem. Everyone knows that no casino has deep enough pockets to pay off on such a bet if the coin came up tails for a while. Since every real-world casino can only afford to lose so much, any similar bet offered in the real world would have a finite expectation.
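To put a number on “finite expectation”: if the house can pay out at most some fixed bankroll, the game’s value collapses to pocket change. A sketch (the $1,000,000 cap is my own arbitrary choice):

```python
def capped_expectation(bankroll):
    """Expected St. Petersburg payout when the house pays at most `bankroll`."""
    ev, n = 0.0, 1
    while 2 ** n <= bankroll:
        ev += (0.5 ** n) * (2 ** n)  # each uncapped round contributes exactly $1
        n += 1
    ev += 0.5 ** (n - 1) * bankroll  # every longer run just pays the cap
    return ev

print(capped_expectation(1_000_000))  # hypothetical $1M bankroll: roughly $21
```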
Lewis Carroll offered a variant of the St. Petersburg paradox in which a debtor perpetually put off paying his debt by offering the bill collector twice as much if he came back to collect the next day. The next day, he did the same thing, and so on…
I don’t understand the “problem”. The infinite expectation comes from summing over the infinite number of possible coin flips, not from the amount won on any single flip. Am I missing something?
I’m not sure your “expect to see a larger reading…” statement is true, but I think I see where you’re going.
The interpretation I’ve had (of the expected value being undefined) is that no matter how many samples you take, you cannot expect the sample mean to converge to anything as the number of samples increases. It just drifts around. This is because the probability of getting an “outlier” stays high enough to jerk the sample mean anywhere at any time.
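That drift is easy to see numerically; here’s a minimal sketch (mine, assuming NumPy):

```python
import numpy as np

rng = np.random.default_rng(2)
samples = rng.standard_cauchy(1_000_000)
running_mean = np.cumsum(samples) / np.arange(1, samples.size + 1)

# Unlike a normal variable, the running mean never converges:
# one far-out draw can yank it to a new level at any sample size.
for k in (1_000, 10_000, 100_000, 1_000_000):
    print(k, running_mean[k - 1])
```

In fact, the average of n independent standard Cauchy draws is itself standard Cauchy, so averaging buys you nothing at all.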