What odds would it take for you to bet on Trump winning reelection in 2020?

Indeed. Make it 2-to-1 and I'd have to do some soul-searching. As is, I'd bet nothing at 1-1, several $1000s at 5-1, and at 20-1 I'd bet enough that my kids would be set for life if the bet came in. Of course I might need a medical team to induce a coma, as my conflicting hopes would be too traumatic to cope with while conscious!

Probability estimates always reflect one’s ignorance. As an extreme example: I just flipped a coin. What is the chance it came up Heads? You can do no better than to estimate 50%, but I’m looking at the coin: my estimate will be either 0 or 100%.

For something as complex as Trump’s election, you can fight that ignorance by studying polls or whatever, but whether you study a lot or only a little, you will have, in principle, some probability estimate that reflects your ignorance — ignorance about future events, about even the likelihood of those events, and ignorance of present-day data which is knowable but you don’t happen to know.

Even your notion of “error bars” here is, at best, very ambiguous. What you might speak of is a probability distribution of what you’ll guess the re-election chance to be 3 months from now. That distribution will have a mean (which will be your exact estimate today) and a standard deviation — is (twice) that standard deviation what you mean by “error bars”? Note that you need to specify what (future?) pdf you’re basing these “bars” on. Three months from now? Six months? Assuming you consult with top political thinkers? …
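If it helps to make that concrete, here's a rough sketch (the Beta prior and the 50 poll-like observations are my own toy assumptions, nothing from the thread): treat today's estimate as the mean of the distribution of what your estimate will be three months from now, and take twice that distribution's standard deviation as the "error bars."

```python
import numpy as np

# Toy sketch (hypothetical numbers): today's belief about the re-election
# chance is summarized by a Beta(4, 6) distribution, whose mean is 0.40.
rng = np.random.default_rng(0)
prior_a, prior_b = 4, 6
today_estimate = prior_a / (prior_a + prior_b)           # 0.40

# Suppose that over the next 3 months you'll see 50 "poll-like" observations.
# Simulate many possible futures and record what your revised estimate would be.
future_estimates = []
for _ in range(100_000):
    p_true = rng.beta(prior_a, prior_b)                   # a world consistent with today's belief
    wins = rng.binomial(50, p_true)                       # the data you might see
    post_a, post_b = prior_a + wins, prior_b + 50 - wins
    future_estimates.append(post_a / (post_a + post_b))   # your estimate 3 months from now

future_estimates = np.array(future_estimates)
print(f"today's estimate:            {today_estimate:.3f}")
print(f"mean of future estimates:    {future_estimates.mean():.3f}")   # ~0.40, matches today
print(f"'error bars' (2 x std dev):  {2 * future_estimates.std():.3f}")
```

The mean of the simulated future estimates comes back at today's estimate, which is the point about the mean above; the spread depends entirely on how much information you assume will arrive in the interim, which is exactly the ambiguity being pointed out.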

… And the huge uncertainties make it very difficult to estimate. What are the chances of recession? Catastrophe in Syria? Et cetera? But these uncertainties don’t lead directly to any notion like “error bars.” Your probability estimate will reflect your uncertainty. … And, as I demonstrated in a recent thread, a thought experiment can be devised where a whimsical Bill Gates can effectively force you to come up with a probability estimate!

Spoiler: Gates pays you $500,000 to announce a probability estimate; you must fade $100,000 on whichever side Gates takes, at the odds you quoted him. To refuse to make an estimate would be to pass up the $500,000.
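One way to see why the offer forces an honest estimate (the payoff split below is my own assumption, since the spoiler doesn't spell out the mechanics): if the $100,000 is split at the odds you quote and Gates backs whichever side favors him, your expected loss is smallest when you quote what you actually believe, and the $500,000 fee dwarfs it regardless.

```python
# Hypothetical formalization of the Gates bet (my assumption, not spelled out
# in the thread): $100,000 total is at stake, split at the odds implied by the
# probability p you announce, and Gates backs whichever side favors him.
def expected_loss(p_quoted: float, q_true: float, stake: float = 100_000) -> float:
    # If Gates backs "event happens": he stakes stake*p, you stake stake*(1-p),
    # so your expected profit is stake*(p_quoted - q_true); the mirror image
    # holds if he backs "event doesn't happen". He picks whichever hurts you.
    ev_if_gates_backs_yes = stake * (p_quoted - q_true)
    ev_if_gates_backs_no = stake * (q_true - p_quoted)
    return -min(ev_if_gates_backs_yes, ev_if_gates_backs_no)

q = 0.40  # what you privately believe
for p in (0.20, 0.40, 0.60):
    print(f"quote {p:.0%}: expected loss ${expected_loss(p, q):,.0f}")
# quote 20%: expected loss $20,000
# quote 40%: expected loss $0
# quote 60%: expected loss $20,000
# Either way you keep most of the $500,000 fee, so refusing makes no sense.
```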

Let’s see, should I bet on a horse race a year away when I don’t even know which horses will gallop along? :smack:

Sorry, too premature for rationality. But since I have an extra MX$200 bill here (worth about US$10) I’ll go for 1:100. Won’t retire on my winnings but I can buy bus tickets to Tehuantepec.

But you need to distinguish between ignorance of the probability space, and ignorance of a particular outcome, which are quite different things.

If I have perfect knowledge of the probability space, i.e. I know the parameters of the distribution of the random variable, i.e. my model for what is happening is perfect, then my probability estimates will be perfect. You will not be able to beat me by making better probability estimates; you will only be able to beat me with prior insider knowledge of a specific outcome. For a fair coin, I will always offer 50% implied odds, and you can't make money by betting against me unless you cheat and look at the outcome of a flip before making your bet.

But if my knowledge of the probability space is imperfect, you can beat me by making better probability estimates. If I think the coin is fair, but you know it’s biased, I will make incorrect 50% estimates and offer you 50% implied odds, and you can bet in the direction of the bias. You will make money in the long run, because your probability estimates are better, not because you have prior knowledge of any specific outcome.
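A quick simulation of that biased-coin case (the 60% bias is just a number picked for illustration): the bettor who knows the bias grinds out a steady profit against even-money odds without knowing the outcome of any single flip.

```python
import random

random.seed(1)
TRUE_HEADS_PROB = 0.60   # illustrative bias; the naive bookmaker assumes 0.50
STAKE = 1.0              # even-money bet: win $1 or lose $1 per flip

profit = 0.0
flips = 100_000
for _ in range(flips):
    heads = random.random() < TRUE_HEADS_PROB
    profit += STAKE if heads else -STAKE   # informed bettor always backs heads

print(f"profit after {flips} even-money flips: ${profit:,.0f}")
print(f"average edge per flip: ${profit / flips:.3f}   (theory: ${2 * TRUE_HEADS_PROB - 1:.2f})")
```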

No, the notion of error bars makes perfect sense here. It means you lack confidence in your understanding of the probability space; you don't know whether your model has interpreted the significance of all these factors correctly. It makes perfect sense to say that you can still make a probability estimate from all the information available, but you lack confidence in how close that estimate is to the true probability. If I'm watching a Rugby World Cup match, knowing nothing about rugby, it makes perfect sense to say that at half time both my rugby-playing friend and I might estimate that South Africa have a 75% chance of winning, but that I have far less confidence in whether my estimate is accurate. My friend might be willing to bet with someone at 75% implied odds; I would not.

I’m not going to wade through that thread, but obviously whether it makes sense to offer a bet to Bill will depend on whether you are confident that the error bars on your probability estimates are small, i.e. you are sure that you have accurate knowledge of the probability space.

I don't like to go out of my way to take bets, so despite the fact that 5:1 would be more than fair odds, it still wouldn't entice me to pony up the dough. If the bet seemed legitimate I'd take 20:1, though I wouldn't lay down over $100 on it because the other party could still renege.

If we believe in determinism then an omniscient being would assess the probability as either 0 or 100%. Without determinism we still suppose that the best-informed player will have a better estimate than us. Is this what is meant by “error bars”? An estimate of the deviation between our guessed probability estimate, and that of a hypothetical best-informed player? Comparing our estimate with some better estimate is what I wrote about in my post.

Assuming determinism and that our estimate is 40%, we already know that the deviation from an omniscient guess is either 40% or 60% exactly!

If we reject determinism, we’ll never know what a “best” estimate would have been, but can watch estimates evolve over time, or as one’s information grows.

You don’t need to wade through anything to understand the thought experiment: the tiny paragraph at the end of my previous post gives the gist.

Your estimate is your best effort based on your own information and ignorance. You’re hoping that you’re better-informed than the bookmaker, but in any event you have some probability estimate that dictates how you should bet. (If you guess that the bookmaker, or prediction market, is well-informed you can use that as additional information to modify your own estimates.)

You are depicting the prediction problem as a perfect model coupled with random variables. But the boundary between the model and the randomness will always be fuzzy. In the simple coin toss example, suppose I've already flipped the coin but haven't looked. Does your "perfect" model judge the chance of Heads to be 50%, or to be {0 or 100%, but we don't know which}? Does it matter?

Suppose you think a key variable is whether Trump is addicted to uppers; you assess his re-election chance as 20% and 60% for the two cases; you judge the chance of the addiction to be 50%. Ordinary probability arithmetic means you think he is 40% (.5×20% + .5×60%) to be re-elected. In this example saying “40% chance” and saying “half the time 20%, half the time 60%” are two ways to say the same thing! Sure, a better-informed player (i.e. one who knows whether Trump is addicted) would have a better estimate, but that’s my point: A player’s probability estimate is determined by his ignorance.
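Spelling out that arithmetic, and the "two ways to say the same thing" point (using the purely illustrative 20% / 60% / 50% figures from the example): the marginal 40% and the two conditional branches give identical expected value for any bet you might place.

```python
# Figures from the uppers example above (purely illustrative).
p_addicted = 0.5
p_win_if_addicted = 0.20
p_win_if_not = 0.60

# Law of total probability: the marginal re-election chance.
p_win = p_addicted * p_win_if_addicted + (1 - p_addicted) * p_win_if_not
print(p_win)   # 0.4

# Expected value of a $1 stake that returns $2.50 total if he wins (fair at
# 40%) -- identical whether you use the single 40% figure or the two branches.
payout = 2.50
ev_marginal = p_win * payout - 1
ev_branches = p_addicted * (p_win_if_addicted * payout - 1) \
              + (1 - p_addicted) * (p_win_if_not * payout - 1)
print(ev_marginal, ev_branches)   # both 0.0
```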

In the drug-addiction example, you’d prefer to have inside information but in any case can compute your chances according to your own knowledge and ignorance. (You’ll also want to guess the chance that the bookmaker has the inside drug-addiction info, and adjust your own estimate accordingly.)

What especially matters is how your prediction efficacy compares with that of the counter-party to your bet.

In any event, we seem to be in agreement that "error bars" reflect the deviation between your probability estimate and some pdf. In my post I argue that the specific pdf needs to be defined. But you postulate a "perfect" pdf. If you reject my reductio ad absurdum via determinism (where the perfect estimate is either 0 or 100%), surely you'll agree that even a near-perfect pdf cannot be attained by a human.

Instead of the arithmetic above based on Trump’s present drug addiction (a knowable), suppose you think an earthquake (unknowable) next summer is the key variable. The arithmetic still looks the same.

But at some level, Riemann, you can’t separate knowledge of the probability space from knowledge of the outcomes. If all the coins in the world are fair, then 50-50 is the correct odds for a coin flip. But if all of the coins in the world are unfair, but distributed evenly in which direction they’re unfair, and neither of the bettors knows what coin is being used, then 50-50 is still the correct odds.
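A toy check of that claim (the particular symmetric pair of biases is just an assumption for illustration): if every coin is unfair but the biases are distributed evenly around 50% and nobody knows which coin is in play, the unconditional heads rate still comes out at 50%.

```python
import random

random.seed(2)
flips = 200_000
heads = 0
for _ in range(flips):
    # Every coin is unfair, but the biases are symmetric about 0.5 (illustrative
    # choice): half the coins land heads 70% of the time, half only 30%.
    bias = random.choice([0.30, 0.70])
    heads += random.random() < bias

print(f"heads rate over {flips} flips with unknown unfair coins: {heads / flips:.3f}")
# ~0.500 -- with no knowledge of which coin is in play, 50-50 is still the right price.
```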

Other answer.

I consider myself a patriotic American and support the form of government we have. Trump is the exact opposite of the person who should hold the office he is currently tarnishing. Nothing would convince me to vote for that SOB.

I think Trump has a 50/50 chance of winning. It is going to be another extremely close election.

It's not like you'd be voting for him in this scenario. If you bet enough, it might push you past the tipping point where you're actually hoping he'll win :eek: but that's different.

Allan Lichtman recently said it's too soon to call, and I'm inclined to agree. If nothing changes with the economy, I'd say we're looking at a coin toss.

But this poll isn’t asking you to vote for Trump, or support him. It’s just asking how likely/unlikely you think he is to win reelection, such that you’d bet money on it.

You can bet money on the Houston Astros winning the World Series without liking or supporting the Astros one bit.

I’d take Trump in the +130 to +140 range.
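For anyone not fluent in American odds, +130 to +140 works out to roughly a 42-43% implied chance; here's a sketch of the standard conversion (the only inputs are the poster's quoted range):

```python
def implied_prob_from_american(odds: int) -> float:
    """Convert American odds to the implied probability of the outcome."""
    if odds > 0:
        return 100 / (odds + 100)
    return -odds / (-odds + 100)

for odds in (130, 140):
    print(f"+{odds} -> implied probability {implied_prob_from_american(odds):.1%}")
# +130 -> 43.5%
# +140 -> 41.7%
```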

The range of odds in the poll goes so far beyond the true odds that I took it to be asking how large a payoff above the true odds you would need to be offered to be willing to take a financial interest in an outcome that most of us find distasteful.

The first consideration is whether the counterparty has insider knowledge of the specific outcome that you’re betting on. Assuming he does not, that you have equal knowledge, it’s your confidence in your model - i.e. the error bars around your probability estimate - that determines whether you should be willing to bet with someone. And I think you need to be much clearer on separating these two aspects.

Again, you seem to be confusing perfect knowledge of the probability space (the model) with knowledge of specific outcomes. Your “0% or 100%” scenario is not a reductio ad absurdum, it’s the realization of one particular outcome in the probability space. For a fair coin, 50%-50% is the perfect model. After the coin is flipped, the probabilities haven’t gone to 0% or 100%, rather you have realized one outcome in the probability space. We don’t talk about probabilities for outcomes that have already happened. The notion of probability requires true uncertainty - a random or effectively random distribution, with unknown future outcomes that are a function of the parameters of the probability distribution (our model for estimating probability). Randomness is intrinsic to the concept of defining a probability space or a probability distribution.

Again, to be crystal clear, there are two distinct types of uncertainty here.

(1) In principle there exists a perfect model which can give perfect probability estimates. A perfect model does not mean post hoc knowledge of specific outcomes. It means perfect knowledge of the parameters and shape of the probability distribution, perfect knowledge of the probability space. Assuming that all parties have equal knowledge, they will have models to estimate probability. For casino-type betting, perfect models are well known and available to everyone. For real-world sports and politics betting, the situation is far too complex to have a Hari Seldon-style perfect model. So there are good models and bad models. If (say) we don't understand a sport very well, we should expect that our model is not very good, so we should have low confidence in our model of the probability distribution - hence uncertainty about whether our probability estimate is accurate. This is what Sam was talking about earlier with wide error bars on our current estimates for the election.

(2) Uncertainty about future unknown outcomes that is attributable to true randomness (as with radioactive decay) or effective randomness (unknown or unknowable complex deterministic factors), which is intrinsic to the concept of probability. (A toy sketch separating the two types follows below.)
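A toy way to keep the two apart (the uniform spread of possible biases is pure invention for illustration): type (1) is the spread in what we think the parameter is; type (2) is the flip that stays random even once the parameter is known.

```python
import numpy as np

rng = np.random.default_rng(3)

# Type (1): we are not sure what the true heads probability is. Model that
# (illustratively) as a belief spread evenly over biases between 0.4 and 0.8.
possible_biases = rng.uniform(0.4, 0.8, size=50_000)

point_estimate = possible_biases.mean()
model_error_bars = 2 * possible_biases.std()     # confidence in the model itself

# Type (2): even with the bias pinned down, each flip is still random.
outcomes = rng.random(50_000) < possible_biases  # one flip per hypothetical world

print(f"probability estimate (mean over possible models): {point_estimate:.3f}")
print(f"type (1): 'error bars' on that estimate:          +/- {model_error_bars:.3f}")
print(f"type (2): overall heads rate {outcomes.mean():.3f}, but any single flip is still 0 or 1")
```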

You jumped from even odds to 5 to 1; that skips over where the great majority of votes would fall. I'd take 3 to 1 odds.

Thank you, Riemann, for summing that up…

There are different schools of thought on this but I don’t think there’s anything immoral about taking a bet that has good odds and only pays out if something bad happens.

Insurance companies are pretty good at not doing this, but if someone offered to sell me $10 million worth of fire insurance on my very-much-not-worth-$10 million house for $1, I’d take it, and I wouldn’t feel bad about it.

There are definitely cases where it can introduce a conflict of interest or moral hazard (it’s probably a bad thing if my house is worth more to me as a pile of cinders than a house), but this doesn’t seem like one of them. If I knew how to influence a Presidential election, I’d already be doing it!

Making a way-in-the-money bet that Trump will be elected is just bad-President insurance sold at a discount.

I think 5-1 is about right. He barely squeaked through last time and he’s gotten less popular since, so if he was a 2-1 underdog going into last Election Day, he’s probably at least 3-1 now. Then you have to consider the possibilities of impeachment or death.

For those who, understandably, don't want to bet on Trump winning, you can just reverse it and bet the other way. So, would you be willing to invest $5 (or $10, $100, etc.) in order to earn $1 if anyone BUT Trump wins in 2020?

As stated earlier in the thread more than once, market odds are 1.43-1 against Trump (midpoint), i.e. you only have to risk $1.43 to make $1 if you back anyone-but-Trump.
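Converting that quoted market price into implied probabilities, just to make the terms concrete (the 1.43 midpoint is the figure cited above):

```python
odds_against_trump = 1.43   # market midpoint quoted above: 1.43-1 against

p_trump = 1 / (1 + odds_against_trump)                    # ~0.412
p_field = odds_against_trump / (1 + odds_against_trump)   # ~0.588

print(f"implied chance Trump wins:          {p_trump:.1%}")
print(f"implied chance of anyone-but-Trump: {p_field:.1%}")
# Backing anyone-but-Trump means risking $1.43 to win $1, which is a good bet
# only if you think the field's true chance is better than ~58.8%.
```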