There’s no cute game matrix, sorry, but you can draw your own and fill it in if you wish. I didn’t run out all the lines because the lines ran out.
Let us suppose that we will attempt to manage the uncertainties which trouble you by assigning what we take to be manageable probability ranges, and then apply the more likely of those ranges to the benefits, costs, and potential detriments, to see if there isn’t an optimization algorithm we can apply so as to make a decision in the present more rational.
For the sake of argument, let’s pick the temperature increase at 2100 as our initial marker, from which we will extrapolate consequences backward so as to bring the proposed onset of the particular harms within a more easily foreseeable range of time, so that the efficacy (or lack of it) of any proposed amelioration has at least some parameters for us to play with.
Further, let us pick as anthropogenic, and therefore potentially subject to amelioration via some societal effort, a component ranging from 0% to 100%.
It will be generally true, whatever the particulars of the amelioration might be, that the “low hanging fruit” will be cheapest. Most of our systems being chaotic and inefficient, the first ten percent of effort will generally yield (arguendo) 30% of the benefit, with diminishing returns from there, so that ameliorating the last 10% of a problem is likely to double the cost of the venture; that is to say, the cost of the last ten percent equals the cost of the first 90% of improvement.
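Since we are playing with parameters anyway, here is a minimal sketch of that diminishing-returns claim. The anchor points are the ones just given (10% of the effort buys 30% of the benefit; the last 10% of the problem costs as much as the first 90%); the layout and names are mine, and nothing is claimed about the curve between the anchors.

```python
# The diminishing-returns anchors from the text, as (cumulative cost,
# cumulative benefit) pairs, both as fractions of a full 100% fix.
anchors = [
    (0.00, 0.00),
    (0.10, 0.30),  # first 10% of effort yields 30% of the benefit
    (0.50, 0.90),  # the first 90% of improvement takes half the money...
    (1.00, 1.00),  # ...and the last 10% of the problem doubles the bill
]

# Marginal cost per unit of benefit on each leg of the curve.
for (c0, b0), (c1, b1) in zip(anchors, anchors[1:]):
    marginal = (c1 - c0) / (b1 - b0)
    print(f"benefit {b0:.0%} -> {b1:.0%}: {marginal:.2f} cost units per benefit unit")
```

The last leg comes out roughly fifteen times as expensive per unit of benefit as the first, which is the whole point of going after the low fruit first.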
Whatever the costs of any proposed amelioration, of course, they must be considered in light of the possible detriments, reduced, if you wish, to present value to the extent possible.
Now, one of the problems with guessing how bad the shit will hit the fan if, say, the rise by 2100 is 10 degrees and we do nothing, is that if we err on the downside in our estimate, we embrace the outlier risk if it is catastrophic.
What I mean is, look at Katrina and the Tsunami. (The Tsunami, of course, unrelated to global warming, but I suppose it would have been worse if it started out with sea levels 10 feet higher, no?)
So the problem is that things can get real exaggerated if you hit a 10,000-year-event trifecta. There is, I think, no doubt that we remain vulnerable to a repeat of Katrina somewhere else in the nation (and I believe there are several candidates, including the Sacramento delta (key word: delta, as in Mississippi delta)) and, in Bangladesh, the Ganges delta.
Anyway, Katrina is something like a 250 billion dollar beef. So if you say, well, the outlier temperature rise will give you a 30 foot sea level rise (covering, among other things, the southern part of Florida from Tampa south…) and the middle number gives you 15 feet, we can say that the projected (and already borne) costs look fierce.
Then to the human contribution. Unless there is zero human contribution, there is some component of the problem that is subject to amelioration, at some cost or other. If, arguendo, the rise were ALL anthropogenic, and we set out to get the easy 30% of amelioration (everybody drives hybrids…), we would expect that however bad the shit would get, we would have cut it back 30% (big savings). Of course, this payoff drops if the human contribution is only 50%, but still, we get a 15% reduction for what is generally 5% of the cost of getting to 100%.
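That arithmetic is simple enough to write down. A sketch, using only the paragraph’s own numbers (the 30% easy amelioration and the roughly-5%-of-full-cost figure are the text’s; the function name is mine):

```python
def harm_reduction(human_share, easy_fix=0.30):
    """Net cut in total harm: the share of the problem that is
    anthropogenic times the share of that we actually manage to fix."""
    return human_share * easy_fix

for share in (1.0, 0.5):
    print(f"human share {share:.0%}: harm cut {harm_reduction(share):.0%}, "
          f"for roughly 5% of the cost of a 100% fix")
```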
With that matrix, we can try to analyse the cost of being wrong. That is to say: we can be wrong by spending money we don’t need to, because the problem is not as bad as the lost resources; we can be wrong by spending money on the wrong thing, because the problem is as bad as we thought, but the spending we picked crowded out other spending we needed to do; or we can be wrong by not spending, when the problem is real bad and some spending would have produced some amelioration.
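Since I promised no cute game matrix up top, here is a homely one you can fill in yourself: the three ways of being wrong just listed, plus the ways of being right, laid out choice-by-world. The entries are qualitative placeholders from the argument, not estimates.

```python
# Choice x world matrix; entries are the text's qualitative outcomes.
matrix = {
    ("spend on the right things", "problem mild"): "wasted money (bounded: a few % of GDP)",
    ("spend on the right things", "problem bad"):  "harm ameliorated; money well spent",
    ("spend on the wrong things", "problem mild"): "wasted money (bounded)",
    ("spend on the wrong things", "problem bad"):  "wasted money PLUS avoidable Katrinas",
    ("don't spend",               "problem mild"): "kept the money; got lucky",
    ("don't spend",               "problem bad"):  "kept the money; unbounded catastrophe",
}

for (choice, world), outcome in matrix.items():
    print(f"{choice:26s} | {world:12s} | {outcome}")
```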
If we are going to be wrong because we didn’t spend money that would have been efficacious, the detriment has the potential to be virtually infinite, and the benefit to us of not spending the money is just that: we don’t spend the money.
Such a benefit demands that we pause for a moment and think about things we ARE willing to spend money on (like 2 trillion on Iraq long term…).
So let’s call the imputed benefit to GDP 3%, to pick a middle number, and see what kind of reasonable demands we will place on the amelioration matrix if that is the hipshot cost. (Which I concede only arguendo, because there’s lots of work to the contrary, but anyway, we move on.)
If we’re going to be wrong by not spending, we get the 3% growth, but we get some multiple of the number of Katrinas, so it’s not going to require a whole lot of anthropogenesis and efficacy to pencil out that this would be a bad way to be wrong.
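To see just how little efficacy that takes, one back-of-envelope break-even. The $250 billion Katrina and the 3% hipshot are the text’s numbers; the GDP figure is my assumption (roughly the US number when Katrina was fresh), and one-time versus recurring is deliberately glossed over, as above.

```python
GDP          = 13e12   # assumed: ~US GDP in dollars, mid-2000s
SPEND_SHARE  = 0.03    # the text's hipshot 3% of GDP
KATRINA_COST = 250e9   # the text's "250 billion dollar beef"

spend = SPEND_SHARE * GDP
print(f"3% of GDP = ${spend / 1e9:.0f}B = {spend / KATRINA_COST:.1f} Katrinas")
# Avoid a couple of Katrinas and the spending has already penciled out,
# before counting anything else on the detriment side.
```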
Now, turning to being wrong by spending (by trying). This, of course, is the only way to get to the two bads: spending when it turned out to be unnecessary, or spending on the wrong thing.
If EVERYTHING we do is completely wasted, because we are terminally stupid, we have lost 3% of GDP and we have as many Katrinas as we would have had without the spending.
If we use the 15% amelioration as a benchmark, and assume that we pick only half the right spending, we are still looking at 7.5% amelioration of some pretty hairy shit. In other words, at some point of foregone GDP, namely the amount necessary to get to the low hanging fruit, any sort of prudent analysis dictates at least spending on SOMETHING…
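Spelled out, with the 50% hit rate being the assumption just made and the 15% coming from the 50%-anthropogenic case above:

```python
amelioration = 0.50 * 0.30   # human share x easy fix, per the earlier arithmetic
hit_rate     = 0.50          # assume only half our spending picks actually work
print(f"effective amelioration: {amelioration * hit_rate:.1%}")   # -> 7.5%
```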
Having decided to spend, if we are to be wrong because we spent, but on the wrong thing, it is important to notice that this unpleasant eventuality can only happen when the problems are bad, but we made matters worse for ourselves because we limited our overall spending and let something that we recognized as potential amelioration slide for budget reasons. This is a flexible screen that is susceptible of political modification, i.e., we can decide to do both if we are scared shitless.
Anyway, the cost of being wrong this way is less catastrophic than the cost of being wrong by not spending. Since SOME of our early spending is going to work (even if ALL of it is not), and the consequences of zero amelioration are so catastrophic, it seems from a game point of view that we should not let the possibility of being wrong by spending, but on the wrong thing, stop us from immediately going after the low fruit, and continuing to analyse the spending for sandbags (dykes?) down the road.
To be unwilling to take a present GDP hit for some future benefit (via foregone costs of catastrophe) requires, it seems to me, a very sanguine view of the range of bad outcomes from unabated global warming, and an overly pessimistic view of the potential efficacy of social change, particularly the easier parts.
In order for that not to be a very bad way of being wrong (since your benefit limit is the foregone cost: 3, 5, hell, 10% of GDP), the sum total of catastrophes circa 2100 would have to be pretty modest. Unrealistically so.