It’s interesting to note that in a static model, the value of the credit enhancements (debt tranches, CDO-squared structures, etc.) absolutely pencils out. The problem is that relatively minor deviations in the underlying default rates, or in the correlation among those securities, can cause huge problems.
One example in particular sticks in my mind. If you have 100 securities in a CDO, doubling the default rate (from 5% to 10%) increases the probability that the 90th tranche will default from about 3% to about 55%. But if you repool the 90th tranche of 100 such CDOs into a single CDO-squared, an increase in the underlying default rate of just one percentage point (from 5% to 6%) takes the chance of default for the 90th tranche of that CDO-squared from roughly once every 10,000 years to roughly once every 6 years.
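For anyone who wants to play with this, here's a rough back-of-the-envelope sketch of the mechanism. I'm assuming "the 90th tranche" means a tranche that defaults once 10 or more of the 100 names default, and that defaults are independent (the classic simplifying assumption); the exact attachment points and assumptions behind the quoted figures aren't spelled out, so this won't reproduce them exactly, but it shows the same blow-up.

```python
from scipy.stats import binom

def tranche_default_prob(n=100, attach=10, p=0.05):
    """P(tranche defaults) = P(at least `attach` of n independent names default)."""
    return binom.sf(attach - 1, n, p)  # sf(k) = P(X > k) = P(X >= k + 1)

# Single CDO: tranche that survives until the 10th default out of 100 names
for p in (0.05, 0.10):
    print(f"CDO,   p={p:.0%}: tranche default prob = {tranche_default_prob(p=p):.2%}")

# CDO-squared: pool that same tranche from 100 CDOs, then re-tranche the pool the same way
for p in (0.05, 0.06):
    inner = tranche_default_prob(p=p)      # default prob of each inner tranche
    outer = tranche_default_prob(p=inner)  # default prob of the CDO-squared tranche
    print(f"CDO^2, p={p:.0%}: tranche default prob = {outer:.4%}")
```

Under these assumptions the single-CDO tranche goes from roughly 3% to roughly 55% when the default rate doubles, and the CDO-squared tranche goes from a tiny fraction of a percent to double digits when the default rate ticks up by a single point. And note that the independence assumption baked in here is exactly the kind of thing being discussed: any correlation among the underlying names moves these numbers dramatically.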
Which feeds into the point that hogarth made - that as you create more and more complex investments, the added complexity creates its own risk, in that there's more room for the risks to be misunderstood, and the valuations become more heavily assumption-driven.
[I had to plow through a bunch of mathematical investment valuation stuff for actuarial exams - things like Black-Scholes et al. - and I was struck by the fact that, however theoretically sound these models were, they were heavily dependent on assumptions that are extremely unlikely to be known with any degree of certainty. And yet, my understanding is that they are very widely used IRL.]
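To illustrate the kind of sensitivity I mean, here's a quick sketch using the standard Black-Scholes call formula. The numbers are made up purely for illustration; the point is that the volatility input is something nobody observes directly, and plausible estimates of it move the "correct" price a lot.

```python
from math import exp, log, sqrt
from scipy.stats import norm

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call (no dividends)."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm.cdf(d1) - K * exp(-r * T) * norm.cdf(d2)

# Same option, three different volatility assumptions
for sigma in (0.15, 0.20, 0.30):
    price = bs_call(S=100, K=100, T=1, r=0.03, sigma=sigma)
    print(f"sigma={sigma:.0%}: call price = {price:.2f}")
```

For this at-the-money one-year option, moving the assumed volatility from 15% to 30% - both defensible guesses depending on what history you look at - nearly doubles the model price.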
BTW, it should be added that it's possible that looser underwriting standards contributed to the problem in an indirect way: by making houses too "affordable" and bringing more people into the housing market, they may have pushed house prices up to unsustainable levels, and thus contributed to the bubble that caused the crisis when it burst.
By contrast, if underwriting standards in Canada were tighter all along, prices there may never have seen the same run-up to begin with, leaving less room to fall on the way down.