Absolutely. The least they could do would be to use the same scale range, even if they differed in how the score was calculated. Then a 700 on one scale would at least convey the same information as a 700 on another.
They seem to like the gradations given by the 550-point spread (300 to 850). Why not use zero to 1000?
I’m wondering if these numbers weren’t originally supposed to be inversely related to the default rates that qualified applicants would have.
For instance, under old FICO, flipping the scale, the odds of a 350 defaulting on a mortgage would be 65%, while the odds of an 850 defaulting over a 30-year note would be 15%.
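(For what it’s worth, here’s a minimal sketch of that hypothesized mapping, assuming a straight linear flip between the two endpoints; the anchor points are just the guesses in this thread, not published FICO figures.)

```python
def flipped_linear_rate(score, lo=(350, 0.65), hi=(850, 0.15)):
    """Hypothetical inverse-linear map from score to default rate, anchored
    at the (350, 65%) and (850, 15%) guesses above -- not published figures."""
    (s0, r0), (s1, r1) = lo, hi
    t = (score - s0) / (s1 - s0)   # 0 at a 350 score, 1 at an 850
    return r0 + t * (r1 - r0)      # rate falls linearly as score rises

print(f"{flipped_linear_rate(700):.0%}")  # 30% under this toy mapping
```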
Right.
And I think that when it went into service, 30-year notes were an uncommon product. I don’t think an 850 had a double-digit default risk on a 7- or 15-year note back when they set the scale up.
Real puzzling. Might be interesting to track down a retired FICO employee and ask WTH they were thinking. It doesn’t seem like it’d be a trade secret at a lot of firms…
That’s an interesting idea, but it doesn’t really seem to hold up. As you note, a “perfect” score of 850 having a 15% default rate on 7-, 15-, or 30-year notes seems ungodly high. I would assume low single digits for a score like that. I’m having trouble finding a good table of scores versus risk, but I’d guess a 15% default rate would correspond to scores in the mid-600s.
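For what it’s worth, scorecards are conventionally scaled so that a fixed number of points doubles the good-to-bad odds, which would make default risk fall off exponentially with score rather than linearly. Here’s a toy sketch of that convention; base_score, base_odds, and pdo are placeholder parameters picked only to land near the guesses in this thread, not FICO’s actual (proprietary) calibration:

```python
def toy_default_rate(score, base_score=650, base_odds=6.0, pdo=50):
    """Toy 'points to double the odds' scorecard. base_score, base_odds,
    and pdo are illustrative placeholders, not FICO's real calibration."""
    odds = base_odds * 2 ** ((score - base_score) / pdo)  # good:bad odds
    return 1.0 / (1.0 + odds)  # implied default probability

for s in (550, 650, 750, 850):
    print(s, f"{toy_default_rate(s):.1%}")
# 550 40.0%   650 14.3%   750 4.0%   850 1.0%
```

Under that kind of curve, a 15% rate in the mid-600s and low single digits at 850 can both hold at once.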