Yes, but that’s not what the quote I was responding to said.
Unicode support doesn’t come for free: it has to be implemented across a wide range of products, and it has to cope with third-party systems that use non-standard encodings, don’t support both byte orderings or all of the UTF encoding forms, and emit a variety of line endings (both standard and not). And that’s not even counting the many systems that mishandle surrogates and think that’s “OK,” or the “Unicode” fonts that cover only odd subsets but are in standard use anyway, so characters have to be substituted in from other fonts. Again, across hundreds of products.
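Two of those headaches, byte ordering and lone surrogates, are easy to demonstrate. This is just an illustrative Python sketch, not anything from the products I'm describing:

```python
# Byte order: the same two bytes decode to entirely different characters
# depending on which UTF-16 endianness the other system assumed.
data = "é".encode("utf-16-le")            # b'\xe9\x00'
assert data.decode("utf-16-le") == "é"
assert data.decode("utf-16-be") == "\ue900"   # wrong endianness -> wrong character

# Lone surrogates: some systems emit them and consider it fine,
# but a strict UTF-8 decoder refuses them outright.
lone = b"\xed\xa0\x80"                    # the bytes of an unpaired U+D800
try:
    lone.decode("utf-8")                  # strict mode raises
except UnicodeDecodeError as e:
    print("strict decoder refuses:", e.reason)

# Python's 'surrogatepass' error handler lets them through, which is
# handy for round-tripping other systems' data and dangerous elsewhere.
s = lone.decode("utf-8", "surrogatepass")
print(len(s), hex(ord(s)))                # a single lone surrogate code point
```

Every one of those decisions (refuse, replace, pass through) is something someone had to design, implement, and test, in every product that touches text.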
I’ll admit I have no cite for it, but I’d be stunned if my billion-dollar estimate wasn’t low. This is the foundation on which all localization is built; character-encoding issues are at least 2% of my work, and I’m not in a particularly text-oriented part of the company. We’ve got libraries to standardize this stuff, of course, but no company is an island (especially companies that make web browsers), and we have to work and play well with others, including the vast majority of software developers who don’t have the resources to even understand all the nuances of Unicode, much less the time and dollars to implement them.
The Unicode base specification alone is over six hundred pages long, and that’s not counting the nearly four thousand pages of amendments, proposed changes, exceptions, and committee recommendations that have been adopted by various organizations.