Although interesting, the links only partly answer the question–in particular, the experimental results are a reasonable counterargument.
To be clear, though, “range expansion” is neither dishonest nor strategic in the normal sense. It doesn’t require anyone to lie about their preferences.
The range is arbitrary to start with–what would a “true” 99-point candidate even look like? An imaginative person might realize that a truly ideal candidate is easily 10 or 100 times better than what we normally think of as a solid candidate. That doesn’t mean we should score every real candidate near the bottom just because some theoretical candidate could be much better.
So it’s just common sense that voters should, at the very least, score the worst candidate at 0 and the best at 99. They can–depending on their preference for honesty vs. strategy–alter their positioning of the middle choices, but the endpoints should be at the extremes.
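To make “range expansion” concrete, here’s a rough Python sketch of what I mean (the function name and the 0–99 scale are just illustrative): rescale your honest utilities so the worst candidate lands at 0 and the best at 99, with the middle candidates keeping their relative spacing.

```python
def expand_range(utilities, lo=0, hi=99):
    """Rescale a voter's honest utilities so the worst candidate scores `lo`
    and the best scores `hi`; middle candidates keep their relative spacing."""
    u_min = min(utilities.values())
    u_max = max(utilities.values())
    if u_max == u_min:                       # indifferent voter: score everyone alike
        return {c: hi for c in utilities}
    scale = (hi - lo) / (u_max - u_min)
    return {c: round(lo + (u - u_min) * scale) for c, u in utilities.items()}

# A voter whose honest utilities are A=2, B=7, C=6 on some private scale:
print(expand_range({"A": 2, "B": 7, "C": 6}))   # {'A': 0, 'B': 99, 'C': 79}
```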
And I think this leads to bad outcomes. Consider 10 people voting on candidates A and B. Four prefer A, and vote A=99 and B=0. The remaining six vote A=0 and B=99. B wins, both under this scheme and under a normal first-past-the-post scheme.
Now C comes into play. The A voters hate him and give him a 0, just as they do B. But the B voters are split: half think C is a naive idealist, and the other half think B is just maintaining the status quo. We end up with 3 people voting B=99 and C=10, and another 3 voting C=99 and B=10 (A still gets 0 from all six). These are honest votes: both groups strongly favor their preferred candidate while also indicating their preferred order among the less-favored candidates.
A then wins with 396 points vs. 327 for each of the others. This is exactly the bad outcome we’d like a new voting scheme to avoid. I don’t think my setup is too artificial, either.
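If you want to check the arithmetic, here’s a quick tally of those ballots under range scoring (the bloc sizes and scores are exactly the ones above):

```python
from collections import Counter

# (number of voters, {candidate: score}) for each bloc in the example
ballots = [
    (4, {"A": 99, "B": 0,  "C": 0}),    # A supporters
    (3, {"A": 0,  "B": 99, "C": 10}),   # B-first voters
    (3, {"A": 0,  "B": 10, "C": 99}),   # C-first voters
]

totals = Counter()
for count, scores in ballots:
    for candidate, score in scores.items():
        totals[candidate] += count * score

print(totals)                          # Counter({'A': 396, 'B': 327, 'C': 327})
print(max(totals, key=totals.get))     # A wins despite 6 of 10 voters scoring A at 0
```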
IRV at least would give the election to B or C. Perhaps counter-intuitively, it’s the A voters that will likely decide which of B or C gets elected, but maybe that’s not so bad.
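Here’s a rough IRV sketch on the same ballots, treating each score ballot as a ranking (the A voters rank only A, since they scored B and C equally at 0). Real IRV rules vary on tie-breaking and exhausted ballots; here the tie for last place between B and C is broken arbitrarily, and whichever of them survives beats A 6 to 4:

```python
def irv_winner(ranked_ballots):
    """Instant-runoff: repeatedly eliminate the candidate with the fewest
    first-choice votes until someone holds a majority of the counted ballots.
    `ranked_ballots` is a list of (count, [candidates in preference order]).
    Last-place ties are broken arbitrarily, which matters here since B and C
    both start with 3 first-choice votes."""
    remaining = {c for _, ranking in ranked_ballots for c in ranking}
    while True:
        tally = {c: 0 for c in remaining}
        for count, ranking in ranked_ballots:
            for choice in ranking:
                if choice in remaining:          # highest surviving preference
                    tally[choice] += count
                    break
        total = sum(tally.values())
        leader = max(tally, key=tally.get)
        if tally[leader] * 2 > total or len(remaining) == 1:
            return leader, tally
        remaining.remove(min(tally, key=tally.get))   # drop the last-place candidate

ballots = [
    (4, ["A"]),          # A voters: no preference between B and C
    (3, ["B", "C"]),
    (3, ["C", "B"]),
]
print(irv_winner(ballots))   # whichever of B/C survives the tie-break wins 6-4 over A
```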
As I mentioned, the experimental results are somewhat convincing, but not completely so. I think there must be a learning process involved, where people learn to maximize the utility of their vote while still being honest. That necessarily means range expansion at the endpoints, but I wouldn’t expect everyone to figure this out right away.
With enough truly strategic voters–that is, ones who realize that giving their less-preferred candidate a “token” score instead of something near the maximal 99 is what hands the election to A–one can avoid the bad outcome (a quick check of the numbers is below). But again, isn’t that what we’re trying to avoid? We want a system where honesty is rewarded.
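To make that concrete: if the B-first and C-first blocs each raise their second choice from a token 10 to, say, 90, the totals become A=396, B=567, C=567, and A no longer wins.

```python
# Same tally as before, but the B-first and C-first blocs bump their second
# choice from a token 10 up to 90 (strategic, though still order-honest)
strategic_ballots = [
    (4, {"A": 99, "B": 0,  "C": 0}),
    (3, {"A": 0,  "B": 99, "C": 90}),
    (3, {"A": 0,  "B": 90, "C": 99}),
]

totals = {}
for count, scores in strategic_ballots:
    for candidate, score in scores.items():
        totals[candidate] = totals.get(candidate, 0) + count * score

print(totals)   # {'A': 396, 'B': 567, 'C': 567} -- A no longer wins
```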