I think you will agree that the results of the Anderson study (the main source used in the article) indicate a much higher rate of misattributed paternity than you suggest. In studies where the mother had high confidence in the paternity, the median non-paternity rate was about 2%; where the fathers were not confident (men seeking paternity tests), it was around 30%. It is important to note on whose confidence these figures rest, since the economy of knowledge is far from equal by gender. The lower figure comes from studies where the women themselves were confident enough about the paternity of their children to proceed with a genetic study after being counseled that it would uncover paternity issues; they were given the choice of gracefully exiting the study if that might be a problem. Most women have far better information than men about the likelihood of misattributed paternity: whom they have mated with, whether they were in their fertile period, whether they were using some form of birth control.
This would indicate that the general rate in the population is somewhere in between, but closer to the upper figure, given that information asymmetry.
The information asymmetry isn’t really what matters for extrapolating the study results to the full population. What’s needed is a good estimate of what percentage of births fall into that high-confidence group, and according to the Anderson paper there are no good studies on which to base that number. So there are large error bars, but no good reason to think that the real rate is above 10%, as the rumors and legends claim.
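To make that concrete, here is a back-of-envelope sketch in Python (mine, not from the paper). The weight w, the fraction of births that resemble the high-confidence group, is exactly the number Anderson says we lack, so the values below are purely hypothetical:

    # Overall rate as a weighted mix of the two study groups.
    lo, hi = 0.02, 0.30        # the two medians quoted above
    for w in (0.5, 0.7, 0.9):  # hypothetical high-confidence fractions
        print(f"w = {w:.0%}: overall rate = {w * lo + (1 - w) * hi:.1%}")
    # -> 16.0%, 10.4%, 4.8%

Solving the mixture for 10% shows the overall rate stays below that threshold whenever w is at least (0.30 - 0.10) / (0.30 - 0.02), i.e. roughly 71% of births.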
You are correct that women have an advantage over men when guessing at paternity. Only the woman can be confident about the number of sexual partners during the window in question, whether birth control was used, and so on.
Anderson agrees that the number is somewhere between the two rates; the point of the high-confidence and low-confidence categories was to give some kind of bound.
As for determining the general rate, that would require knowing how those confidence categories map onto the population as a whole (the weight w in the sketch above), and that is information we are not given.
Obviously, the least biased studies would be those that use randomly selected groups and make it socially awkward for participants to back out. I can see why medical people searching for a donor wouldn’t really want to get in the middle of family secrets and start fights…
I have heard the story about the English schoolteacher who gave a blood-typing assignment to his students and found a 30% rate of pedigree errors, but I have never seen anything about the original source for it. It sounds like a candidate for Cecil or Snopes, since I would think the odds of some random guy matching a blood-type test (A, B, O, +/-) would still be about 30% or better. So unless the town Don Juan was a rare A+ (homozygous AA, +/+ alleles), you would be hard pressed to get 30% based on blood type.
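As a sanity check on that intuition, here is a minimal Monte Carlo sketch (my own; the allele frequencies are rough European-style assumptions, not figures from any source) estimating how often a random, unrelated man is not excluded by ABO plus Rh(D) typing. It works at the phenotype level, since that is all a classroom blood test would see:

    import random

    random.seed(1)

    # Assumed allele frequencies (illustrative only).
    ABO_FREQ = {"A": 0.28, "B": 0.06, "O": 0.66}
    RH_FREQ = {"D": 0.60, "d": 0.40}   # D = Rh-positive allele

    ABO_GENOTYPES = [("A", "A"), ("A", "O"), ("B", "B"),
                     ("B", "O"), ("A", "B"), ("O", "O")]
    RH_GENOTYPES = [("D", "D"), ("D", "d"), ("d", "d")]

    def abo_phenotype(g):
        s = set(g)
        if s == {"A", "B"}:
            return "AB"
        return "A" if "A" in s else ("B" if "B" in s else "O")

    def rh_phenotype(g):
        return "+" if "D" in g else "-"

    def draw(freqs):
        # Two independent allele draws = Hardy-Weinberg genotype.
        return random.choices(list(freqs), weights=list(freqs.values()), k=2)

    def not_excluded(mom_ph, child_ph, man_ph, genotypes, pheno):
        # Not excluded if ANY genotypes consistent with the three observed
        # phenotypes let the man supply one of the child's alleles while
        # the mother supplies the other.
        for m in genotypes:
            if pheno(m) != mom_ph:
                continue
            for f in genotypes:
                if pheno(f) != man_ph:
                    continue
                for c in genotypes:
                    if pheno(c) != child_ph:
                        continue
                    if (c[0] in m and c[1] in f) or (c[1] in m and c[0] in f):
                        return True
        return False

    def locus_match(freqs, genotypes, pheno):
        mom, dad, man = draw(freqs), draw(freqs), draw(freqs)
        child = [random.choice(mom), random.choice(dad)]
        return not_excluded(pheno(mom), pheno(child), pheno(man),
                            genotypes, pheno)

    n = 100_000
    hits = sum(locus_match(ABO_FREQ, ABO_GENOTYPES, abo_phenotype)
               and locus_match(RH_FREQ, RH_GENOTYPES, rh_phenotype)
               for _ in range(n))
    print(f"random man NOT excluded by ABO + Rh typing: {hits / n:.1%}")

With these assumed frequencies a random man matches roughly 80% of the time, which if anything strengthens the skepticism: blood typing catches only a minority of non-paternity cases, so a genuine 30% detected error rate would imply an implausibly high true rate.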
OTOH, when they tested some descendants of Thomas Jefferson and of Sally Hemings, they found a Y-chromosome match in some lines of descent but not in others, which could also have alternative explanations after 200 years and who knows how many generations.
It would also be interesting to see whether the size of the town, village, or city has an effect on this rate.