Evil Ghandi raises some issues:
- He questions the validity of a telephone survey as a tool to estimate prevalence in the community.
Well, this was published in the Journal of Allergy and Clinical Immunology, which is the leading journal in the field. Articles published in it are reviewed by prominent allergists for study design, internal consistency of results, and whether the stated conclusions are, in fact, well supported by the published results. This process is known as peer review, and it exists precisely to keep bad science out of leading journals.
Because I work in the field of medicine, I can tell you that coming up with a true population-based estimate of the prevalence of a given condition is more difficult than it would appear from the outside. Surveying people who come in for routine physician visits skews the sample towards infants & young kids, older individuals, and women. It also heavily skews the sample towards people with chronic health problems, or who believe they have numerous health problems. Counting ER visits for peanut reactions would grossly underestimate the scope of the problem, since many people treat reactions at home with EpiPens and never come to an ER. Hence the idea of a community-based survey, to get people of all ages represented.
The abstract also acknowledges the weakness of self-reported allergies. The authors attempt to correct for that by asking a subset of respondents detailed questions about the nature of their reaction to peanuts, and they found that approximately 10% of people reporting nut allergies do not meet diagnostic criteria for allergic-type reactions. Their final figure of 1.1% prevalence includes a 10% reduction based on this, and a further 10% reduction based on other published studies of telephone-survey techniques.
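Just to make the arithmetic of those corrections concrete, here is a toy calculation, a minimal sketch only. The starting rate below is a hypothetical number I picked so that the corrected figure lands near 1.1%; it is not taken from the abstract, and the authors' actual adjustment may be more involved.

```python
# Toy illustration of two successive ~10% downward corrections to a raw survey rate.
# The starting rate is hypothetical, NOT a number from the JACI article.
raw_rate = 0.0136                             # hypothetical raw self-reported prevalence (1.36%)
after_interview = raw_rate * 0.90             # ~10% of reporters failed diagnostic criteria
after_survey_adjust = after_interview * 0.90  # further ~10% correction for phone-survey over-reporting
print(f"corrected prevalence ~ {after_survey_adjust:.2%}")  # -> corrected prevalence ~ 1.10%
```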
In other responses I said that a cited 0.7% prevalence figure was also in the ballpark, and I wouldn't argue about it. The abstract also lists a 95% confidence interval (CI) of 1.0-1.4%. Roughly speaking, this means that if the same study were run over and over, each time calling 4374 different households, about 95% of the intervals calculated this way would contain the true prevalence; the data are consistent with a true figure anywhere from 1.0% to 1.4%.
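For anyone who wants to see where an interval like that comes from, here is a minimal sketch of a simple (Wald-style) 95% CI for a survey proportion. The sample size and count below are hypothetical round numbers chosen to land near 1.1%; they are not figures from the paper, and the authors' actual statistical method may differ.

```python
# Minimal sketch: Wald-style 95% confidence interval for a survey proportion.
# n and k are hypothetical, NOT taken from the JACI article.
import math

n = 12_000   # hypothetical number of individuals covered by the surveyed households
k = 132      # hypothetical number reporting peanut or tree nut allergy (132/12000 = 1.1%)
p = k / n

se = math.sqrt(p * (1 - p) / n)          # standard error of the sample proportion
lo, hi = p - 1.96 * se, p + 1.96 * se    # 95% CI: estimate +/- 1.96 standard errors

print(f"prevalence = {p:.1%}, 95% CI = {lo:.1%} to {hi:.1%}")
# -> prevalence = 1.1%, 95% CI = 0.9% to 1.3% with these made-up numbers
```

The width of that interval shrinks roughly with the square root of the sample size, which is why a survey covering thousands of people can pin a roughly 1% prevalence down to a few tenths of a percentage point.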
I pulled the actual article a few months ago when I had a similar discussion on a different message board. IIRC, the time from receipt of the article to publication was 2 or 3 months, which in medical literature is very fast. It was also a featured (headline) article in that particular issue of the journal. The reviewers and editors (who, again, are nationally prominent allergists themselves) thought that this information was a) as good as was reasonably obtainable, and b) important enough to publish quickly and prominently.
Bad science this is not.
- He accuses me of Selective Reporting.
Situation #1: EG reads a web site that gives two widely different figures for food allergy deaths. One is ascribed to the CDC (and fits his beliefs). The other is ascribed to an allergy advocacy group (which, it turns out, is citing a published scientific article), and does not fit his beliefs.
There is a true figure out there; one of the two published figures will be closer to it than the other.
EG posts the figure ascribed to the CDC without providing a link to his source, let alone quoting the whole article, so we cannot even be aware that another figure was cited, much less judge for ourselves which figure is more reasonable. He also misquotes the figure, which could be accidental.
Situation #2: I summarize several abstracts. One abstract shows that some people will react to as little as 100 mcg of peanut protein, while other allergic individuals do not react even to 50 mg (500 times that amount). However, the fact that some people don't react to larger quantities in no way negates the truth of the statement "Some people react to as little as 100 mcg of peanut protein." Here lies one difference.
The bigger difference lies in the fact that I DID quote the entire abstract, allowing everyone to judge for themselves the validity of my statements. MEDline does not allow linking to search results; thus I quote the abstracts in full. Because I realize the abstracts are long, dense, filled with jargon, and tedious to read, I highlight certain key statements and provide a summary at the end, so that semi-interested readers can get what I consider the main points. Highlighting selected passages, and pulling selected passages into a summary, is fair as long as all of the source is made available to all readers, either by quoting it or by providing a link.