Firearm-Related Studies Vol. 2: Investigating the Link Between Gun Possession and Gun Assault

You keep writing about this as if the investigators, once they have the data in hand, have to bring in additional information to then do the analysis. They don’t.

Once the data is collected, it is wrong to say “they would then need to find out how much location explains the difference in their results.” The data would tell them how much of the variation in caseness the indoor/outdoor variable explained. They don’t have to extrapolate.
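
For instance, here is a minimal sketch, in Python with hypothetical data (not the study's), of how a fitted model reports straight from the data how much of the variation in caseness a location variable accounts for; it assumes statsmodels is available and uses an ordinary logistic fit purely for illustration:

[code]
# Minimal sketch with hypothetical data: quantify how much of the variation
# in caseness an indoor/outdoor covariate explains, using only the data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1000
outdoors = rng.binomial(1, 0.3, n)                  # 1 = subject was outdoors
logit_p = -2.0 + 1.5 * outdoors                     # hypothetical true effect
case = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))  # 1 = case, 0 = control

null_fit = sm.Logit(case, np.ones((n, 1))).fit(disp=0)            # intercept only
full_fit = sm.Logit(case, sm.add_constant(outdoors)).fit(disp=0)  # + location

# McFadden's pseudo-R^2: share of the null log-likelihood recovered by
# adding the location covariate. No outside information is needed.
pseudo_r2 = 1 - full_fit.llf / null_fit.llf
print(f"Pseudo-R^2 attributable to location: {pseudo_r2:.3f}")
[/code]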

You keep going on about the relative percentages, but that is irrelevant to the ability to conduct the analysis. It’s the absolute numbers in the various cells that are the issue. So, if 10% of the controls were outside, that’s 68 people. Sparse-cell concerns arise at counts of 5 or below. A concern would arise here if sparsity among the various combinations of covariates tested in a given model contributed to error in the estimates from that model. There’s no evidence of such problems here.
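
For example, here is a minimal sketch of that check with made-up counts (only the 68-controls-outdoors figure comes from the arithmetic above; the rest are illustrative): the question is simply whether any cell of the cross-tabulation drops to 5 or below.

[code]
# Hypothetical location-by-caseness cross-tabulation; counts are illustrative,
# except that ~10% of ~680 controls outdoors works out to about 68 people.
table = {
    ("case", "outdoors"): 560,
    ("case", "indoors"): 120,
    ("control", "outdoors"): 68,
    ("control", "indoors"): 612,
}

sparse = {cell: n for cell, n in table.items() if n <= 5}
if sparse:
    print("Sparse cells (n <= 5) that could destabilize estimates:", sparse)
else:
    print("No sparse cells; absolute cell sizes are not a problem here.")
[/code]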

Specifically, they tested how many people would have to lie about having a gun in order to influence the results, and found this to be 5%. It’s not common for studies to probe “what ifs” about data models with such specificity, but here the authors provide that information. Rather than treat this as good science, useful information, and objectivity on the part of the investigators, you want to use it to reject the results entirely.
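
To make that kind of "what if" concrete, here is a minimal sketch of a misreporting check using invented counts (not the study's data): reclassify some fraction of the controls who denied possession as actually having had a gun, and watch how far the crude odds ratio moves.

[code]
# Hypothetical misclassification sensitivity check; all counts are invented.
def crude_or(gun_cases, nogun_cases, gun_controls, nogun_controls):
    # Odds ratio from a 2x2 exposure-by-caseness table.
    return (gun_cases * nogun_controls) / (nogun_cases * gun_controls)

gun_cases, nogun_cases = 40, 640        # hypothetical case group
gun_controls, nogun_controls = 20, 660  # hypothetical control group

for lie_rate in (0.00, 0.05, 0.10):
    hidden = round(lie_rate * nogun_controls)   # controls assumed to have lied
    adjusted = crude_or(gun_cases, nogun_cases,
                        gun_controls + hidden, nogun_controls - hidden)
    print(f"If {lie_rate:.0%} of 'no gun' controls lied: OR = {adjusted:.2f}")
[/code]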

Seems like meeting the good-faith objectivity of the investigators in kind would be a more appropriate stance to take.

Oh, I’m willing to be persuaded. Unfortunately you haven’t done so because, you know, it’s math.

Another point to reiterate -

No single study, in any discipline on any subject, is without limitation. What we need to do is evaluate the results of a given study in the context of all the available research. In that way, any of the challenges to a specific study, any of the random errors that might lead to an erroneous conclusion, can be minimized.

Thus, the key in evaluating this study and its limitations is whether it is telling us something consistent with other studies. And the answer is that it is.

Further, this pointillistic approach - taking each single study and clutching and grasping for limitations in order to claim the entire study null and void - is not scientific. Nor does it suggest that one is open to all possibilities in engaging in debate on the subject.

Hentor, I don’t see the need to bring all this “in good faith” stuff into your discussion. I honestly think Bone just doesn’t realize that as a layman he’s missing a bunch of basic knowledge required to properly criticize a study like this.

Well, I quoted a non-layman’s dissection, and Hentor simply blew it off. And, just asking: are the two of you “non-laymen”? I mean, I have an advanced degree in Biology, with three published papers even (of course, I am badly out of touch, not having worked in the field for a long time). But I pointed out weaknesses in the paper, ones which even the author admitted to - and again, Hentor just blew that off.

Bone seems like a pretty smart, college-educated guy who can read a paper.

Those papers were published as hit pieces, with the idea of supporting gun control.

They have been roundly criticised, and not only by those who are gun advocates.

Bone, can you help me out? I literally see no cogent point of critique specific to the paper in question in the quoted material cited by DrDeth earlier in the thread. To me, it was meaningless, in that it made vague claims to having worked in an unspecified fashion in epidemiology decades ago, and then suggested that the easy way to check the model was against reality.

What aspects of that critique do you find to have any merit?

By the way, DrDeth, my h-index is 30. What is yours?

More than key. Those deficiencies render this “study” nothing more than a waste of time and money.

Your cite was a layman’s opinion by his own admission. Yeah, he works with numbers but he knows perfectly well and admits it’s outside his area. Could you spell out to me the “fairly simple way to check the model against reality” he refers to? Because that would certainly settle things.

ETA: and yes, I also think Bone is a smart and trustworthy guy. That doesn’t mean I am going to take his critique of statistical, medical, or physics research at face value.

I have no idea; again, that was decades ago. But again, you simply dismiss the article; you don’t raise any objections to any of his points.

1" T*hey mention models, but provide no descriptions of the particulars of the models, nor any parameters. (cf. the section on “Statistical Procedures” in the Methods section, where they describe a series of models and regression analyses, but without specificity.) As far as I can tell, even if someone else had a similar dataset to work with, they would not be able to fully replicate the procedure taken in this paper simply from the paper’s description of its methods. The journal site does not have a link for supplemental materials, so there does not appear to be any more extensive description of data, methods, or results than is present in the paper itself."
*
2. "One thing that stands out as a huge difference between the case group and the control group is location. The case group (the folks who got shot) were outdoors in 83% of non-fatal shootings and in 71% of fatal shootings. The control group, by contrast, were outdoors at the same time only 9% of the time. That translates into very high relative risk, about 50x the control group by simply going outside. If the authors wanted an attention-grabbing lead, the “being outdoors” risk factor is what they should have played up. “Don’t Go Outside” could have been the headline, validating Philadelphia agarophobics."

3. The raw, unadjusted numbers don’t paint gun possession as being generically risky; in fact, across all cases, it shows a slight association with less relative risk. So it is only through the opaque method of adjustment of confounding factors that the startling relative risk estimates for gun possession come about. That process has a lot to do with how the models mentioned are constructed, and that information is not available via the published paper.

If we assume that all “adjustments” are made to the control data, we can estimate just how much “adjustment” had to occur in order to arrive at the published relative risk numbers for the “gun possession” condition in the three different contexts.
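
For what it’s worth, the “about 50x” figure in point 2 can be reproduced directly from the quoted percentages as a crude odds ratio; a minimal sketch of that arithmetic:

[code]
# Crude (unadjusted) odds ratio implied by the quoted percentages:
# 83% of non-fatal shooting cases were outdoors vs. 9% of controls.
p_case_outdoors = 0.83
p_control_outdoors = 0.09

odds_case = p_case_outdoors / (1 - p_case_outdoors)           # 0.83 / 0.17 ≈ 4.9
odds_control = p_control_outdoors / (1 - p_control_outdoors)  # 0.09 / 0.91 ≈ 0.10

print(f"Crude OR for being outdoors: {odds_case / odds_control:.1f}")  # ≈ 49
[/code]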
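
And on point 3’s complaint about adjustment: the sketch below is a toy Mantel-Haenszel calculation over a single confounder (location), using invented counts that are not the study’s data, only to show how a crude odds ratio and a confounder-adjusted one can differ substantially, even in direction. It is not a reconstruction of the paper’s models.

[code]
# Toy Mantel-Haenszel adjustment over one hypothetical confounder (location).
# Each stratum holds (gun cases, no-gun cases, gun controls, no-gun controls);
# all counts are invented for illustration.
strata = {
    "outdoors": (30, 470, 2, 58),
    "indoors": (72, 108, 155, 465),
}

def odds_ratio(a, b, c, d):
    # (exposed cases * unexposed controls) / (unexposed cases * exposed controls)
    return (a * d) / (b * c)

# Crude OR: collapse the strata and ignore location entirely.
A, B, C, D = (sum(s[i] for s in strata.values()) for i in range(4))
print(f"Crude OR: {odds_ratio(A, B, C, D):.2f}")                   # < 1 here

# Mantel-Haenszel OR: combine stratum-specific tables, weighting by size.
num = sum(a * d / (a + b + c + d) for a, b, c, d in strata.values())
den = sum(b * c / (a + b + c + d) for a, b, c, d in strata.values())
print(f"Location-adjusted (Mantel-Haenszel) OR: {num / den:.2f}")  # ~2 here
[/code]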

Note that the author is a noted expert data scientist, with a BS, an MSCS, a PhD, and pages of citations on Google Scholar.

He may not be an expert on epidemiology, but he is an expert on data. And of course, the paper quoted here by the OP is not on epidemiology either. It’s a gun control piece, attempting to use the methodology of epidemiology for a social issue.

So why did your cited expert mention he had to review epidemiological methods to complete his review?

Here’s what he said: "I’ve read the full paper, and the logic or math is incompletely described by which they arrived at even their numerical results, much less the further conclusions that they take. Of course, I’ve had only a brief time where I was engaged in epidemiology research, and that was over twenty years ago, so I’ve done a bit of review, too. They used a case-control experimental design. Because most of their data is nominal, not numerical, they employ “conditional logistic regression”. They mention models, but provide no descriptions of the particulars of the models, nor any parameters. (cf. the section on “Statistical Procedures” in the Methods section, where they describe a series of models and regression analyses, but without specificity.) As far as I can tell, even if someone else had a similar dataset to work with, they would not be able to fully replicate the procedure taken in this paper simply from the paper’s description of its methods."

What’s the simple way to test against reality? ISTM, we don’t need studies if that is so simple.

I reject your reality and substitute my own.

We also have:

http://onlinelibrary.wiley.com/doi/10.1111/j.1745-9125.2004.tb00539.x/abstract
Jongyeon Tark and Gary Kleck, “Resisting Crime: The Effects of Victim Action on the Outcomes of Crimes”

“Results indicated that self-protection in general, both forceful and nonforceful, reduced the likelihood of property loss and injury, compared to nonresistance. A variety of mostly forceful tactics, including resistance with a gun, appeared to have the strongest effects in reducing the risk of injury, though some of the findings were unstable due to the small numbers of sample cases. The appearance, in past research, of resistance contributing to injury was found to be largely attributable to confusion concerning the sequence of SP actions and injury. In crimes where both occurred, injury followed SP in only 10 percent of the incidents. Combined with the fact that injuries following resistance are almost always relatively minor, victim resistance appears to be generally a wise course of action.”

And by the same criminologist:

"In fact, none of the evidence presented by the authors actually has any relevance to the issue of the effectiveness of defensive gun use, for the simple reason that at no point do they ever compare crime victims who used guns defensively with victims who did not. Instead, they made only the essentially irrelevant comparison between people who were shot in assaults with the rest of the population, noting whether gun possession was more common among the former than among the latter. Not surprisingly, after controlling for a handful of (badly chosen) control variables, they found that gun possession is more common among gunshot victims.

This pattern, however, says nothing about the effectiveness of defensive gun use, but rather is merely a reflection of the fact that the same factors that place people at greater risk of becoming assault victims also motivate many people to acquire, and in some cases carry away from home, guns for self-protection. In sum, this is what researchers refer to as a “spurious” association – a non-causal statistical pattern due to the influence of some third factor(s) on the purported cause (gun possession) and the effect (gunshot victimization). For example, being a drug dealer or member of a street gang puts one at much higher risk of being shot, but also makes it far more likely one will acquire a gun for protection.

Previous published research, however, has directly compared crime victims who used guns with victims who used other self-protective strategies (including doing nothing to resist), and reached precisely the opposite conclusions from those at which Branas et al. arrived (Kleck 1988; Kleck and DeLone 1993; Southwick 2000; Tark and Kleck 2004). Significantly, Branas et al. ignore all but one of these studies, …"

I guess you didn’t read his article: "Methodologically, comparing gun assaults without shootings to those resulting in shootings could have addressed the issue of whether gun possession changed the risk of being shot in an assault, but no such comparison was attempted."

So were the authors of the NEJM paper. None were criminologists.

See, instead of actually addressing the points, you just hand-wave and ridicule. In other words, you got nuthin’.

Somebody help DrDeth with humor and popular culture references.

But I’ve spent too much time (again) in this thread explaining specific issues and answering questions with detail to have someone tell me I’ve got nothing. See ya.

Bone, is this the study from Philly that didn’t bother to exclude felons from its gun possession or victims of gun violence pools, then went and made sweeping associations between gun possession and being a victim of gun violence? If so, it was hilarious the last time it was cited here.

Because non-felons who carry a gun are exactly the same as felons who carry, and totally have the same risks for being a victim of violence.

Criminology has nothing to do with anything.