Not hyperbole at all. The N.R.A. reply to Sandy Hook was two-fold:
- Shut down all electronic access out of sheer terror over culpability.
- Recommend loaded guns in every school lobby in America.
THAT is why the debate is a mess. Re-aim your finger…
Allow me to adjust. I was imprecise. The N.R.A. severely limited electronic/social media access but did not completely cut itself off.
My misstatement.
I totally know next to nothing about that there “social Medea” stuff, though I think it’s poorly named. But does the NRA have a Facebook page?
Is that wrong?
So, in other words, not a triumph. They did not act triumphant. To describe them as “triumphant” would be an error.
Oh not in the least. Their lunatic solution involves hundreds of thousands of guns in close proximity to small children with literally millions of live rounds in schools on a daily basis.
No. I was correcting a fact that was erroneous.
I think you are missing the point. Safety classes don’t stop murders. They stop accidental/negligent shootings.
So if I understand you correctly, Kable is correct, but the authors of the study determined that the margin of error was wide enough that there is a chance that the lower homicide rate in homes with long guns might mean nothing.
I don’t know how useful the information would be. As Kable points out, there are all sorts of geographic and demographic differences between people who own long guns and people who own handguns, or nothing at all.
And how is that “triumphant?”
You do not understand correctly.
I’m frankly shocked that someone who was in such a fucking tizzy about “correlation not equaling causation” (need I remind you about your repeated assertions that nobody in GD would even make such a mistake, and that the CDC itself was guilty of doing this) would think that a non-significant association should be interpreted as anything!
That’s like saying a lack of correlation equals causation. It’s a more fundamental error than the one you thought should rightly lead to the defunding of a major research and public health institution.
Jesus. He is the one that wants to interpret a non-significant univariate effect. You cannot push for the interpretation of a univariate effect and then demand that one understand that “all sorts of geographic and demographic differences” might explain a particular relationship.
You both demonstrate a significant ignorance about the fundamentals of science. It’s particularly striking from you, because you’ve been posing as one very keen to make sure that the highest caliber of interpretation of data should be the gold standard. Now you appear to be arguing that a parameter estimate can be divorced from its confidence interval, and that one can consider the independent relationship between two variables without accounting for other factors. (Except, only when it suits your argument.)
I certainly would never say anything like that. Because sure as shit somebody would say “What the fuck are you talking about, 'luc?” And then I’d have to post something about taking my slide rule into the repair shop so I won’t be available for comment until everybody forgets I said it, then I have to feed the cat…
This just makes me sad. Even idiots don’t deserve to have their kids die. Or to live with the guilt of having caused it.
“Deserves got nuthin to do with it.”
If you point a loaded shotgun at a kid’s head and pull the trigger, the kid’s death is a foreseeable circumstance. If this is the sort of random act of God accident that could happen to anyone, it seems to me that is proof that the average person is not competent to own a gun, since all it took for this guy to not kill his kid was to not point a loaded gun at the kid’s head and pull the trigger.
Then can you put it in a nutshell for a layman? What did they find about long guns in the home? And why is the confidence interval important in this case?
I thought Kable was trying to undermine the notion that long guns (even scary black rifles) are superdangerous.
Why, sakes alive, I even hear someone talk about those scary black guns I get the vapors so bad I could just faint dead away and simply ruin my crinoline!
I haven’t read the paper, but Kable was basically trying to brush off statistical analyses that controlled for other factors. It’s really a pretty fundamental thing he’s missing.
Incidentally, the same sort of reasoning can be pitched in a pro-gun way. The subset of gun owners who have taken a gun safety course recently (possibly a proxy for concealed carry) is more likely to engage in risky boneheaded behavior like drunk driving and binge drinking. But after controlling for basic demographic variables the effect goes away: CCW users are statistically the same as non-gun owners. The underlying effect is that white males are more likely to engage in such boneheaded behavior, and they are also likely to fall in the CCW category. Discussion: http://boards.straightdope.com/sdmb/showpost.php?p=14223178&postcount=148 Cite: Garen J. Wintemute, Injury Prevention.
Terminology:
The scientific concept is “Controlling for confounding factors”. The statistical method is “Multivariate analysis”.
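If a code sketch helps make the terminology concrete, here’s a minimal, purely hypothetical simulation (nothing to do with the actual Wintemute or Kellermann data) showing how an apparent univariate association can disappear once a demographic confounder is controlled for:

```python
# Minimal sketch with made-up data: a univariate association that is really
# driven by a demographic confounder, and what happens when you control for it.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical confounder: a demographic group that is both more likely to own
# a gun and more likely to show the outcome.
group = rng.binomial(1, 0.5, n)
owns_gun = rng.binomial(1, 0.2 + 0.3 * group)   # ownership depends on group
outcome = rng.binomial(1, 0.05 + 0.10 * group)  # outcome depends on group only

# Univariate (bivariate) model: outcome ~ ownership
uni = sm.Logit(outcome, sm.add_constant(owns_gun)).fit(disp=False)

# Multivariate model: outcome ~ ownership + group (controlling for the confounder)
multi = sm.Logit(outcome, sm.add_constant(np.column_stack([owns_gun, group]))).fit(disp=False)

print("Unadjusted OR for ownership: %.2f" % np.exp(uni.params[1]))
print("Adjusted OR for ownership:   %.2f" % np.exp(multi.params[1]))
# The unadjusted odds ratio comes out noticeably above 1.0; once the demographic
# variable is in the model, the ownership odds ratio drops back toward 1.0,
# because the confounder, not ownership itself, is what drives the outcome.
```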
Just to be crystal clear, Measure for Measure is entirely correct in what he says, but it is ultimately irrelevant to the issue here. This is because long guns were not significantly associated with homicide even at the univariate level.
To go back to his example, it would be like finding that ice cream consumption is NOT correlated with heart attacks. There’s no need to examine temperature to explain the relationship, since there’s no relationship to begin with.
This is frustrating, because I’ve spent so much effort here already explaining these things in layman’s terms. I’m tempted to ask you to specify what it was about my previous explanations that still eludes you.
However, I’ll give one more try to answer your questions.
Among 31 such bivariate relationships, Kellermann and colleagues provided the odds ratios for rifles and shotguns. These were 0.8 (0.5 - 1.1) for the former, and 0.7 (0.5 - 1.3) for the latter. The number in parentheses is the confidence interval. (More on that momentarily.)
What gun advocates like to do with this is say that since the estimate for the odds ratio is lower than 1.0, this means that rifles and shotguns are protective against homicides in the home. (I have to be careful in writing about this, because Kable has in this thread proven to be a disingenuous fuck, and will cut quotes from context and then portray them as saying something they do not.)
An odds ratio of 1.0 is referred to as the zero effect, because that means that there is no association present whatsoever. It’s akin in that sense to a correlation of .00.
You can see that the 95% confidence interval for both categories of long guns includes the zero effect. This is equivalent to saying that there is no significant relationship between rifles or shotguns and homicides. Alternatively, the authors could have reported the p-value to indicate significance, but the confidence interval conveys information that readers often prefer to see.
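For anyone curious about where numbers like 0.8 (0.5 - 1.1) come from, here’s a bare-bones sketch of the usual calculation from a 2x2 table. The counts below are invented purely for illustration; they are not Kellermann’s data:

```python
# Bare-bones sketch: odds ratio and Wald 95% CI from a 2x2 table.
# These counts are invented for illustration; they are NOT Kellermann's data.
import math

a, b = 40, 60   # cases:    exposed (long gun in home), unexposed
c, d = 50, 60   # controls: exposed, unexposed

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)        # standard error of ln(OR)
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

print(f"OR = {odds_ratio:.2f}, 95% CI ({lo:.2f} - {hi:.2f})")
print("CI includes the zero effect (1.0)?", lo <= 1.0 <= hi)
# If the interval spans 1.0, the association is not significant at the .05
# level, regardless of whether the point estimate sits below or above 1.0.
```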
No. He was trying to argue that long guns were protective, and more specifically, that an anti-gun researcher reported that long guns were protective.
He said:
He got this from gun strokers and repeated it here. Just like you got the idea that CCW people never break laws from other gun strokers and repeated it here, or that there was a CDC report that reported correlation as causation or that only CDC funding was targeted.
It’s hard to remember that you are the ones with the facts on your side when you keep sharing clearly false facts with us.
It makes some sense to me that households that only have rifles will have a somewhat lower rate of homicide, because there are geographic and demographic factors that come into play that explain both the lower homicide rate and the higher rate of rifle ownership.
So this is the part that confuses me. If the value for rifles fell between 0.01 and 1.01, would you still say that, because the confidence interval includes the zero effect, there is no significant relationship? Is there a special definition of significant that you guys use?
Then why did he follow up with statements about how the difference could be attributed to the demographics of likely rifle owners versus likely pistol owners?
You mean like your side claiming that the NRA has shut down ALL research into gun violence? Or that the Second Amendment was really put in there by slave owners who wanted to protect their property rights with slave patrols? Or that assault weapons are machine guns? Or that an assault weapons ban would make any discernible difference in gun violence? Or that “well regulated” means regulating firearms? Etc.
That’s fine as an opinion. THE DATA IN QUESTION DO NOT SUPPORT THIS HYPOTHESIS. Your opinion is nothing but pure supposition on your part that runs counter to the data in this study. Until you can find some evidence that tests such a hypothesis and confirms it, you have to acknowledge this.
First, I’m not sure why it is confusing. Thinking about it from just a commonsense standpoint, if you couldn’t tell whether doing something made any difference or not, wouldn’t you be likely to say, “I cannot tell a difference?”
There’s no special definition of “significant” - it is an empirical determination, not a judgment call. There is a special cut-point that we consider to be the acceptable maximal threshold to determine significance, and that is .05. As I said before, what this means is that we accept that random chance could be the reason for the magnitude of the relationship we are observing in fewer than 5 out of 100 random draws of a sample. Researchers may adopt a more stringent cut-off of .01 or .001 if they like, but if they instead chose .10 or .25, they would not be able to publish.
It should make sense why this is so.
Yes, especially given such a huge range. Imagine that I’m asking you to invest $100K in my business, and I tell you that I am 5% sure you won’t lose money. It doesn’t seem like you’re missing out if you take a pass, does it?
From a statistical perspective, when we assume the null hypothesis (that there is no relationship), and our stats say that we are 95% confident the relationship could be zero, we do not reject the null hypothesis.
By the way, a broader range of values, like what you suggest in your post, means a much less precise estimate or some problem with the stats. Since the possible range of OR runs from 0 to infinity, and 1.0 is the “zero effect”, your example is saying that you are confident the value falls somewhere within the entire potential range of values for an inverse relationship (a huge band of values). In other words, “No shit, Sherlock. Tell me something I don’t know.”
A more precise estimate would be that the value falls somewhere within a narrow band, and if that narrow band excludes the zero effect, doesn’t it make sense that it more convincingly demonstrates some meaningful relationship?
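If it helps, here’s a toy illustration (made-up counts again, not from any study) of why the width of the interval matters: the same point estimate only becomes “significant” when the sample is large enough to pin it down in a narrow band that excludes 1.0.

```python
# Toy example (invented counts): the same 0.80 odds ratio, estimated from a
# small sample and from a sample 100 times larger.
import math

def or_ci(a, b, c, d):
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)
    return or_, math.exp(math.log(or_) - 1.96 * se), math.exp(math.log(or_) + 1.96 * se)

small = or_ci(8, 12, 10, 12)            # small hypothetical study
large = or_ci(800, 1200, 1000, 1200)    # same proportions, 100x the sample

for label, (or_, lo, hi) in (("small n", small), ("large n", large)):
    print(f"{label}: OR = {or_:.2f}, 95% CI ({lo:.2f} - {hi:.2f})")
# The small sample gives a wide interval that easily spans 1.0; the large sample
# gives the same 0.80 estimate with a narrow interval that excludes 1.0, which
# is what it takes to call the relationship significant.
```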
You’d have to ask him. My belief is that he made a false statement about what the Kellermann paper said and has spent a bunch of energy since then trying to make it look like the data could be interpreted in the way he would like.
You seem to confuse facts with opinions, and to include statements that nobody has made among your facts. Not very impressive, Damuri Ajashi.
I’m still confused; it may be a misunderstanding of how the confidence interval works. If I tell you that I am 95% sure that the actual value could be anywhere from .01 to 1.01, doesn’t that mean it is much more likely that the actual value is less than 1? Or can we say nothing about the distribution?
These are all things your side has said. You may not have been in all those threads but my side had to fight a lot of ignorance early on in this debate.
Well, again, the example you’re using is a bit like saying “I’ve done a lot of research, and I can predict with 95% confidence that an average American’s annual salary falls somewhere between $0 and $1 million.” Really? Boy, that’s some valuable research there, pal!
But that’s just due to your example being so bad. If I understand the point behind it, you are asking something more like “If the 95% confidence interval just barely includes the zero effect at one end, can’t we kind of think that our estimate is probably different from zero?” And in that case, you kind of can, in the same way that if you were using p-values, and came up with a p-value of .07, you might be tempted to say “Yeah, I know it doesn’t meet the established criteria, but maybe it means something.”
It’s just a little more difficult to do that with the 95% CI, because eyeballing the CI only tells you whether or not it includes the zero effect; it doesn’t show you as precisely how far from .05 the corresponding p-value would be.
It’s also not relevant to the case of the Kellermann paper, since the 95% CI for rifles and shotguns also includes values indicating 10% or 30% increases in the risk of homicide when they are in the home. They’re not just barely non-significant, in other words.
Bottom line, if the CI does include the zero effect, you cannot say it is significant.
Furthermore, those effects that are just barely significant at the univariate level are almost always explained away entirely when other significant variables are also included in multivariate testing.
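One way to see that these aren’t borderline cases: you can back-calculate an approximate p-value from a published odds ratio and its 95% CI. This is a rough sketch that assumes the interval was computed in the usual Wald way on the log-odds scale, and the published figures are rounded, so treat the results as ballpark only.

```python
# Rough back-of-the-envelope: recover an approximate two-sided p-value from a
# reported odds ratio and 95% CI. Assumes a standard Wald interval on the
# log-odds scale; the published figures are rounded, so these are ballpark.
import math

def approx_p(or_, lo, hi):
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)   # SE of ln(OR) implied by the CI
    z = math.log(or_) / se
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

print(f"rifles   0.8 (0.5 - 1.1): p ~ {approx_p(0.8, 0.5, 1.1):.2f}")
print(f"shotguns 0.7 (0.5 - 1.3): p ~ {approx_p(0.7, 0.5, 1.3):.2f}")
# Both land comfortably above .05 (roughly .27 and .14 here), i.e. nowhere near
# the borderline .07 situation described above.
```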
Thanks for the clear explanation of statistical analysis.
Of course, you’ll have to put this in a youtube video (with puppets) before Kable will understand it.