A Study from Northwestern Points to Error Rates in Non-Capital Prosecutions

And it offers some support for an argument that I’ve been making for years.

According to this study from Northwestern, due to be released soon, a comprehensive survey of the accuracy of the US justice system puts the non-capital segment of the system under a glass.

The numbers are not good. Overall, they’re claiming an error rate of one in six. According to the Yahoo News article I’ve linked, that breaks down into a ten percent chance of a guilty person being erroneously released, and a 37% chance of an innocent person being sent to jail.

Bench trial vs. jury trial isn’t that much better, if you’re innocent, either.

I’ve been arguing for years my belief that the one-in-seven error rate claimed for capital cases is likely to be an improvement over the odds given to the accused in a non-capital case. So, a part of me is pleased to have a study done that seems to support my position: that the error rate in capital cases is less than it is in the system overall.

I’m very saddened that the difference seems to be so small, however. I also believe that both a one-in-six error rate and a one-in-seven error rate are far too high. And the more than one-in-three rate for innocents is appalling.

The first question I’ve got is: does anyone have a handle on the methodology of this study? I’d really, really like to know how the study determined whether each case studied was in error, or not. Is it simply a changed verdict on appeal, or some other criterion? (Call this the OJ test: Would Prof Heinz have considered the OJ trial to be flawed or not, for this study?)

The more important question is what do we Dopers suggest might be done to improve these numbers?

Well, a quick look through Google News got me another article about the study, which actually linked the study itself - so I’m going to post that link here (warning: .pdf) and start reading it, myself.
ETA: If someone familiar with the language of statistics can offer a translation of the abstract, I’d appreciate it.

To be crystal clear, the study seems to say that if you’re an innocent person arrested and tried, you have a 25% chance (jury trial) or a 37% chance (bench trial) of being convicted.

This is a little different from my first thought on reading the OP, which was that the study claimed that 37% of people were innocent but nonetheless convicted.

For that part to be meaningful, we’d have to know how many innocent people actually faced this situation.

For example, if ten million criminal defendants were tried, and of them, 100 were innocent, then 37 would be sent to jail. 37 innocent people in jail is nothing to brag about, of course… but 37 out of ten million is a vanishingly small error rate.

On the other hand, if one thousand criminal defendants were tried, and of them, 100 were innocent, and 37 went to jail… 37 out of 1000 isn’t so great.
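To put those two hypotheticals side by side, here’s a toy calculation (the figures are the illustrative ones above, not the study’s):

```python
def wrongful_conviction_rate(total_defendants, innocent_defendants,
                             p_convict_innocent=0.37):
    """Wrongful convictions as a share of ALL defendants tried,
    assuming each innocent defendant has a 37% chance of conviction."""
    wrongful = innocent_defendants * p_convict_innocent
    return wrongful / total_defendants

# Ten million defendants, only 100 innocent: 37 wrongful convictions,
# a vanishingly small rate overall.
print(wrongful_conviction_rate(10_000_000, 100))  # 3.7e-06

# One thousand defendants, 100 innocent: the same 37 wrongful
# convictions are now nearly 4% of everyone tried.
print(wrongful_conviction_rate(1_000, 100))       # 0.037
```

Same 37 innocent people in jail either way; the base rate of innocence is what makes it look negligible or alarming.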

And is that innocence determined by the judge’s opinion?

What is your sense of the proportion of innocent people brought to trial, Bricker?

Are we talking factual innocence here, or “should have been acquitted because of procedural errors”?

I didn’t read the article in question more than superficially, but how did they determine that the people were actually innocent?

Regards,
Shodan

Can you cite some of these “procedural errors”?

The closer I look at the study, the more I’m bumping against my near-total ignorance of higher stats.

I don’t think I know enough about them to be able to even form intelligent questions about how the methodology is supposed to work, or why it’s legit. (I’m not saying I think it’s not legit - I’m saying that it sounds like PFM to me. Which is often simply an indicator of a lack of understanding by a lay person.)

Shodan and Richard Parker, I’m trying to get through the paper - and I’m far, far from being a stats person. What I understand it to be is a statistical analysis based on how often jury and judge opinions on cases disagree. Please note - this is only a very, very rough interpretation, and I may well be wrong about it. The paper does define several levels of ‘correct’ decisions, and I believe they’re not just going by overturn on appeal, nor by taking the judge’s opinion as “the gold standard” (a phrase used in the paper). But I have to confess to being nearly completely lost in their language.

The figures given aren’t quite enough to be sure, but I think that of the 290 in the sample, about 30 were innocent, i.e., about the 10% that troubles Bricker.

The methodology seems to be one based on replication and agreement. Where two (or more) parties judge the same case, you can estimate their base error rate from how often they agree. For example, if both the judge and the jury are 80% correct, then you would expect them to both be right 64% of the time and both wrong 4% of the time, so they would agree 68% of the time. There are a few fundamental assumptions here, namely that both parties have equal accuracy and that their errors are completely uncorrelated, but this represents the best-case scenario: if you observe a 68% agreement rate, then the best you can assume their accuracy to be is 80%. It might well be worse than that.
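That arithmetic is easy to sketch, under the same two assumptions (equal accuracy for judge and jury, independent errors):

```python
import math

def agreement_rate(accuracy):
    """P(two independent deciders agree), each correct with
    probability `accuracy`: both right, or both wrong."""
    return accuracy ** 2 + (1 - accuracy) ** 2

def best_case_accuracy(agreement):
    """Invert the formula: solve a = p^2 + (1-p)^2 for p >= 0.5."""
    return (1 + math.sqrt(2 * agreement - 1)) / 2

print(agreement_rate(0.80))      # ~0.68
print(best_case_accuracy(0.68))  # ~0.80
```

Note the inversion picks the optimistic root (p ≥ 0.5); a 68% agreement rate is equally consistent with two deciders who are only 20% accurate.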

The unasked question would be, Why do the police and prosecutors have such a high error rate? One would hope that cases only get prosecuted when they believe both that the accused is guilty and that they have a reasonable chance of conviction. Why do they believe so many innocent people are guilty, after what should have been a thorough investigation?

Giles, first off - both the police and prosecutors get very used to having inconclusive evidence for many crimes. And repeated lies from, often, everyone involved with an investigation.

This seems to have the inevitable effect of leaving them in the position of trusting their intuitions or feelings, often in the face of eyewitness accounts and other supposed evidence. And that, I think, is an insidious slope to start down: the more one gets used to discounting evidence that doesn’t agree with the conclusion one wishes to prove, the easier it becomes to discount evidence that contradicts that conclusion - no matter how concrete that evidence might be.

If you watch some of the true life crime shows, the DAs involved often make points of having to go back and keep checking their assumptions all along - because they know they can suffer from what they like to call “tunnel vision.”
And that explanation is the relatively innocent one.

Then there are the Nifongs, or cops, looking to close high-profile cases for the betterment of their careers - no matter who gets hurt in the process. And, for all the publicity and censure Nifong is suffering, I believe that there are far more who have found that it’s a tactic that works well.

You might find some of those answers, anecdotally, in this poster’s experience in being arrested for a crime he didn’t commit. I encourage you to read all his subsequent posts in that thread for the complete picture. Pretty scary stuff!

You need a cite defining procedural error?

Regards,
Shodan

With the proviso that this is my sense, nothing provable or definitive…

…pretty damn small. I do not believe I had a factually innocent client during my years as a PD, although I had plenty of clients that legally should not have been convicted for various reasons.

So my own, somewhat jaded, somewhat out-of-date experience is that the number of actually, factually innocent people accused of a crime and making it to trial is very near zero. Zero plus epsilon, let’s say.

Thanks for the answer. I have more “personal sense” questions if you don’t mind.

Did you generally try to ascertain the guilt or innocence of your clients?

Do you think that the low number of innocent people at the trial stage is a product of good police procedure or good use of prosecutorial discretion? (And to the extent that it is both, would you estimate what percentage make it to the prosecutorial discretion phase?)

You say you never had a factually innocent client as a PD; does that mean just those clients who made it to trial, or all clients (including plea bargains and those the prosecution decided not to indict)?

No. I was much more concerned with what would or would not come up at trial.

Both. I’m sure if my ambit had included misdemeanors for which no jail time was involved, there might well have been more instances of factual innocence – picture, for example, the cops giving an under-age drinking citation to all teens at a party, even the one that wasn’t touching the hooch – but since those offenses could not garner jail time, they were not eligible for PD representation and I never saw them. But when serious misdemeanors or felonies were charged, the cops usually had their ducks in a row…

And I’ll offer one other thought. Remember that as a PD I got cases in which the accused was unable to afford a lawyer. Perhaps if I had been in private practice I would have seen somewhat wealthier clients, who were factually innocent and outraged at being accused. The folks I saw were “in the life” so to speak, and I seldom represented even a first-time offender.

I can’t really offer a percentage of “prosecutorial discretion” vs. “police decision” - I can only say that if I had ever found something that convinced me I had a truly innocent guy, I believe I could have shared it with almost every prosecutor I ever worked against and gotten a nolle pross.

All clients, although I often wouldn’t get them until they were indicted, so it was rare to have a client that was facing a grand jury.

Interesting. Thanks.

If I understand this correctly, you’re saying that you could have gone to the prosecutor and said “Hey, I really think this guy is innocent”, and he would have dropped the charges. Is that correct?

The method was to interview judges and find out in what percentage of the cases the judge disagreed with the jury’s decision.

On the assumption that both can’t be right, they assumed that there must have been a mistake made.

Bricker’s hypothetical:

concerns whether or not the sample represents the whole population. If it does, then the sample’s error percentage represents the error rate of the whole population of those who have been tried, within some margin of error and at a specified confidence level.

For example, if the error rate in a sample of 100 is 0.16, then you can say with 95% confidence that the true error rate is between 0.09 and 0.25. What this means is that in making that statement you will be right 95% of the time, in that the true error rate will be in that region. The rest of the time the error rate will be outside that region and you will be wrong.
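For what it’s worth, an interval like that can be roughly reproduced with a standard binomial confidence interval. This sketch uses the simple normal (Wald) approximation, which comes out slightly narrower than the exact method the (0.09, 0.25) figures appear to come from:

```python
import math

def wald_ci(errors, n, z=1.96):
    """95% normal-approximation (Wald) confidence interval
    for a binomial proportion: p +/- z * sqrt(p(1-p)/n)."""
    p = errors / n
    half_width = z * math.sqrt(p * (1 - p) / n)
    return p - half_width, p + half_width

lo, hi = wald_ci(errors=16, n=100)
print(round(lo, 3), round(hi, 3))  # roughly 0.088 to 0.232
```

With only 100 cases the interval is wide; that width, not the point estimate, is the honest summary of a sample this small.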

If the sample is representative, then the claim that in a million trials there were only 37 errors can’t be right. The number of errors would be 160,000, or actually most likely somewhere between 90,000 and 250,000. The authors of the study did say that their sample was not adequate to represent the whole population of trials in the entire US. However, it does indicate that there is a problem that really ought to be investigated.

Oh come on, you’re not even trying anymore…

No, not quite. If that were true, I’d still be practicing criminal law and making a small fortune!

If I had ever found something that convinced me I had a truly innocent guy, I believe I could have shared it - whatever it was that convinced me - with almost every prosecutor I ever worked against and gotten a nolle pross. It wouldn’t be me saying, “Hey, trust me,” but rather me saying, “Look at this,” or “Listen to this.”