Amazon's Face Recognition Matches 28 Members Of Congress To Criminal Mugshots

The results of an ACLU test (a test of the technology, not of Mark Twain’s statement that America had no distinctive criminal class except Congress):

That’s all?

This sounds vastly better than human beings. For example, when TV shows a photo of a criminal the police are looking for, reports of sightings pour in from all over the country.

Well, there’s also this. Not only did the program misidentify (identify?) Congresscritters as criminals, but:

They put in a chart showing that 39% of the false matches were Congresscritters of color, while people of color actually make up only 20% of Congress. In other words, the software appears to be biased against people of color, misidentifying them almost twice as often as you would expect by chance.

This is a cause for concern.

I’d say that’s accurate. Oh, wait, you mean having already been arrested (& convicted). :rolleyes:

Although I don’t support many of the ACLU’s causes, I have to tip my hat to their use of an interesting strategy.

This article seems to imply that they adjusted the confidence threshold down to get matches.
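For what it’s worth, the threshold really is just a single parameter. Here’s a rough sketch (not the ACLU’s actual code) of the kind of query involved, assuming it went through Rekognition’s SearchFacesByImage call against a collection of indexed mugshots; the bucket, key, and collection names below are made up. FaceMatchThreshold is the knob in question – 80 is the service default, and far stricter values are reportedly advised for law-enforcement use.

```python
# Rough sketch of a face search against a mugshot collection via boto3's
# Rekognition client. Bucket, key, and collection names are hypothetical.
import boto3

rekognition = boto3.client("rekognition")

def find_mugshot_matches(bucket, key, collection_id, threshold=80):
    """Search a face collection (e.g. indexed mugshots) for matches to one headshot."""
    response = rekognition.search_faces_by_image(
        CollectionId=collection_id,
        Image={"S3Object": {"Bucket": bucket, "Name": key}},
        FaceMatchThreshold=threshold,  # 80 is the default; raising it sharply cuts the matches returned
        MaxFaces=5,
    )
    return [(m["Face"]["ExternalImageId"], m["Similarity"])
            for m in response["FaceMatches"]]

# Same headshot, two thresholds (hypothetical usage):
# find_mugshot_matches("my-bucket", "member-photo.jpg", "mugshots", threshold=80)
# find_mugshot_matches("my-bucket", "member-photo.jpg", "mugshots", threshold=99)
```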

I do not think that these results imply that the software is biased against people of color. The software matched the politicians’ headshots against publicly available mugshots. I would assume that if POC are arrested in higher numbers (for racist or other reasons), then there would be a higher likelihood of getting matches.

For example, if they matched the congresscritters against 5000 mugshots drawn only from white arrestees, they would get a disproportionate number of matches to white congresscritters.

Now maybe the systematic racial disparities in the justice system could result in disproportionate numbers of POC mugshots, which then lead to POC getting more matches, but that does not imply that the software is biased.
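A toy back-of-envelope, if it helps – all numbers invented, and it assumes (as a simplification) that false matches happen only within one’s own group. Even a matcher with an identical per-comparison error rate for everyone produces a skewed share of false matches when the mugshot database itself is skewed:

```python
# Toy illustration (all numbers invented): a matcher with the SAME per-comparison
# false-match rate for every group, searched against a mugshot database whose
# composition is skewed. Simplifying assumption: false matches happen only
# within one's own group.
PER_PAIR_FALSE_MATCH = 5e-6                    # identical for both groups
MUGSHOTS = {"white": 8_000, "poc": 20_000}     # hypothetical skewed database
CONGRESS = {"white": 428, "poc": 107}          # members of color are roughly 20% of Congress

expected = {}
for group in CONGRESS:
    # chance that one member falsely matches at least one same-group mugshot
    p_member = 1 - (1 - PER_PAIR_FALSE_MATCH) ** MUGSHOTS[group]
    expected[group] = CONGRESS[group] * p_member

total = sum(expected.values())
for group, n in expected.items():
    print(f"{group}: ~{n:.1f} false matches ({100 * n / total:.0f}% of the total)")
```

With those made-up inputs, members of color end up with roughly double their ~20% share of the false matches, even though the matcher treats every single comparison identically.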

I interpreted silenus’s comment as jokingly remarking that it found only 28 criminals in Congress.

Hmm – so it misidentified people of color as criminals at twice the rate at which they’re represented in the group as a whole, and you’re excusing this?

I’ll grant that it might not be a case of bias in the software proper, but if the database has a much larger group of black criminals than white criminals, you would expect the same thing to happen with any group sampled – the odds of a black person being misidentified as a criminal would be disproportionately higher. To the person misidentified, it wouldn’t much matter whether it was due to the “mechanical” operation of the software or to the improper application of the system with what is essentially a biased pool of suspects – they’re still more likely to be fingered.

THAT points to a problem.
And mind you, it’s still not clear to me if it’s a software problem or a problem of skewed input data. The one thing that’s clear is that this system, as is, doesn’t seem to be street-legal yet.

I didn’t say it was right. I didn’t say that it was any consolation to the misidentified person.

I am just saying that just because a tool gave an outcome that appeared biased does not mean that the tool was biased. Inputs matter!

As someone who sometimes works with models, don’t confuse a bad output with a bad model. There is a difference.

Well, you know all those darkies look alike.

(note this is severe sarcasm with a touch of tasteless humor and should only be read as such…)

They’re raising concern about the technology, but really we should recognize that eyewitness identification is shit when humans do it; computers can’t be worse. As far as the science goes, the data going into the system is already biased. They need more data to draw strong conclusions.

Bishop is accused of corruption, and Gianforte is guilty of assault.

That would seem to depend on what higher numbers means, i.e. whether you mean higher absolute numbers or higher percentages.

Suppose, for example, that POC are 30% of the population but 50% of the mugshots. That would be a higher percentage of POC in the mugshots, but wouldn’t create a higher likelihood that an individual POC matches some mugshot.
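A rough sketch of that distinction (invented numbers, and the same simplifying assumption as above – that false matches happen mostly within one’s own group): an individual’s chance of a false match tracks the absolute number of same-group mugshots being searched, not that group’s share of the database.

```python
# Back-of-envelope sketch (invented numbers) of the percentage-vs-absolute-count
# point. Simplifying assumption: false matches happen mostly within one's own
# group, so an individual's risk scales with the NUMBER of same-group mugshots
# searched, not with that group's SHARE of the database.
PER_PAIR_FALSE_MATCH = 5e-6

def false_match_risk(same_group_mugshots):
    """Chance that one individual falsely matches at least one mugshot."""
    return 1 - (1 - PER_PAIR_FALSE_MATCH) ** same_group_mugshots

print(false_match_risk(10_000))  # POC are 50% of a 20,000-mugshot database
print(false_match_risk(10_000))  # POC are 20% of a 50,000-mugshot database -> same risk
print(false_match_risk(25_000))  # only a larger absolute count raises the risk
```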

This is expressed offensively, but the possibility that there’s more (or less) variation in appearance in Caucasians as compared to other ethnic groups shouldn’t be dismissed out of hand.

It’s generally observed that there is more variation among human phenotypes in African populations (i.e., Africans tend to look less alike than populations that migrated out of Africa).

Cite: Genetic Variation and Adaptation in Africa: Implications for Human Evolution and Disease - PMC

The classic racist claim “all those people look alike” is a side effect of biased observation. Different populations tend to vary in different features. To make up an example because I am too lazy to look up specifics, Europeans might vary more in eyebrow shape and Africans in ear shape. A European is (subconsciously) looking at eyebrows to tell people apart whereas an African is looking (subconsciously) more at ears, so each has a somewhat harder time distinguishing the other’s variation precisely because they are used to concentrating on the cues that vary most often (among their own population) when they are identifying people by face. It’s very efficient to concentrate on those cues, at least until you encounter someone from a different population with different cues.
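You can see the effect in a toy simulation – two made-up “feature” dimensions, not real face data: each population varies mostly along a different feature, and an observer who only attends to the feature that varies in their own population re-identifies their own group far better than the other.

```python
# Toy illustration (invented two-feature "faces", not real face data): two
# populations whose individuals vary mostly along DIFFERENT features, and an
# "observer" who only attends to the feature that varies in their own group.
import random

random.seed(1)

def make_population(sd_feature_1, sd_feature_2, n=50):
    """Each individual is a two-feature 'face'; which feature varies depends on the group."""
    return [(random.gauss(0, sd_feature_1), random.gauss(0, sd_feature_2))
            for _ in range(n)]

def reidentify(population, weights, noise=0.05, trials=2000):
    """How often a noisy second 'photo' of someone is matched back to the right
    individual, using a weighted nearest-neighbour comparison."""
    correct = 0
    for _ in range(trials):
        i = random.randrange(len(population))
        probe = tuple(f + random.gauss(0, noise) for f in population[i])

        def dist(face):
            return sum(w * (a - b) ** 2 for w, a, b in zip(weights, probe, face))

        best = min(range(len(population)), key=lambda j: dist(population[j]))
        correct += (best == i)
    return correct / trials

pop_a = make_population(sd_feature_1=1.0, sd_feature_2=0.1)  # varies mostly in feature 1
pop_b = make_population(sd_feature_1=0.1, sd_feature_2=1.0)  # varies mostly in feature 2

observer_from_a = (1.0, 0.0)  # attends only to feature 1
print("A recognising own group:", reidentify(pop_a, observer_from_a))
print("A recognising the other group:", reidentify(pop_b, observer_from_a))
```

Swap the observer’s weights to (0.0, 1.0) and the asymmetry reverses – which is the point: neither population actually varies less, the observer is just attending to the wrong cue.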

Also worth noting: as a broad generalization, scientists expect to find the most genetic diversity in the ancestral population. A classic example is apples – the greatest genetic variety of apples is found in the Tian Shan mountains of Central Asia…where apples are believed to have originated.

Thus, finding greater genetic/phenotypic diversity among Africans (as we have observed) is consistent with the hypothesis that humanity originated in Africa. Not proof, but suggestive.

I don’t think it makes the claim you assert. “Genetic and phenotypic” doesn’t necessarily mean appearance.

But even if it did, it’s not clear that this would apply to our instance. It’s possible that within the continent of Africa there’s more genetic variation, but the group of Africans which emigrated to America is predominantly from one part of the continent and therefore shows less. And even within Africa, it’s also possible that some of the variation is confined to relatively small groups, e.g. “Pygmies”, or Khoisan, which are of relevance to anthropologists but not so much to ordinary people trying to recognize individuals, very few of whom are from these groups.

Not to discount what you’re saying, which may well be a part of it. Personally, I think something else is an even bigger factor: when you’re less familiar with a type of face, you tend to focus less on minute details, because they’re overwhelmed by the bigger distinguishing features you’re unfamiliar with – unlike with faces you know well. (It’s for this reason that outsiders tend to see more familial resemblance than family members themselves do, or that people tend to think pictures of themselves are imperfect while others think they’re good likenesses.)

But all that is neither here nor there. The bottom line is that even if there are legitimate reasons for people to incorrectly see more variation in their own group, that doesn’t mean there isn’t also genuinely more variation in one group than in another. So when we’re talking about whether AI also sees it that way, we need to consider that possibility.

FWIW, a couple of articles from Cecil Adams on the subject.

https://www.straightdope.com/columns/read/92/how-come-white-people-dont-all-look-alike/

Obviously facial recognition software isn’t ready for roll out. It’s deeply flawed and unreliable.

I’m not sure the experts get it right 100% of the time.

I recall controversy over a newly discovered Billy the Kid photo. Some “experts” say it’s him and others say no.

It’s easy to exclude people from a match: the eye sockets are too far apart, or the other features on the skull aren’t spaced correctly.

Declaring a match with certainty is harder.

I’m dating myself, but the thread title immediately brought to mind the Beyond the Fringe Great Train Robbery sketch. Specifically (about 3:35):

Facebook’s face recognition consistently thinks my son is me. It tags pictures of him as me. I have to remove the tags all the time. So it’s easy for me to see that this software is not very robust.

Hopefully the courts take notice of this and don’t allow this nonsense to be accepted as evidence.

They allow eyewitness evidence, so evidentiary standards are obviously not based on reliability.