Predictive policing. Good, bad, or other?

LAPD officials defend predictive policing as activists call for its end

Simply put, the computer provides a list of areas predicted to have higher rates of crime. It’s the same way an officer working a beat knows that certain streets will have more drug dealers or streetwalkers, just with more data points and a wider variety of crime types expected.

The counterargument is that it disproportionately targets non-whites, i.e., it’s racist.

The counter-counter argument is that the high crime areas are non-white so the crimes are committed by minorities.

etc. etc.

Another point is that any kind of predictive policing is very susceptible to confirmation bias, or maybe a self-fulfilling prophecy: you expect there to be more crime, so you look harder for crimes being committed, so you find more crimes being committed.
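
To make that loop concrete, here’s a toy simulation (entirely made-up numbers, not anything a real department runs): two districts with identical true crime rates, where patrols go wherever crimes have been discovered before, and crimes are only discovered where patrols go.

```python
import random

# Toy model of the patrol feedback loop (made-up numbers, not real data).
# Two districts with IDENTICAL true crime rates. Each day one district is
# patrolled, chosen in proportion to crimes discovered there so far, and a
# crime can only be discovered where an officer is actually looking.

TRUE_RATE = 0.5        # same chance of a discoverable crime in each district
discovered = [1, 1]    # seed each district with one prior report

random.seed(42)
for _ in range(10_000):
    share = discovered[0] / sum(discovered)
    district = 0 if random.random() < share else 1   # patrol follows the data
    if random.random() < TRUE_RATE:
        discovered[district] += 1

print(discovered)  # typically drifts well away from a 50/50 split
```

The point isn’t that real systems are this crude, just that “patrol where you found crime” plus “find crime where you patrol” can manufacture a hot spot out of statistical noise.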

Thoughts? I’ll be back later with more input.

It’s a computer. How can a computer be racist?

Hmm, good question. Now, after some looking, I found this argument, which I find somewhat compelling:

Another point is that any kind of predictive policing is very susceptible to confirmation bias, or maybe a self-fulfilling prophecy: you expect there to be more crime, so you look harder for crimes being committed, so you find more crimes being committed.

Source

What if the computer were taken out…

And what if the question was turned around a bit…

Should police scrupulously avoid patrolling known high-crime areas?

The problem is you’re giving the innocents in those areas much less of a chance to make it out if you do this.

You can’t make much of an argument for upward mobility if you’re intentionally neglecting areas that need police presence the most.

This is simply applying math to best address problems. Are cops supposed to treat all districts as if they are the same?

To use the BLM analogy, if one house is on fire and the others aren’t, why would you want the fire department to douse them all down with water equally?

It is just data. Like any data, it has to be interpreted by people who 1) know what they are looking at and 2) know what to do with it. Police departments (and the unions that ultimately control them) have not convinced me that they are ready for 1920, much less 2020.

This is, IMHO, the key thing. This is a potentially important tool. Anyone who trusts the cops to use it well and fairly is really, really not paying attention.

But by this logic, the police forces should just be disbanded and sent home.

As long as they are a presence, they may be a deterrent and that’s not a bad thing. If they start arresting people because they looked like they were about to commit a crime, then that is a bad thing.

Absolutely true.

Because a computer just runs the program that the programmer put into it.

If the programmer is completely neutral, then the program will be, too.

But if the programmer has biases (conscious or unconscious), then the program may have those same biases.

For example, face recognition software being used in airports to help check people’s records has increasingly been shown to be much better at identifying whites and males - because the largely white, male software developers “trained” the computer algorithms with pictures of white males.

That means that white male travellers tend to sail through places where those computer algorithms are used, no trouble.

And white women get pulled aside a bit more than white men. And people with darker skin of both genders get pulled aside a lot more - not because of their personal circumstances, but because the algorithm can’t identify them, so a person has to do it instead. And that causes delays to those travellers, concerns about racial profiling, and so on.
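
Here’s a tiny synthetic illustration of that mechanism (made-up feature vectors, not any real airport system): imagine the feature extractor was tuned on group A, so it spreads group A faces far apart but squeezes group B faces into a small region. A nearest-template matcher then confuses group B people with each other.

```python
import numpy as np

# Hypothetical sketch: each "face" becomes a 50-dim feature vector, and a
# probe photo is matched to the nearest enrolled template. Group B's vectors
# are compressed (the extractor saw few such faces in training), so distinct
# group B people end up looking alike to the matcher.

rng = np.random.default_rng(1)
DIM, N_PEOPLE, PHOTO_NOISE = 50, 300, 1.0

def identification_accuracy(spread):
    # Enroll N_PEOPLE identities, then try to re-identify a noisy probe
    # photo of each one with a 1-nearest-neighbour match.
    templates = rng.normal(0.0, spread, size=(N_PEOPLE, DIM))
    probes = templates + rng.normal(0.0, PHOTO_NOISE, size=templates.shape)
    dists = np.linalg.norm(probes[:, None, :] - templates[None, :, :], axis=2)
    return (dists.argmin(axis=1) == np.arange(N_PEOPLE)).mean()

print("group A (well-separated features):", identification_accuracy(2.0))
print("group B (compressed features):    ", identification_accuracy(0.3))
```

With well-separated templates the matcher is nearly perfect; with compressed ones it misidentifies a large fraction, even though the “camera noise” is identical for both groups.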

If it were entirely a self-fulfilling prophecy, there would be no way to tell which were the high-crime areas, since anywhere the police were concentrated, they would find crimes. So it can’t be entirely a self-fulfilling prophecy.

The predictive software does not rely on arrests to determine where crimes are being committed more frequently, but where crimes are being reported, whether cleared by arrest or not. So it is not the perception of the police that is being fulfilled; it is the perception of the public.

Regards,
Shodan

I would be interested in seeing several test scenarios run with the data.

  1. Run the analysis but don’t publish the results until after the period has expired. See if the areas predicted to have a higher incidence of property crime actually did (a rough sketch of this kind of backtest follows the list below).

  2. Run the analysis, but then publish altered results where lower-risk areas are identified as higher risk. See if arrests there come in significantly higher than the actual prediction would warrant.
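
A rough sketch of scenario 1 (the file name, columns, and the rank-by-past-volume scoring are my own stand-ins, not the vendor’s actual method):

```python
import pandas as pd

# Hypothetical input: one row per reported crime, with "area" and "date".
reports = pd.read_csv("crime_reports.csv")
reports["date"] = pd.to_datetime(reports["date"])

history = reports[reports["date"] < "2020-01-01"]    # data the model sees
holdout = reports[reports["date"] >= "2020-01-01"]   # the sealed test period

# Naive stand-in prediction: rank areas by past report volume.
predicted_top = set(history["area"].value_counts().head(10).index)

# After the period expires, check how the predicted hot spots actually fared.
actual_top = set(holdout["area"].value_counts().head(10).index)
print(f"{len(predicted_top & actual_top)} of 10 predicted hot spots "
      f"were in the actual top 10")
```

Scenario 2 can’t be run offline like this, since it requires actually handing officers the altered maps and watching what happens.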

And that’s precisely where the racism problem can come in. It’s not raw data of actual crimes, but what people report, i.e., what is actually noticed. And we have already been seeing lately how some white people seem to misreport black crime.

In other words, it can basically only strengthen existing biases, unless the model provides a way to learn by having surveillance in unexpected places, and a way to make sure that all crimes, even those not deemed prosecution-worthy, are entered.

Misreporting a white crime as a black crime isn’t going to change the pattern. The algorithm is based on total crimes reported, so saying “a black guy stole my car” when it was really a white guy still results in a report of a crime and has the same level of effect in causing the area to be labeled as higher crime and more patrols.

Cite.

Regards,
Shodan

Computers can be racist, even when the programmer was not.

The way that most problems like this are solved is by looking at historical data. The historical data is based on what cops have done and, if those cops are racist, that can skew the computer’s handling. If the cops have always patrolled region X five times more thoroughly than region Y and only found twice as many crimes in region X as in region Y, that tells all of us that the rate of crimes being discovered may be influenced by patrol routes. But the computer is probably not going to get the data about patrol routes; it will just have a list of GPS coordinates of reported crimes, and it will assume that this data was produced through wholesome means.
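
To put numbers on that example (the figures are hypothetical, just matching the five-times/twice ratios above):

```python
# Hypothetical figures: region X gets five times region Y's patrol coverage
# but yields only twice the discovered crimes.
patrol_hours = {"X": 500, "Y": 100}
crimes_found = {"X": 200, "Y": 100}

for region in patrol_hours:
    rate = crimes_found[region] / patrol_hours[region]
    print(f"region {region}: {rate:.2f} crimes discovered per patrol hour")

# region X: 0.40, region Y: 1.00; normalized for scrutiny, Y looks worse,
# yet a model fed only the raw counts would send even more patrols to X.
```

One line of normalization changes the conclusion, but only if the patrol data is collected and handed to the model in the first place.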

There’s a similar, but slightly different, case with sniffer dogs. They suffer from more false positives with people of color than with white people, because the dog will have been trained (unconsciously) by its handler to react not just to the smell of drugs, but also to people of color, the handler’s body language, etc. If the handler is expecting the dog to tag someone, then that person will get tagged. Obviously the dog’s not racist; it’s just following the training regimen it was given.

That said, a programmer or mathematician can also make sure that they get data that won’t be influenced by the cops. Called-in reports of crimes - as opposed to those discovered by a cop on patrol - should be relatively trustworthy. Homicides are trustworthy, since it’s not like the cops are producing dead bodies or covering up others. You can ask for information about patrol history and correct for differences in how neighborhoods are being treated. And it’s possible that the people who made the application did do their homework and tried to protect against bias in their input data. Assuming that they didn’t isn’t reasonable.

It’s also worth noting that reality is racist. If you want to buy drugs, you might well drive down to the black neighborhood to scout for a seller, because that’s where you would expect to find it. Subsequently, there’s a market pressure for drug sales in black neighborhoods. There could be a history of racism in the country that prevented black people from being able to get gainful, honest work in respectable professions. And so that turned a larger percentage of that population to crime and, from there, it’s part of that community’s culture. They have the business connections, they have the training from their parents, they have institutional knowledge. It propagates forward through the generations, even past where there is any need to continue it. (Not to imply that most or even many members of those neighborhoods are criminal. Most people are law-abiding even in the worst communities, so far as I’m aware.)

And, yes, you can have cases where people are harassed by the cops and distrusted by society because of the color of their skin or the neighborhood they live in, and they figure they may as well misbehave, do drugs, commit crimes, etc., because you may as well get the benefits of a criminal life if everyone’s going to treat you like you’re living one anyway.

Assuming that crime won’t happen in mostly black or Latino neighborhoods is unreasonable, and sending the cops out to suburbia instead, because you want to be racially sensitive, just gets people hurt and killed.

At the same time, harassing people for things they didn’t do may just perpetuate the problem further.

But then again, if you have enough people watching the right spots at the right times, then that community might find that there’s simply no opportunity to commit misdeeds, and they’ll have to start living clean lives. Or maybe you catch enough of the bad eggs that you can pull them out of the population fast enough that they don’t perpetuate the criminal culture.

Fundamentally, there’s probably no good and genial answer for this on the policing side of things.

The only research I’m aware of that has helped to solve the issue is on distributing low-income households out into middle- and high-income neighborhoods, so that the kids go to school with suburbanites.

But even there, you can say that you’re destroying the culture of that community and just forcing everyone to be “white”.

There are no easy answers. No matter what you do, there’s an argument that you’re being immoral and oppressive. At least with numbers, like the total number of people shot and killed in town, you can be certain that you’re improving the world in at least one way.

I don’t think the computer is predicting crime in the future where there hasn’t been any before. It’s merely noting where crime reported by the citizenry (and some crime detected by the police themselves) has occurred in the past. It would be a poor allocation of resources to require the police to spend as much time patrolling areas where there is little or no crime (historically) as where reported crime is high. If the program determines that most robberies take place on weekend nights in a certain area, that’s when and where you put the cops. To do otherwise would be borderline nonfeasance. Once there, they still need to respect the rights of the people they encounter.

Sage Rat, I agree with your post, but I think that your last sentence is just a tad … ambiguous. :wink:

To be clear, shooting and killing people is good. But only in town. You are only to murder-kill animals and space invaders if you are a country dweller.

Just in case that wasn’t clear.

:wink: