Hello, yes, I’m an AI researcher who works specifically on systems that can explain why they made the decisions they did. I also have some background in anomaly detection (read: predictive classifiers failing or producing false positives and negatives).
This is a really awful idea. All it really does is magnify social biases and encourage people to “trust the unbiased computer” instead of dealing with the large systemic issues in the criminal justice system.
The biggest issue comes down to training data. You could:
1. Use real crime statistics, in which case the model inherits all the biases present in the criminal justice system. For instance, it could predict person A has a high chance of recidivism because they’re from a majority-minority neighborhood (and that’s assuming it doesn’t straight up use inadmissible factors like race, which um… these tools have in the past). This also ignores that “recidivism” only registers if someone gets re-arrested, which is naturally going to happen more often for POC and other groups that get unduly arrested more in the first place (there’s a toy sketch of this mechanism after the list).
2. Fabricate data based on crime statistics from some model you’ve developed. Congrats, you’ve just inherited the biases of whoever made the data set.
3. Use a modification of (1), but label individual instances as “good” or “bad,” or somehow tag their “level of correctness.” You’ve basically just made (1) with the problems of (2) on top, congrats.
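To make the re-arrest point in (1) concrete, here’s a toy simulation (purely made-up numbers, not any real tool): two groups reoffend at exactly the same true rate, but one group is re-arrested more often when they do, so a model trained on the recorded re-arrest labels learns to score that group as higher risk.

```python
# Toy sketch of enforcement bias leaking into "recidivism" labels.
# All rates are hypothetical; this is not modeled on any real system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 100_000

# Group membership (or a proxy for it, like neighborhood): 0 = A, 1 = B.
group = rng.integers(0, 2, size=n)

# True reoffense rate is identical for both groups.
true_reoffense = rng.random(n) < 0.30

# But the chance of being re-arrested *given* a reoffense differs,
# because one group faces stricter enforcement.
p_rearrest = np.where(group == 1, 0.80, 0.40)
rearrested = true_reoffense & (rng.random(n) < p_rearrest)

# Train on what the system actually records: re-arrest, not reoffense.
X = group.reshape(-1, 1)
model = LogisticRegression().fit(X, rearrested)

risk_a, risk_b = model.predict_proba([[0], [1]])[:, 1]
print(f"predicted 'recidivism risk' — group A: {risk_a:.2f}, group B: {risk_b:.2f}")
# Roughly 0.12 vs 0.24, even though the true reoffense rates are identical.
```

The model is doing its job perfectly; the labels are the problem, and no amount of tuning on those labels fixes it.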
Half the time we build computer vision software, it fails because there’s a plant in the background or because someone is Black. Criminal sentencing is far too sensitive a domain for tools that overfit to extremely subtle social biases; they’re not usable in their current form.
Yes, these techniques are maximizing or minimizing some objective function, but fitting (or overfitting) to data produced by a heavily biased system is the fundamental problem. The model cannot do anything more than reinforce the system it’s trained to replicate, or conform to one person’s or team’s vision of that system.
Overall, things likely won’t change much if we trust an AI system to produce these results, and there may even be cases where a very racist judge is less racist because of the prediction (of course, the opposite is also true). But mostly it just lends a false air of validity to an already-flawed system.
The wrinkle is that uneven enforcement inflates crime statistics for groups facing stricter enforcement. Having the AI codify that bias isn’t an improvement.
I agree that is a confounding variable. Doesn’t mean you should just abandon using AI or numbers and go by the gut feeling of a judge or parole officer, though.
That’s really all I’m saying, and it’s really all that most people in favor of these tools are saying. From the outside, it’s easy to see that there would be a problem with your hypothetical “parole officers are harder on Blacks, so they violate more often, so they are denied parole more often.”

But I would bet my bottom dollar you never imagined the following absolutely true situation. An officer completed a re-assessment of an offender. The offender had been doing well, and the tool recommended less intensive supervision. The officer wanted to override the tool and keep him at the same level of supervision. When asked for the rationale, the officer replied, “He’s not living up to his potential.” That is absolutely not supposed to be a factor, but I’m sure it would have been an unseen factor if that officer had been making the decision based on his own judgement.