See post 17
and your earlier comment about the same thing in post 49.
Of course, you’re completely right and I agree, and it does bear repeating.
Tackling bias in AI is complex, as it involves the data and the algorithms used. To reduce bias, we need varied and inclusive data, and we must keep testing and improving AI to spot and fix biases. Having a diverse team helps prevent unnoticed biases. While AI can help with coding, we still need people to make sure AI acts fairly and ethically. The aim is for AI to make better decisions without being unfair. I’m not a coder and don’t think AI can program itself yet, but I’m optimistic it will in the future. We should ensure there are rules to keep the process in check.
Replacing humans with AIs won’t solve anything, because the current generation of AIs are like humans, except more so. To get something without our faults, you’d need, at the least, a completely different fundamental architecture for AI.
This is my point - who writes the rules and checks for bias?
If AI is to take over those tasks, we need to universally agree that we will allow it to do so, and we need to figure out how the rules AI will follow will be developed.
Those rules will ultimately be written by, as you note, a diverse group of people, many of whom will want to ensure their own biases are included, not excluded. The way we do it now is Google or Amazon or whoever tries NOT to include bias and still fail. What happens when the decision makers deliberately and systematically try not only to include bias but hide it so other people don’t notice?
AI the way you envision becomes a chicken and egg problem. In order to bootstrap this theoretically evenhanded, unbiased AI, we people need to be sufficiently evenhanded and unbiased before it becomes possible.
I went to a libertarian college*. Surrounded by Randists (Ayn, that is) who seriously believed that in “a perfect America ruled by the Free Market, where the government has no power over you”, you’d have your choice of who to call if your house was burning down, or needed the police.
I can’t tell you how many times I heard a variation of:
Wouldn’t it be great if you heard an intruder, and YOU got to decide who’d respond?
'I’m going to call Cops, Inc, because I just saw a really good ad!’
‘But Paul’s PO-POs promise they’ll be here in ten minutes! And I’ve got a coupon…’
*I got better…
If you don’t mind my asking, which college? I know lots of liberal colleges, and a few religious ones, but I haven’t heard of any that trend Libertarian.
Hillsdale College. Just outside the Twin Cities (of Osseo/Pittsford, at the bottom of Michigan).
Back in my day, when they revered Classic Conservatives, William F. Buckley showed up for some weekend seminars, and my limey Philosophy prof was actually knighted for his “consulting work toward privatisation during the Thatcher reign”.
But they’ve devolved toward kowtowing to far-right donors. Now they pull stunts like inviting Ann Coulter to speak to the student body.
Want to be scared? After racial-equality advocates debuted “The 1619 Project”, Hillsdale countered with their own “1776 Curriculum”.
Scarier? Ron DeSantis is a big fan, and is using Hillsdale’s curriculum as a template to try to remake Florida schools.
Wow, I’m sorry for your trauma and am glad you managed to make a full recovery.
Thanks, I credit Jesus and the Dope.
Ok, that sounds like a story from the Really New Testament…
And Jesus didst say unto the dope: “Verily, verily, I say unto thee, didst thou or didst thou not notice the inscription upon yonder archway behind thee, that spelleth out פֶּתִי?”
And the disciples were sore amused, and didst hold their forearms acrosst their bellies as they did guffaw.
Footnote: פֶּתִי - The original Aramaic carries the connotation of “gullible”.
The development of AI to replace lawmakers, judges, and trial attorneys is not an insurmountable task. AI excels in focused tasks and is advancing rapidly. For instance, DeepMind’s AlphaGo defeated the world champion Go player, and before that, Deep Blue triumphed over Kasparov, the chess champion. Currently, no grandmaster can outplay the latest chess engines. AI outperforms human cognition in many areas and continues to grow stronger.
I believe creating an unbiased judicial system is another challenge AI could tackle. While bias is more subjective than games like chess or Go, it’s plausible to develop algorithms capable of identifying and eliminating bias. With a diverse team of well-vetted, top-tier programmers working on an impartial judicial model, it’s conceivable that each iteration will be less biased than the last, ultimately achieving near-zero bias, far surpassing human capability.
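The claim that “each iteration will be less biased than the last” presupposes a measurable bias score to drive down. A minimal sketch of one common proxy, a demographic-parity gap; the group names and decisions are entirely hypothetical:

```python
# One way to make "less biased each iteration" measurable: track a
# demographic-parity gap (the spread in favorable-outcome rates across
# groups) for every model version. Groups and decisions are hypothetical.

def parity_gap(outcomes):
    """outcomes maps group name -> list of 0/1 favorable decisions.
    Returns max group rate minus min group rate; 0.0 means parity."""
    rates = [sum(v) / len(v) for v in outcomes.values()]
    return max(rates) - min(rates)

# Version 1 favors group_b heavily; version 2 treats the groups alike.
v1 = {"group_a": [1, 0, 0, 0], "group_b": [1, 1, 1, 0]}
v2 = {"group_a": [1, 1, 0, 0], "group_b": [1, 1, 0, 0]}
print(parity_gap(v1))  # 0.5
print(parity_gap(v2))  # 0.0
```

Real systems track several such metrics at once (parity, error-rate balance, calibration), and in general they cannot all be satisfied simultaneously, which is part of why “near-zero bias” is harder than it sounds.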
The real question is not about the feasibility of such development (I believe it is feasible), but rather its likelihood. It’s doubtful that lawmakers, judges, and attorneys would passively allow AI to replace their roles.
Bias just means deviation from an arbitrary baseline. If the baseline itself isn’t in the right place, then no degree of bias reduction will help.
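That point can be made concrete: the very same decision rate reads as biased against a group or in its favor depending on which baseline you measure it against. A toy sketch with hypothetical numbers:

```python
# "Bias" as deviation from a baseline: the verdict flips depending on
# which baseline you choose. All numbers here are hypothetical.

def bias(observed_rate, baseline_rate):
    """Deviation of an observed favorable-outcome rate from a baseline."""
    return observed_rate - baseline_rate

observed = 0.40  # favorable-outcome rate some model produces for a group

# Against one baseline the model looks biased against the group...
print(round(bias(observed, 0.50), 2))  # -0.1

# ...against another, biased in the group's favor. Same model, same data.
print(round(bias(observed, 0.30), 2))  # 0.1
```

Reducing the magnitude of that number only helps if the baseline itself was placed correctly, which is exactly the disagreement the rest of this thread is about.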
To put it another way: how can we program an AI to dispense justice, when humankind can’t even come close to agreeing on what “justice” even means?
That would be a good start.
Now we’re talking.
If that’s true, then humans can’t come close to administering unbiased justice either. But, at least the AI could come closer to the ideal, and it’s much quicker. Plus, a world without lawyers is a better world indeed.
On the positive side, if we eliminate humans, we have a chance of slowing the holocene extinction. Animals win.
“It won’t work, but at least it will fail faster!”
Maybe not the greatest sales pitch.
The point about justice is accurate, though. Each of us has our own idea of what “unbiased justice” should look like. The tricky problem is that no two ideas are really the same. This is the point that may be getting lost. My idea and my neighbor’s idea of unbiased justice don’t match. Who is more correct? Who can even judge that? That’s different from playing chess - chess has clear, objective rules with no need for interpretation. Our judicial system is preoccupied almost entirely with interpretation.
There may not actually be a Platonic ideal of “justice” as envisioned. The idea that AI could get “closer” to such makes the assumption it exists in the first place. That may be a discomforting thought, but it’s the situation we find ourselves in. We should not put faith in some White Knight (or White AI?) riding in to become our Lord and Savior. That sort of thinking is one of the ways that gets us into the sorts of societal issues we currently have.
I’m considering the concept of prejudice mainly in terms of race, color, national origin, gender, and religion—tangible biases. It’s plausible for AI to be programmed to disregard these factors, mirroring the impartiality expected of justice (blind justice). This aspect is crucial, especially in law enforcement contexts.
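What’s described here is sometimes called “fairness through unawareness”: strip the protected attributes before any model sees a case. A minimal sketch (the field names are hypothetical, not drawn from any real system):

```python
# "Blind justice" as fairness-through-unawareness: remove the protected
# attributes before the model ever sees the case. Field names are
# hypothetical, not drawn from any real system.

PROTECTED = {"race", "color", "national_origin", "gender", "religion"}

def blind(case):
    """Return a copy of the case record with protected fields removed."""
    return {k: v for k, v in case.items() if k not in PROTECTED}

case = {"charge": "theft", "prior_offenses": 0, "race": "...", "gender": "..."}
print(blind(case))  # {'charge': 'theft', 'prior_offenses': 0}
```

The catch is that deleting the fields does not delete their correlates (neighborhood, name, school), so blindness alone does not guarantee impartiality; that is the harder, subtler problem.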
Addressing more subjective and nuanced biases is a more difficult challenge that will require more time. However, as AI advances in cognitive capabilities, potentially reaching a state of consciousness or self-awareness, it should be equipped to tackle these subtler biases as well.
you’d have your choice of who to call if your house was burning down, or needed the police.
That is literally how firemen used to work. It was not a particularly constructive system (especially as fire tends to move on to other people’s houses too).
If that’s true, then humans can’t come close to administering unbiased justice either. But, at least the AI could come closer to the ideal, and it’s much quicker. Plus, a world without lawyers is a better world indeed…
Will an AI judge be able to feel compassion or mercy - or are you saying that compassion and mercy have no place in the justice system? Will they do what’s best for society, or will they place individual rights first? Will their goal be prevention, rehabilitation, vengeance or justice?
Will you be allowed to appeal? If you are, will you appeal to another AI judge with the same programming? Will that make a difference?
Right. Judges have latitude when it comes to sentencing, and that’s a place where they have an opportunity to correct for other systemic flaws - a process which itself is biased. Two theoretical thought processes:
This youth is guilty, but black people are much more likely to be arrested and charged for a given crime than white people. If the defendant had been white he very likely would have been let off with a warning. I could send him to a DJJ program for a month, but this will undoubtedly derail his education, harden him against the system, and cause emotional distress. Probation and enrollment in a community-driven redirection program is a better choice.
Sure, this kid did something genuinely horrible and shows basically no remorse, but he comes from a good family. Can’t throw the book at somebody from such an upstanding family, and boys will be boys. Probation, time served, and community service is a better choice.
You can’t let a judge do #1 without letting a different judge do #2 (heeheeheee), that’s the whole point of an independent judiciary and sentencing guidelines that judges are free to work within.
We’ve seen what happens when you take that away: there are a lot of nonviolent offenders in jail for a very long time because of three-strikes laws and mandatory minimum sentencing. And those laws, surprise surprise, disproportionately affect minorities.
So before we engineer our sentient Platonic-ideal justicetron 3000, we need to engineer a society that is fully, genuinely free of systemic bias at all strata, not just at the level of justice departments. Lots of systemic issues thrive precisely because “blind application” of black and white laws are crafted to pretend these issues don’t exist in the first place, thus reinforcing them.
AI judges could embody compassion and fairness, free from the subconscious biases that humans might unknowingly harbor. Humans, being human, have personal biases that they may not even be aware of. For example, if a defendant reminds the judge of his spouse, he may subconsciously show more leniency…or more harshness if he hates his spouse. Or, maybe the judge is just having a bad day and feels like punishing someone for it. AI wouldn’t have those subconscious biases.
By design, AI could prioritize prevention, rehabilitation, and justice, ensuring that vengeance is not part of their programming. This could lead to a more balanced and consistent application of the law.
Appeals would be tried by AI appeal judges, trained specifically on the appeal process.