FWIW I identify as paleo-conservative/libertarian, the latter admittedly as a beautiful ideal that is probably unattainable, and I have been lower-middle class my whole life. Ironically, I have been the beneficiary of a generous social welfare state and in a pure you’re-on-your-own society would probably have starved to death. Yet if it ever came to that pass, while it would suck to be me, I still wouldn’t say that I deserved charity just for existing. I accept that whatever I’ve received has been a pure gift. I certainly wish that I were far more self-sufficient than I’ve been able to be.
Who picks the team leader?
How is that not just going to end up with people making AI who share the biases of the people who voted them into that position? How do you avoid the problem of people voting for judges who will ensure their biases are encoded into the AI?
That’s not “irony,” that’s “hypocrisy.”
Glad to hear that you’re heroically 100% true to your ideals no matter what the cost. You’re an inspiration to us all.
Well, that’s the thing about principles, isn’t it? If you only hold to them when it’s easy, you never had them in the first place.
That’s easy to say now, when you have had the benefit of not starving to death. But if you had been starving, would you still have shrugged your shoulders and said that you didn’t deserve help?
Presumably you work? Pay taxes? Spend money at businesses so that they can pay their own employees?
A state with a strong social welfare infrastructure isn’t doing it because it believes in charity and gifts. It does it because it recognizes that productive human beings are its single most valuable resource, and indigent human beings are a massive drain on resources no matter how hard we try to ignore them.
You weren’t given a gift. You were invested in.
Also, strong safety nets (and the Affordable Care Act and bankruptcy law) encourage risk taking (and job mobility). Government-as-a-stodgy-bank funds a safety net to avoid indigence (and to avoid weighing down immediate family members). Government-as-venture-capitalist constructs a safety net to encourage risk taking.
“It’s not a bug, it’s a feature.”
Tibby needs to be mindful of over-selling AI, because that creates a hurdle to continuous process improvement, which is really the best that can be hoped for.
Biased jackbooted judges encoding their beliefs into AI potentially has the virtues of transparency, adjustability, and maybe even limited accountability. I hypothesize that it’s better to tweak an algorithm than to have thousands of unelected judges exercising their whims, accountable only to unelected appeals courts. (Which works fairly well in the US IMHO, all things considered, though not exactly in the way I’d like them to. We have due process (for now); perfect justice remains undefined and therefore unobtainable.)
Maybe we could start with AI traffic court judges (litigants restricted to smart cars that retain data), and work out the bugs from there.
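To make the “tweak an algorithm” point concrete, here’s a minimal sketch of what a published, adjustable traffic-court rule might look like. Everything in it (the policy name, the thresholds, the dollar amounts) is hypothetical, not from any real system; the point is just that every input and parameter is visible, so changing the policy is an auditable one-line edit rather than retraining thousands of judges:

```python
from dataclasses import dataclass

# Hypothetical policy parameters. Because they live in one published
# place, "tweaking the algorithm" is a visible, auditable change.
@dataclass
class SpeedingPolicy:
    grace_mph: float = 5.0          # margin over the limit before any fine
    dollars_per_mph: float = 15.0   # fine per mph over the grace margin
    reckless_mph: float = 25.0      # overage that escalates to a human judge

def adjudicate(recorded_mph: float, limit_mph: float,
               policy: SpeedingPolicy) -> dict:
    """Return a ruling plus the exact inputs used, so every decision
    can be checked against the published policy."""
    overage = recorded_mph - limit_mph
    if overage <= policy.grace_mph:
        return {"ruling": "dismissed", "overage": overage}
    if overage >= policy.reckless_mph:
        # Edge cases still get kicked to a human.
        return {"ruling": "refer_to_human_judge", "overage": overage}
    fine = (overage - policy.grace_mph) * policy.dollars_per_mph
    return {"ruling": "fine", "overage": overage, "fine_usd": round(fine, 2)}

# Smart-car telemetry says 52 mph in a 35 zone.
print(adjudicate(52.0, 35.0, SpeedingPolicy()))
# {'ruling': 'fine', 'overage': 17.0, 'fine_usd': 180.0}
```

Whether real disputes can be squeezed into rules this explicit is exactly what the traffic-court pilot would test.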
Maybe we could abandon the whole AI justice idea as asinine.
We’re like people in the 1600s talking about building space ships. Any resemblance between our ideas and some future reality with real space ships is purely accidental. And probably mostly in spite of our primitive musings, not because of them.
Maybe we could be less dismissive of our fellow posters who are looking for even modest alternatives to our current “you don’t get what you deserve, you get what you negotiate” society.
Psh, no.
Do you think it’s possible for humans to write laws that can be reasonably described as not biased?
Just the written laws themselves, not the way they’re implemented or enforced. I get that the enforcement will be biased. Let’s assume the law, for this hypothetical, doesn’t have to counteract that bias, it just needs to be unbiased all on its own. Call it a starting point, can humans create an unbiased starting point?
Laws are about human action: what you must do or must not do. Permissibility and impermissibility are products of a given society’s subjective needs, beliefs, taboos, etc. They’re inherently biased.
Part of the problem here is that people often conflate “unbiased” with “fair,” but they aren’t the same thing. Compassion, empathy, and charity are all expressions of bias.
But those bigger groups are still fallible. At times you have a lone voice in the wilderness, and that voice is the right one. The US Constitution was the product of a consensus of people, and it wasn’t perfect; it still isn’t.
But once you hand things over to AI, you no longer have that lone voice. You no longer have any voice. Every case will be decided the exact same way. When this AI inevitably proves to be biased, you will have no lone voices, because you will have quashed them.
The premise of this question is itself flawed. Implementation, enforcement, historical biases, and all sorts of human foibles are all inextricably tied.
Remember “separate but equal”? We had that bullshit for decades, and it was nominally unbiased and ‘fairly’ enforced.
Hell, this question and its flaws are part of what makes up Critical Race Theory. The (to be fair, very common) notion that we can somehow find and use such an unbiased starting place independent of subsequent enforcement/implementation/whatever is itself one of the tools by which systemic bias is and has been perpetuated.
So, no, bootstrapping an unbiased system by starting with unbiased components is not a viable solution. My evidence: recent (and honestly most of) human history.
Besides the fact that I don’t think present AI is anywhere near up to the job, a major problem I have with the idea of an AI justice system is that realistically there won’t even be an attempt to create an “unbiased” AI judge. It will instead be carefully trained towards authoritarianism, class bias and bigotry, and its main “advantage” will be a lack of compassion and no sense of self preservation.
So it will without hesitation condemn the innocent because they fit the profile it’s been trained to persecute, and help push society over the edge into fascism or collapse because it has no concern for the consequences to itself. Human judges at least need to fear a mob or a dictator’s enforcers breaking down their door and killing them; an AI won’t care. A human might care about others, or at least the survival of their nation or humanity; an AI will knowingly send everyone and everything to destruction if so programmed.
And while a more “personlike” AI might avoid those problems, that just re-introduces the problems posed by human judges.
It’s a massive sidetrack (not sure where it started) but the idea of AI judges is almost as bad as the libertarian society from the memes in the OP. It would basically just encode the existing prejudice in the criminal justice system in a completely opaque, unaccountable form.
Case in point (and this was another example where I was sure it was parody when I first saw it, it was so on the nose): a recruiting AI system decided the best-qualified candidates are the ones who are “named Jared and have played high school lacrosse.”
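The mechanism is mundane, which is what makes it scary. Here’s a toy sketch (nothing here is from the actual system; the data and scoring rule are invented) of how a model “trained” on skewed historical hires ends up weighting proxy features like a first name or a sport:

```python
# Toy historical hiring data: (first_name, played_lacrosse, was_hired).
# The past hires skew toward one demographic, so the "signal" the model
# finds is just the old humans' bias echoed back.
history = [
    ("jared",  True,  True), ("jared", True,  True),
    ("jared",  False, True), ("tyler", True,  True),
    ("keisha", False, False), ("maria", False, False),
    ("keisha", True,  False), ("wei",   False, False),
]

def hire_rate(rows):
    """Fraction of rows that were hired."""
    return sum(1 for *_, hired in rows if hired) / len(rows) if rows else 0.0

# "Training": estimate P(hired | feature) from the biased history.
by_name = {name: hire_rate([r for r in history if r[0] == name])
           for name in {r[0] for r in history}}
lacrosse_rate = hire_rate([r for r in history if r[1]])  # 0.75

def score(name, lacrosse):
    """Average the per-feature hire rates: looks 'objective', but every
    number in it is inherited bias. Unseen names get a neutral 0.5."""
    sport = lacrosse_rate if lacrosse else 1 - lacrosse_rate
    return (by_name.get(name, 0.5) + sport) / 2

for name, lax in [("jared", True), ("keisha", False)]:
    print(name, round(score(name, lax), 3))
# jared 0.875
# keisha 0.125
```

At no point did anyone type “prefer Jareds” into the system; the bias rides in for free on the training data, and once it’s buried inside a bigger model it’s far harder to spot than in eight rows of toy data.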
I guess to tie the two together, it’s back to the idea of magical thinking.
Somehow, AI or subscription protection rackets or blowing up the government and starting over will result in an improved situation for all of us, and not just for the haves who have almost always benefited. We just have to abdicate our responsibility and choices to other people (or machines) who, through the magic of the market/technology, will be forced to do better for us poor peons.
Because those things can’t fail to do so according to whatever pet theory people have.