This is parody, right? (Meme about extreme libertarianism)

How would a human create an AI without including the subconscious biases that humans unknowingly harbor?

By weeding them out over subsequent iterations.

Who’s doing the weeding? How do the weeders not inject their unconscious biases into the process?

For the crucial task of creating an AI judge, I envision assembling a well-vetted, inclusive team of coders. Additionally, I expect qualified overseers to meticulously review their output. I believe that Version 1 will be at least as fair as a competent human judge, and subsequent iterations will strive to approach the ideal.

Who vets the team? Who picks the overseers?

I dunno, how about a team of well-respected sitting or retired judges? Perhaps they can be elected to the position.

So we elect a group of qualified individuals, perhaps using some kind of proportional representation. These elected officials, whose beliefs will reflect the biases of those who elected them, will be responsible for appointing unelected officials to carry out certain duties which will ultimately dictate the means and methods by which the government enforces its laws, presumably as outlined by legislation or even new amendments.

Congratulations, you’ve invented an executive system of government. :slight_smile: But you definitely haven’t removed human bias from the equation, because you can’t remove human bias from human systems, and anything we program will itself be a human system.

Respected by who? Sonia Sotomayor and Sam Alito are both respected sitting judges, but they’re not respected by the same people, and they’re going to have very different ideas about what a properly unbiased AI would look like - just as they have very different ideas about what a properly unbiased human judiciary would look like.

Because elections have such a good track record of selecting unbiased candidates up to this point?

It’s almost certainly easier for a panel of judges to identify and weed out bias in the abstract than it is to identify and avoid bias when personally presiding over hundreds of individual cases.

Doctors can identify best health practices and tell everyone what they are, even if they can’t manage to do those things in their own lives.

Traffic engineers can identify the right and wrong way to drive on their streets, even if they personally drive like assholes.

Will AI judges ever be completely free of bias? Perhaps not. However, this should not deter us from pursuing this endeavor. Our goal is not perfection, but improvement. AI judges need only surpass the fallibility of their human counterparts. I firmly believe that they can achieve this and more.

Moreover, there are additional advantages to adopting AI in the legal system. Swift trials would become a reality, and the persistent case backlogs could significantly diminish.

In summary, while perfection may elude us, striving for progress through AI judges promises a fairer and more efficient judicial landscape.

And this is where the real disagreement is - I agree with the goal certainly.

But I think this method is flawed in the same way many technological ‘fixes’ are flawed - it treats technology as magical. AI will somehow be less biased because “magic”…er “technology”.

If our goal is a more just society, we should use technology as one tool certainly. And we have been - a lot of the statistical analyses and data collection is there and people are clever about finding new uses.

But we shouldn’t treat technology as a magical cure-all for our societal ills. It’s one part, and the biggest part still remains the need for us humans to become better people ourselves.

One thing AI is good at is identifying patterns, especially in noisy data. Providing human beings with such identification, across all judicial activity, could be a useful tool for humans to identify problem areas. Human. Beings.

:face_with_monocle:

If this is a joke about using AI, it’s a good one.

I was thinking of AI and robots, except without the AI.

People have an extremely wrong idea about current “AI”, starting with the supposition that it’s actually AI, which it is not.
It’s a good tool to generate somewhat coherent text.
Coherent text != text based on facts.
Coherent text != text based on ethics or law.
Just that, coherent not-totally-gibberish text.

It really doesn’t. It mostly just offers us injustice at a faster rate, and I’m not sure that’s a positive.

Yeah 100% this. The MAGA movement, and the overwhelming seamless transition from “free thinking libertarians” to card carrying MAGAts, has completely destroyed whatever legitimacy it had. Supporting Trump and believing in liberty and freedom are clearly diametrically opposed beliefs that share nothing in common.

That said, if MAGA is condemned to the political wastelands by an overwhelming electoral defeat for the GOP*, I wouldn’t bet against a lot of those MAGAts seamlessly transitioning back to “free thinking libertarians”.

* - a complete, across-the-board loss at all levels for the GOP that threatens its existence as a national political party. Something I fervently hope will happen, though it’s more of a hope than an expectation :frowning:

But think of the savings!

There is another big problem with replacing all of our human judges with AI judges, even if we assume that AI judges have significantly less bias than most human judges.

The human judges will all have their individual biases, but they will be biased in various different ways. You might get a judge who dislikes black people, or you might get a social justice warrior judge. The AI judges, by contrast, are all going to be biased in the exact same way, so the social impact of those biases will be magnified. If for some reason the AI has a bias against green-eyed people, then green-eyed people will become an underclass overnight.

Then there is the issue that every gamer who has played against an AI figures out: if you find a blind spot in the AI, you can exploit it to your advantage. If for some weird reason the AI associates being a Broncos fan with innocence, then defense lawyers are going to recommend Broncos jerseys as essential court attire.

I perceive scope for measurable and adjustable bias with AI, relative to human implementors of the justice system. You can’t eliminate bias if there’s no consensus on the baseline. But for issues where there is consensus, there may be scope for automation.

My take is that a) basically all current self-described libertarians are conservatives and b) it was always thus. Call it my favored hypothesis, because I’m not entirely convinced of it. But here’s the framework.

All parts left of center (and most parts near center) are animated by utilitarianism and/or social justice for the downtrodden. It’s a coalition. Conservatives are animated not by principle, but by opposition to all parts left of center; they typically conflate radicals and liberals. Rhetoric about principle is chiefly a means of evading utilitarian balancing of benefit and harm.

Libertarians are usually (not always - see the Cato Institute) uninterested in empirical observation. They like drugs and dislike religious conservatives. That’s about it, though. Concern for the downtrodden? Natch. Weighing of harms and benefits? Only insofar as hypothetical Libertopia benefits outweigh actually observed harms. That may be consistent with a narrow utilitarian philosophy, but it’s inconsistent with utilitarian practice, which is empirical.

It’s conservatives all the way down.

Moreover:

Conservative self-descriptions should be met with extreme skepticism. Because if they were the people they said they were, they would oppose authoritarianism within their clique. But they don’t, so they are not. Stuart Stevens: “It was all a lie.”

In a scientific and scholarly environment, it’s standard practice to assume good faith among the participants. I agree with that. There are many advantages; one of them is that if anybody doesn’t play by the rules you can say, “Come back when you have a real argument.” So an assumption of good faith becomes oddly enough a very loosely enforced requirement for good faith.

The problem is that this ethic tends to bleed over into sociological observation of the history of thought. Deception, self-deception, and cognitive dissonance are human traits, and some humans are more introspective about them than others. So yes, assuming good faith among historical participants shields the observer from some of their own tendencies toward self-deception. But it’s still an assumption, and one which distorts historical and contemporary understanding of politics and conservatism.