This is parody, right? (Meme about extreme libertarianism)

Was this written by a human? How would we know for sure?

Tibby, blink three times if a sentient toaster has a gun to your head.


And, the bigger question: doesn’t this discussion sound like the first scene of a bad science fiction movie?
(Where the last line is spoken over a desolate landscape: “We were so focused on whether we could do it, that we never asked if we should do it…”)

Not long ago, the notion of a chess engine defeating a grandmaster world champion seemed like magical thinking. Yet today, AI stands virtually unbeatable at the board. Many would once have called AI-driven vehicles magical thinking too, but we’re now on the brink. We’ve witnessed the transformation from yesterday’s magic to today’s reality.

I reject the idea that bias is an elusive property beyond AI’s grasp. We can and must weed it out, and I don’t understand the reluctance to believe we can. I’d be horrified to learn, for example, that someone’s AI-driven car, in an effort to avoid a head-on collision, chose to veer into a black person, or a woman, instead of a trash can, because racist or misogynistic bias hadn’t been weeded out of the program.

AI already plays a role in our judicial system, albeit as a tool. But I see it evolving further, not stopping here.

Here are a couple of relevant articles:

https://dash.harvard.edu/handle/1/37377475

https://www.linkedin.com/pulse/ai-judges-how-we-can-use-make-our-justice-system-fair-mehmet-beyaz

Magical thinking? Programmers (and grandmasters) worked nonstop on developing chess programs pretty much as soon as computers were a thing, from Alan Turing onward.

Limitations in chess engines were always about Moore’s Law and had nothing to do with AI. Computers simply lacked the power to run the algorithms necessary to play at a high level.
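To put rough numbers on the horsepower problem, here’s a back-of-the-envelope sketch in Python (the ~35 legal moves per position is the standard rough estimate for chess; everything else is purely illustrative):

```python
# Why raw speed was the bottleneck: a naive full-width search of the chess
# game tree grows exponentially with lookahead depth.
# Assumption: ~35 legal moves per position (a standard rough estimate).
BRANCHING_FACTOR = 35

def positions_at_depth(plies):
    """Leaf positions a full-width search must examine at a given depth."""
    return BRANCHING_FACTOR ** plies

for plies in (2, 4, 6, 8, 10):
    print(f"{plies:2d} plies ahead: ~{positions_at_depth(plies):.1e} positions")
```

Each extra ply of lookahead multiplies the work by about 35, so every leap in hardware bought only a little more depth. The search ideas themselves were decades old.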

By the time neural networks became a thing in chess, computers had been beating grandmasters for a couple of decades.

There was never really a question of whether a chess engine could eventually beat a world champion; it was a matter of waiting for computers to get powerful enough. We lacked raw horsepower, but the basic computational pieces had been there for decades. Perhaps the general public didn’t realize it would happen until it did, but the folks who knew about computers understood it was only a matter of time until the silicon caught up. For them it was a question of when, not if, because the basic problem was already solved: faster computers would be better computers for chess.

Those same folks are less sanguine about AI as judge and jury because the same analysis doesn’t work. Computers at a fundamental level were already there for chess decades ago but weren’t fast enough. That’s not true for the justice system. The algorithms don’t exist in the first place. The basic problem hasn’t yet been solved, so waiting for the machines to get faster doesn’t mean they will get better.

Take a chess engine and ask it to generate AI art, and it will fail - it’s not designed for it. Likewise, take an engine for AI art and ask it to win chess tournaments, and it won’t do well.

The first article you cite says pretty much what I’ve been saying: for administrative tasks, i.e. the ones that can be reduced to simple tasks and rules, AI works. It is a qualitative and quantitative leap to go from there to an AI that can replace our justice system.

The second article is mostly an opinion piece, and one written by a CTO, i.e. somebody whose livelihood depends on the notion that technology can and will solve all our problems.

Even before computers, for that matter. As soon as any kind of calculating machine became a thing, people who understood that chess is a fully deterministic game understood that it was theoretically possible for a machine to calculate an optimal sequence of chess moves. Nothing “magical” about it.

As you [ETA: and Great_Antibob] note, the only reason that “chess engines” didn’t reign supreme much sooner was the practical limitations on their computing power.

I was referring to the Deep Blue/Kasparov match in 1997. Maybe enlightened programmers weren’t shocked by the outcome, but the general public and chess aficionados were.

And even then, computers weren’t beating grandmasters through some deep understanding of the game: they were brute-forcing a few rules of thumb and exploiting what amounted to an internal move library to pick the move statistically most likely to lead to victory.
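Stripped to its bones, that recipe is just this (a toy sketch in Python, with a take-1-to-3-stones game standing in for chess; it illustrates the technique, not Deep Blue’s actual code):

```python
# "Brute force plus rules of thumb": full-width minimax search to a fixed
# depth, scoring frontier positions with a crude heuristic. Toy game:
# players alternately take 1-3 stones; whoever takes the last stone wins.

def legal_moves(pile):
    return [n for n in (1, 2, 3) if n <= pile]

def heuristic(pile, maximizing):
    # The "rule of thumb": leaving your opponent a multiple of 4 is good.
    to_move_is_losing = (pile % 4 == 0)
    if maximizing:
        return -1 if to_move_is_losing else +1
    return +1 if to_move_is_losing else -1

def minimax(pile, depth, maximizing):
    if pile == 0:
        # The previous player took the last stone and won.
        return -1 if maximizing else +1
    if depth == 0:
        return heuristic(pile, maximizing)  # no understanding, just a guess
    results = [minimax(pile - m, depth - 1, not maximizing)
               for m in legal_moves(pile)]
    return max(results) if maximizing else min(results)

def best_move(pile, depth=4):
    return max(legal_moves(pile),
               key=lambda m: minimax(pile - m, depth - 1, False))

print(best_move(10))  # -> 2, leaving the opponent a losing pile of 8
```

Swap in a chess move generator and a material-count heuristic and you have the skeleton of a classical engine; nothing in it “understands” anything.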

Precisely, and now the situation is reversed: the programmers would be shocked if an AI good at administering a justice system appeared in the next couple of decades, while the general public is out there dreaming up Frankenstein scenarios of ultra-powerful AIs.

Okay, sure!

But you’re trying to use this as an example of a transition from “magical thinking” to reality, and that’s simply an inaccurate description of what happened. Deep Blue (and Deep Thought a decade earlier) was really really neat, but that was always where it was going. It was a logical outcome predicated on steady and predictable increases in computing power.

What you’re positing is not the logical outcome of a steady and predictable increase in computing power. A true artificial intelligence capable of administering justice in a perfectly fair manner is (currently) science fiction. It’s not just “what we have now but running on a more powerful machine.”

That comment nearly fried my chips. I’ve got to go offline and cool my motherboard.

Humans have never been able to weed bias out of any system we’ve created. Why are you so certain we’d be able to do it with AI systems? If we could make AI systems without bias, why would we need them to adjudicate our legal system in the first place? Why wouldn’t we just make the justice system itself without bias?

For one thing, a typical trial includes a single judge, plus one or two attorneys each for the defense and the prosecution. The strength of numbers can be a powerful force in mitigating bias, assuming a collective commitment to impartiality. When a system of checks and balances is in place, biases tend not to accumulate; rather, they are diminished.

The verdict of a case seems more trustworthy to me when determined by a panel of 12 jurors rather than one. Similarly, I would prefer the judgment of a collective of judges over a single judge, who might be influenced by transient personal issues. I’d also like a team of lawyers on my side, rather than one.

A lone judge may be more susceptible to undisclosed biases compared to a collective of judges. Likewise, a team of thoroughly screened programmers is less likely to exhibit bias than an individual programmer, or a single judge.
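That intuition is easy to sanity-check with a toy simulation (a sketch in Python; the 70% reliability figure and the panel sizes are made up for illustration, and it assumes each decision-maker errs independently):

```python
# A toy check on the "strength of numbers" intuition: each decision-maker
# independently reaches the correct verdict with probability p, and the
# panel goes with the strict majority (a tie counts as getting it wrong).
import random

def panel_is_correct(p, size):
    correct_votes = sum(random.random() < p for _ in range(size))
    return correct_votes > size // 2

def accuracy(p, size, trials=100_000):
    return sum(panel_is_correct(p, size) for _ in range(trials)) / trials

random.seed(0)
for size in (1, 3, 12):
    print(f"panel of {size:2d}, each member 70% reliable: "
          f"{accuracy(0.7, size):.1%} correct overall")
```

Majority voting dilutes independent errors, which is the point; the caveat is that a bias shared by the whole panel survives the vote intact, hence the screening.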

The self-described libertarians I have met all tend to be wealthy, or at least upper middle class.

There is a strong “fuck you, I’ve got mine” attitude. I have yet to meet a libertarian who is in the working class, trying to support their family.

In my opinion, a libertarian society would quickly devolve into a “Lord of the Flies” situation, which is not far removed from pure fascism.

Screened by who?

Screened by my gaggle of well-respected sitting and retired judges, as I mentioned upthread.

Right. Who was screening those judges for the programmer screening job, again?

That would be me.

No, I think you’re asking for too much accountability. A manageable team of respected judges is fine, but too many cooking judges spoil the broth.

Managed by who? Respected by who?

I’ve met a couple of academic libertarians who didn’t have that vibe.

A little orthogonal, but I liked this image:

Managed by the team leader. Respected by the large cross-section of folks who voted them into the position.