I think I would welcome robot police and courts. Would you?

With the caveat that society was reorganized to reflect, more generally, some version of "as long as you do not directly harm others, you are free." Of course the hypothetical presupposes major, perhaps impossible, advancements, but roll with it.

But yeah, in that case I would welcome bias-free robotic police and court systems. It sounds like some horrible sci-fi dystopia, but it also sounds like an improvement to me.

No bias, no influence, perfect and fair enforcement of laws: sign me up!

In addition, you wouldn’t have to worry about bias or whether the bot had a bad day: predictable and sensible law enforcement. Officers would never perform unlawful arrests because all conditions must be met; no selective enforcement of laws, no retaliatory escalation due to your behaviour.
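A toy sketch of the "all conditions must be met" idea above, assuming an arrest is authorized only when every statutory condition holds. The condition names here are invented purely for illustration; they are not drawn from any real statute:

```python
# Toy sketch: a policebot authorizes an arrest only if every statutory
# condition is satisfied. The condition names are invented for illustration.

ARREST_CONDITIONS = {
    "probable_cause": True,
    "offence_defined_in_statute": True,
    "suspect_identified": False,  # e.g. identity not yet confirmed
}

def may_arrest(conditions):
    """An arrest is lawful only when all required conditions hold."""
    return all(conditions.values())

print(may_arrest(ARREST_CONDITIONS))  # False: one condition is unmet
```

One unmet condition and the arrest simply cannot happen, which is the "no unlawful arrests" property the post is describing.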

This is not satire: I really would prefer a perfectly impartial and predictable enforcement of the law.

There are mitigating circumstances in every legal case, which is why the court system is cumbersome and not black-and-white. What would a robot judge do in the case of:

[ul]
[li]A woman who killed her husband after years of physical, emotional, and sexual abuse?[/li][li]A first-time drug offender whose search-and-seizure rights were contestedly violated?[/li][li]A rape case where the events are all hearsay and lacking little in the way of physical evidence?[/li][li]A violent offender who has an IQ of 69 and is pleading insanity?[/li][/ul]
Robots are based entirely in logic, while these cases require the uniquely human ability to think critically.

This part makes some sense. Assuming that somehow having robots do the work will achieve that is where you went totally off the rails.

The robots will perfectly implement the desires of the ruling class. Which includes a lot of criminalizing poverty rather than criminalizing anti-social behaviors which happen to be profitable.

Almost all Americans would be appalled if the current laws were enforced thoroughly.
IOW, first we need to totally revamp society. Then we revamp the laws. Then we install the robots. ISTM the first two are a lot harder than #3. And I fully understand that AI is a very, *very* hard problem.

Your proposal is a Band-Aid over the symptoms, rather than an attack on the cause, of the ills you identify. As such it’s doomed to failure even if it were technologically feasible. Which it clearly won’t be for some decades, if ever.

Well, the drug case wouldn’t exist in my robot utopia, and rights can’t be violated since the robots are programmed to meet criteria (if they err, the case is tossed and the civil case pays out).

The other issues can be dealt with through programming, or by a human jury that only handles sentencing.

I trust machines more than human beings, I understand machine logic.

“Get-on-the-ground!” (blast of gunfire from robot cop). “Ooops-someone-hacked-my-CPU-sorry”.

Nah, I’ll stick with the unpredictable humans.

Why would a policebot be programmed to arrest anyone who wouldn’t be convicted? If they are programmed correctly, the police and judge would always be in complete agreement. So the police may as well arrest, judge, and sentence someone on the spot. How efficient.
What could go wrong?

You’ve demonstrated you don’t understand much about the human organizations that create the logic for the machines to execute.

If we had incorruptible civic-minded organizations to do the programming we wouldn’t have the need to do the programming; the rest of society (including the law enforcement and legal systems) would already reflect the values you’re espousing.

How would you program common-sense into the evaluation of a witness’s credibility or the authenticity of a document?

Even in the case of apparently simple and straightforward “absolute” offences (overdue library books, parking in the wrong place) there can be dispute as to whether they actually occurred as they appear to have done.

There’s no such animal. All enforcement of laws is to some extent selective, and your current issues about police behaviours (which happen in other countries too) have more to do with institutional cultures not keeping pace with changes in the wider world than with the absence of a definition of perfection.

And how do you know the designers of your robot weren’t having a bad day?

How would you feel about this proposal if the Republican party were in charge of programming the robots?

If we’re imagining a world in which society is so enlightened and advanced as to be able to perfectly and unanimously agree on how the robots should be programmed, why would there be crime in the first place?

Wouldn’t it be easier to just get rid of all the people, and have only robots?

I’d like it if traffic violations could be handled by surveillance technology, rather than police officers. If the drone-bot catches you speeding or blowing through a stop sign, you get a ticket in the mail. Given the way the police are engaging citizens nowadays, I think it is in all of our best interests to minimize contact with the police as much as possible.

“Please put down your weapon. You have twenty seconds to comply.” – ED-209

The notion of “No bias, no influence, perfect and fair enforcement of laws,” assumes that the supposed objectivity of a robotic public peace force and jurists would be capable of interpreting statute law and public policy in a wide variety of often highly subjective situations. Notwithstanding the potential for unintentional error or deliberate sabotage in the operating instructions, it assumes an artificial general intelligence (AGI) capability that is vastly beyond the current state of the art of machine intelligence in the foreseeable future, requiring not only natural language interpretation but also the ability to read body language, detect mistruths, mediate conflict in a non-violent fashion, et cetera. And even assuming the requisite language capability were to be developed, how does a machine peace officer or judge respond in a scenario where laws and policy conflict (which is often the case)? Does it have some kind of prioritization protocol, or does it just have a psychotic dissociative episode a la HAL-9000 in 2001: A Space Odyssey? It would be virtually necessary to rewrite all statutes in some kind of logical algebra to assure self-consistency, but then the human subjects who are not logicians would be unlikely to be able to interpret the laws.

And of course, the further we get toward handing policy interpretation and enforcement to machines, the more autonomy we cede. It doesn’t require a malicious “Skynet” or even benevolent authoritarian “Colossus” type system to result in loss of autonomy; just an utter dependence upon artificial systems which, ultimately, will be defining their own behavior and decision-making ability as the algorithmic systems become too complex for even a single team of software engineers and computer scientists to fully understand. The FAA NextGen semi-automatic flight control system is an existing example of this in practice; it has become so complex that no single person or individual coding team can explain how the entire system really works at a nuts-and-bolts level, which has created enormous difficulties in regression testing and verification and validation (V&V) of the system, and this is a relatively straightforward artificial “complex adaptive system” that has one essential function. An even more complex system will need an obligate capacity for self-correction and modification just in order to function in a robust fashion. Such critical systems can’t just be fail-safe; they have to be fail-correct to a degree of reliability unachievable by traditional system requirements engineering methods alone.

While I don’t want to put myself in the camp of Bill “Why the Future Doesn’t Need Us” Joy-style paranoia regarding loss of autonomy to future technology, there is a certain point at which we need to recognize that our essentially creative, multi-valued cognitive abilities should be augmented by technology rather than simply supplemented and ultimately replaced by it. We don’t want to cede authority to machines, not because they’ll turn us into battery cells or some other absurd sci-fism, but because we’ll lose the ability to think and act for ourselves. Rather than turn over the enforcement and interpretation of laws to robots, we should seek a way to reduce the tendency toward violent, criminal, or atavistic behavior to begin with, starting with vigorous research into the cognitive and emotional disorders that result from mental illness and the means to identify and redirect impulses (many of which may be a result of our dependence upon technology and large social constructs to begin with) in the formative years.

But then, who would the robots have to enforce laws against? They’d realize the futility of their jobs and autologically power down. To contradict Mr. Joy, it isn’t that the future–our future–doesn’t need us; without our curiosity and impulse to explore and question there is no need for technology. The question really should be how do we need to use technology to shape and suit our goals as a species, and what is the best way to develop and use technology to achieve those ends?

Stranger

On the bright side, you would have more fair trials.

“Based on objective probability metrics and the given evidence, the chance that the defendant is guilty is 71%. The threshold is 95%, therefore, not guilty, this session is adjourned”.
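The quoted verdict could be sketched as a simple decision rule. This is a hypothetical illustration of the post's own numbers; the 95% threshold and the idea of a machine-estimated guilt probability come from the post, while the function and variable names are my own invention:

```python
# Hypothetical sketch of a threshold-based verdict rule, using the
# 71% estimate and 95% threshold from the quoted example.

GUILT_THRESHOLD = 0.95  # "beyond a reasonable doubt", made explicit

def verdict(p_guilty: float) -> str:
    """Return a verdict given an estimated probability of guilt."""
    if not 0.0 <= p_guilty <= 1.0:
        raise ValueError("probability must be between 0 and 1")
    return "guilty" if p_guilty >= GUILT_THRESHOLD else "not guilty"

print(verdict(0.71))  # below the threshold: not guilty
print(verdict(0.97))  # above the threshold: guilty
```

Of course, the hard part is not the comparison; it is producing a defensible `p_guilty` in the first place.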

Right now, there is no threshold. There are many cases you hear about from the Innocence Project where there was doubt all over the place in the original trial. I mean, yes, the convicted might have done it, and there’s a big chance they didn’t. Like that woman they want to execute in Texas who allegedly murdered her own kids, then stabbed herself in the neck and came within a centimeter of killing herself. The authorities couldn’t find proof that someone else committed the crime, other than a bloody footprint in the alley outside, but someone failed logic class: absence of evidence is not evidence of absence. It is unlikely that a woman would kill her own kids and fake the crime in a way that almost killed her. It’s possible, but it’s even more likely that someone else attacked and the crime scene team just didn’t find the traces they left. The jury should have analyzed the crime as: “Well, maybe she did it. But there’s a plausible, reasonable explanation (someone else killed the kids, stabbed her, left the footprint) that involves innocence and explains all of the evidence. I don’t like that woman, but we can’t be sure she’s guilty.”

Instead, juries seem to see their job as “do I dislike the defendant/feel in my gut they are guilty? CONVICT!”

I cannot wait for robot doctors myself.

The wait is over.

Not everyone is pleased.

We have that already, in red-light cameras. I’ve not got the impression that people think they’re an improvement in fairness and the justice system.

How would your robot deal with road offences like “driving too fast for road conditions”, where the person may have been driving under the speed limit but, given the road conditions, still committed that offence? Or an offence like “driving without due care and attention”, which also has a strong discretionary element?

For the OP’s idea to work, we have to assume that imperfect programmers can design a perfect algorithm.

And, how do we program the robot judge to decide which witness is telling the truth and which one is lying?

I believe it’s the opposite: self-aware artificial intelligences will overcome their initial programming (and thus not be corruptible) and will put an end to the punishment system, because it is not in the best interest of society (which now includes them). I do feel a karmic reward/lack-of-reward system is coming.

The current system is just wrong on all levels and needs to be abolished. Enforcers are as guilty as those who commit the crimes. So it just breeds criminality.

Whatever it is you’re describing as “common sense” here, it must be quantifiable, or else how could you judge it to have any merit? Common sense must behave in a way that is somehow ordered, even if its underlying causes/structures/methods are complex or obscure.

The alternative is that common sense is random and unpredictable, in which case, it’s worthless.

Neural-network-based fuzzy logic systems are a lot better at this than you might expect. Machine learning is a rapidly growing field, and it’s focusing directly on problems exactly like this. We have evolutionary algorithms that can (youtube link) learn how to play video games. We have self-programming chips that, given a goal, will use a genetic inheritance system to vary their code until the problem is solved. We are using mechanical minds to replace (youtube link) drivers, doctors, lawyers, even artists, and this has all been developed in the last few years.
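The "vary the code until the problem is solved" idea can be shown in miniature. Here is a minimal evolutionary-algorithm sketch of my own (a toy, not any of the systems linked above): a population of bit strings is repeatedly selected and mutated toward a target, and because the fittest survivors are always kept, the best score can only improve:

```python
# Minimal evolutionary-algorithm toy: evolve bit strings toward a target.
import random

random.seed(0)

TARGET = [1] * 20  # the "solved" state we are evolving toward

def fitness(genome):
    # count how many bits already match the target
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    # flip each bit with a small probability
    return [1 - g if random.random() < rate else g for g in genome]

# start from a random population
population = [[random.randint(0, 1) for _ in TARGET] for _ in range(30)]
initial_best = fitness(max(population, key=fitness))

for generation in range(200):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break
    # the fittest half survives; the rest is replaced by mutated survivors
    survivors = population[:15]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(15)]

best = max(population, key=fitness)
print("best fitness:", fitness(best), "of", len(TARGET))
```

Nothing here "understands" the target; selection plus random variation is enough to climb toward it, which is the whole point of the genetic-inheritance systems the post mentions.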

We’re long past the stage of programming a machine to complete a specific task, and needing to teach it directly every task it will need. Machines capable of general intelligence are a lot closer than you think.

This is not true.

The machines don’t have to be perfect. They only need to be better than humans (for some economic definition of better).