I think I would welcome robot police and courts. Would you?

Any kind of artificial intelligence close to human level, whether more advanced or more primitive, will have the robot making decisions, and that could lead to bad decisions or to the robot not obeying its programmed course of action.

Suppose you tell a robot it has two paths to take: the red road or the black road, but it cannot take the red road under condition X. If such a robot has artificial intelligence, what's to say it won't understand both outcomes and disobey its programming code?

The only way around that is a robot with no intelligence or decision-making, where every action is governed by obedient command-line code :frowning: a mindless robot, like a Borg drone in Star Trek. Then the only problems that will come up are hacking or bad programming code. We all know how well programming goes, given Microsoft Windows vulnerabilities and security updates.

Humans have the intelligence you want? Well, humans know the rules, yet human police officers disobey the law and the PD policy rule book. How could you think a robot with any intelligence or decision-making wouldn't disobey it too?

Good luck programming in a kill switch for when the robot disobeys its programming commands, or when there is a vulnerability or security problem like Microsoft Windows has.

We cannot even program an OS properly, let alone a robot.

I’m hanging on for robot hookers.

Are you a robot?

That isn’t really a robot doctor; that is a telepresence doctor. A robot doctor is a man-made machine that evaluates, diagnoses, and creates/implements a treatment regimen w/o any human input.

Watson does the first two a little bit, but not the third. Soon robots will do all three, and I cannot wait.

**FOR THE SOVEREIGN
STATE OF ILLINOIS

Order of Pardon**

I, Hubert Daniel Willikens, Governor of the State of Illinois, and invested with the authority and powers appertaining thereto, including the power to pardon those in my judgment wrongfully convicted or otherwise deserving of executive mercy, do this day of July 1, 2001 announce and proclaim that Walter A. Child (A. Walter) now in custody as a consequence of erroneous conviction upon a crime of which he is entirely innocent, is fully and freely pardoned of said crime. And I do direct the necessary authorities having custody of the said Walter A. Child (A. Walter) in whatever place or places he may be held, to immediately free, release, and allow unhindered departure to him . . .

Interdepartmental Routing Service
PLEASE DO NOT FOLD, MUTILATE,
OR SPINDLE THIS CARD

Notice: Failure to route Document properly.

To: Governor Hubert Daniel Willikens
Re: Pardon issued to Walter A. Child, July 1, 2001

Dear State Employee:

You have failed to attach your Routing Number.

PLEASE: Resubmit document with this card and form 876, explaining your authority for placing a TOP RUSH category on this document. Form 876 must be signed by your Departmental Superior.

RESUBMIT ON: Earliest possible date ROUTING SERVICE office is open. In this case, Tuesday, July 5, 2001

WARNING: Failure to submit form 876 WITH THE SIGNATURE OF YOUR SUPERIOR may make you liable to prosecution for misusing a Service of the State Government. A warrant may be issued for your arrest.

There are NO exceptions. YOU have been WARNED.

Virtual reality porn is about to arrive. I’ve seen videos of a robot hand with a Fleshlight that jerks you off in sync with the VR videos (no wonder the Terminators were so pissed off at humanity). Progress is being made; you just have to be patient.

:smiley:

I was hoping someone would post that! Couldn’t remember enough keywords to google it.

I’m very familiar with the state of the art in machine learning and intelligence; one of my current research projects is to use Monte Carlo random walk simulation and genetic pruning to evaluate a multi-body dynamical system for reliability and robustness, and to identify design solutions that optimize for these or other specified parameters. This capability–when it works–provides the ability to filter through vastly more design solutions than a human team of system engineers and analysts could, and can even identify novel combinations of the feature set. But this is a very specific application of directed heuristic methodology; it is not artificial general intelligence (AGI). Nor are the sophisticated stochastic filtering and kernel density estimate search algorithms used by search engines and GIS applications; they make it possible to harness ‘big data’ (enormous terabyte-size heterogeneous databases), but they don’t do anything they’re not directed to do, and they don’t ‘learn’ in the generic sense of building new processing pathways.

‘Fuzzy logic’ has become a generic pop-sci term which excuses science journalists from explaining or even understanding how Bayesian methodology is used to determine the most probable or most optimal conditions. Complex adaptive systems are not AGI; they’re just systems with an ability to make certain determinations of rules based upon sampling and filtering methods.
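If you're curious, here is a toy sketch of the general flavor of that approach: Monte Carlo sampling of a noisy system plus genetic-style pruning of candidate designs. It is purely illustrative; the design parameters, the random-walk load model, and the survival criterion are all invented stand-ins, not my actual research code.

```python
import random

# Purely illustrative: "noise" and "tolerance" are invented stand-in
# parameters for a real multi-body dynamics model.

def reliability(design, trials=200, steps=50):
    """Fraction of random-walk trials in which the design stays in spec."""
    survivals = 0
    for _ in range(trials):
        stress = 0.0
        for _ in range(steps):                  # random walk over load states
            stress += random.gauss(0.0, design["noise"])
        if abs(stress) < design["tolerance"]:   # stayed inside the envelope
            survivals += 1
    return survivals / trials

def prune(population, keep=5):
    """Genetic-style pruning: keep only the most reliable candidates."""
    return sorted(population, key=reliability, reverse=True)[:keep]

# Random initial population of candidate designs
population = [{"noise": random.uniform(0.1, 1.0),
               "tolerance": random.uniform(2.0, 10.0)}
              for _ in range(40)]

survivors = prune(population)
print("Most reliable candidate:", survivors[0])
```

A real pipeline would mutate and recombine the survivors over many generations; the point is only that the machinery is directed search, not anything resembling general intelligence.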

Artificial general intelligence–the machine equivalent of human cognition–continues to elude researchers working in the field, and neuroscientists have no more than very crude models of how cognition occurs in a holistic sense in human brains (or those of other complex animals). Natural language processing and interpretation has advanced incrementally, as have machine vision, learned proprioception, and other multivalued complex operations requiring extraction of specific conditions from abstract rule sets, but producing a machine that can hold a genuine conversation about complex philosophical or psychological concepts has proven to be a far more difficult problem than just implementing stochastic filters and genetic algorithms. The way the human brain functions is fundamentally unlike computer hardware and software, and it seems likely that AGI will come from adaptation of, or merger with, the mechanisms of human (or at least animal) cognition, e.g. the wetware of organic neural networks.

Stranger

I’m kinda reminded of my high school and college classes.

Some bullshit essay on an aspect of WWII. My analysis was either too simple or just fracking wrong because of some pet theory the instructor favored.

Grade questionable and I wasn’t happy.

Some bullshit story/poem I had to write for an English class. My fine work was either too “cliché” or “not flowery enough” or some other shit.

Grade questionable and, again, I wasn’t particularly happy.

Calculus class test. Integrate function X. I didn’t know exactly how to do it, or I fucked it up somehow in the process.

Yet another grade I am not happy about. But at least THIS time it was arguably right or wrong.

I guess the third result was marginally less unsatisfying than the other two.

It’s a question of time and change. I would query the idea that it is quantifiable, in the sense of having a fixed value for all times and circumstances: societies - and, crucially for the kinds of issues exercising public debate at the moment, different institutions and communities within societies - place different priorities on different values at different times and in different circumstances. So law and its interpretation/application change, as does the subsequent reinterpretation of the principles outlined in previous leading cases, in light of how they have been interpreted and applied.

How does a robot (and/or its programmers) replace political debate as to what the law should be, judicial interpretation of the language of the law and a jury’s evaluation of the credibility of evidence, and crucially, how does all this happen in a reasonable period of time?

How do humans cope with it? Humans are just machines too, made of meat instead of metal.

The OP asks us to assume some technologies that don’t exist yet, but (IMO) are not impossible. Retraining a collection of different humans to take into account some change in the law or its interpretation is almost inevitably going to be harder to do (and less consistent in its result) than updating a collection of machines of similar specification.

This is known as “life”. It isn’t always quantifiable or based on absolute truths, rights and wrongs. And therein lies the freedom that the OP took as an a priori.

If you have the power to eliminate societal ills with the wave of your wand, why have courts in the first place?

I’m not sure about you, but I would very much not want to be forced to serve on a jury that decides the life or death of someone I don’t agree is guilty, while having no say in the verdict.

I can’t even trust my toaster to toast properly most of the time. You have more faith than I. :slight_smile:

If it’s not quantifiable, how are you able to judge it right or wrong?

I’ll take a robo-pet or even a robo-wife (take out the sound card, please), but I don’t want a robo-cop, judge, jury, or executioner. Not yet, anyway. I think we’re hundreds of years away from AI that can accurately emulate the intricacies of law, jurisprudence, and social thought.

I would, however, like to see the dash-cam concept broadened significantly to include full video and sound recording of the activities of all public servants. While a public servant is on duty, the public should have access to the public servant’s services.

I believe video and data storage costs have tumbled to a level that makes providing this ubiquitously, across the board, not too cost-prohibitive. Some type of forehead-cam or glasses-mounted cam should suffice. If you’re a public servant and you’re on duty, you’ve got to have your forehead-cam on (you can turn it off at lunchtime or when taking a dump).

The video/audio gets streamed directly to a secure cloud and remains there until most statutes of limitations have been exceeded, then is automatically erased. What the public servant sees and hears, the public can view, by order of a judge.

I believe this will significantly reduce the amount of police abuse and also allow me to see where my mailman is wrongfully delivering my mail (I get neighbors’ mail delivered to my address nearly every day, and I know I’m not getting mail that is addressed to me. Where the hell are the jelly beans I ordered from Amazon last week, Mr. Mailman!)
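For the retention side of this, here's a minimal sketch of the logic I have in mind; the seven-year window (standing in for "the longest applicable statute of limitations") and the judicial-order flag are assumptions for illustration, not real policy:

```python
from datetime import datetime, timedelta

# Assumed retention window; a real system would look up the actual
# statute of limitations per offense and jurisdiction.
RETENTION = timedelta(days=7 * 365)

def should_erase(recorded_at: datetime, now: datetime) -> bool:
    """Footage older than the retention window is automatically erased."""
    return now - recorded_at > RETENTION

def may_view(has_judicial_order: bool, erased: bool) -> bool:
    """The public can view only unerased footage, and only by court order."""
    return has_judicial_order and not erased

# Example: footage from 2010 checked in 2020 is past retention
print(should_erase(datetime(2010, 6, 1), datetime(2020, 6, 1)))  # True
```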

Morally, aesthetically, or for the purposes of estimating someone’s academic progress? These are subjective judgements, based on a combination of a priori standards and internalised communal or institutional experience. The kinds of problems the OP is trying to operationalise out of existence arise because different people and institutions have internalised different standards and experiences, which have therefore drifted apart. The only quantification that I can see applying is (a) voting or (b) judges assessing precedents and trying to come to some sense of what is generally held to be right in society (which isn’t necessarily the same as what is reflected in election results).

I still don’t really understand what you’re saying. Is it possible to come to a common-sense judgement that simply cannot be explained or rationalised? A conclusion that is right, but we can’t explain why?

I don’t understand how we would know that such a judgement was the correct one, or why we would think so.

We’re a ways off from ceding authority to AI, though Google is working hard to prove me wrong. We might be seeing related developments soon, though. Computer algorithms already assist with extending loans in a big way. They are also used to help physicians interpret electrocardiograms. It’s not too hard to imagine a program that summarizes relevant precedent and recommends a sentence based on past judicial cases. The judge could use it as a baseline or toss it out altogether as he sees fit. Over time, deviations from the recommendations might get smaller, since disagreements with the norm may make the judge susceptible to review by appellate courts.
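As a rough sketch of that baseline idea, a recommender could do something as simple as averaging the sentences from the k most similar past cases. The features, case data, and function names below are invented for illustration; a real system would be vastly richer:

```python
# Toy precedent database: (severity score, prior convictions, sentence in months)
PAST_CASES = [
    (3, 0, 6), (5, 1, 18), (7, 2, 36), (8, 4, 60), (4, 1, 12),
]

def recommend_sentence(severity, priors, k=3):
    """Average sentence of the k nearest past cases (Euclidean distance)."""
    def dist(case):
        s, p, _ = case
        return ((s - severity) ** 2 + (p - priors) ** 2) ** 0.5
    nearest = sorted(PAST_CASES, key=dist)[:k]
    return sum(sentence for *_, sentence in nearest) / k

# The judge treats this as a baseline, free to deviate with justification.
print(recommend_sentence(severity=6, priors=1))  # 22.0 months on this toy data
```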

Of course we can, but the problem is in constructing the system in such a way as to allow for the changing interpretations of “correct” at different times and in different circumstances, and for those changes to be rationalised/quantified/robotised at some speed.

I don’t trust humans, let alone AI.

Anything that makes choices or decisions, I don’t trust.

And unless the programmers of the future can program an OS a hell of a lot better than programmers today, disaster is about to happen.

A robot kill switch sounds nice, but if there are holes in the programming, what’s to say whether it will work or not?