We’ve had the AI and personhood debates elsewhere before, but I was thinking of something interesting: we don’t actually need artificial intelligence to be that advanced to start using it in elective office.
Let’s say you like Democrats in your legislature. Couldn’t you just elect an AI to vote the way a Democrat should based on a logic that mimics ideology? And wouldn’t this be advantageous because it would be immune to corruption since it doesn’t care whether or not it advances its political career or makes money? It doesn’t care if it’s liked or wins reelection.
Of course, you can have more complex algorithms. Perhaps you like liberals, but you’re from Texas so you want your Democratic AI to be conservative on guns and oil. An AI could be programmed to balance the needs of the region it is representing with its ideology.
Another advantage I see is in the disputes over science. A sufficiently advanced AI should be able to figure out what the scientific consensus on an issue is by having access to peer-reviewed research. It could also be an expert on constitutional law and programmed to vote against anything unconstitutional, even if it would otherwise be something it would support.
I’m not sure we’d want to have AI executives, due to the need for human judgment on matters of war and peace and dealing with foreign leaders and peoples. But legislators and judges pretty much just vote the way their party affiliation predicts the vast majority of the time, so why not get the same result but without the corruption? Of course, humans could run against the AIs to argue for whatever it is they think humans can do better in a legislature.
Well, two things here. All this does is shift the person who can be corrupted from the politician to the AI developer. Also, although an AI might be immune to corruption, it becomes vulnerable to program glitches and the potential for hacking. It is very tricky to remove humans from human systems.
Again possible, but non-trivial. Especially the first part. Being able to derive consensus requires a certain degree of reasoning that is not very easy to put into AI systems right now. The second requires only an expert system and so wouldn’t be too hard especially since I suspect that the law can be expressed as a logic system. In fact, a quick literature search shows that there are such systems already.
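To make the “law as a logic system” point concrete, here is a minimal sketch of how an expert system might encode rules and chain them forward. Everything here is a toy illustration I made up: the rule names, the bill features, and the two-rule “constitution” are hypothetical, not real legal doctrine.

```python
# Toy forward-chaining rule engine: a minimal sketch of the
# "law as a logic system" idea. Rules and facts are invented
# illustrations, not real statutes or doctrine.

RULES = [
    # (rule name, premises that must all hold, conclusion)
    ("speech_clause", {"restricts_speech"}, "unconstitutional"),
    ("spending_ok", {"appropriates_funds", "enumerated_power"},
     "constitutional_basis"),
]

def infer(facts):
    """Apply every rule repeatedly until no new conclusion appears."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for name, premises, conclusion in RULES:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# A hypothetical bill tagged with two features:
bill = {"restricts_speech", "appropriates_funds"}
print("unconstitutional" in infer(bill))  # True for this toy bill
```

Real systems in the literature below are far more sophisticated (defeasible rules, argumentation, exceptions), but the basic shape is this: premises in, conclusions out.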
Oh wait, I’m getting a message from one of my AIs:
beep Yes, yes you should. Elect Kill-Bot 9000 in 2020! beep
It would certainly take major advances in AI for this to work, but those advances could still fall well short of a Turing test.
While developers could be corrupted, AIs entered into elections could be certified by regulators to perform as advertised, with grounds for immediate shutoff and a special election if they fail to do so once in office.
If we could accomplish that, then shouldn’t we be able to accomplish immediate recall for a human politician that fails to work as promised?
Keep in mind, I don’t necessarily disagree completely. I think the rise of AI is going to be helpful in many human endeavors; it’s the focus of my own work, after all. I think that AI systems might allow for enhancements of human intelligence, like the aforementioned expert systems. A politician might be able to make better decisions, and the electorate might be able to be better informed. But we already have the means for a more informed populace, and I have to say, from what I see routinely posted on Facebook, it isn’t working. Or rather, maybe it isn’t as widespread as I would like. So ultimately, the problem for any human-dominated system is always going to come down to us.
The problem with humans is humans. Ok that’s pretty axiomatic, but still true. That’s why we can’t have nice things.
I was perhaps a bit cavalier in saying such systems already exist. What I meant to say is that there is research into such systems. Whether commercially available systems exist, and whether any are good, I don’t know.
But here are a few of the things I found. Keep in mind, I didn’t read these in great detail or do a very thorough search.
Susskind, R. E. (1987). Expert systems in law: a jurisprudential inquiry. Clarendon.
Susskind, R. (2000). Transforming the law: essays on technology, justice and the legal marketplace. Oxford University Press.
Rissland, E. L. (1989). Artificial intelligence and law: Stepping stones to a model of legal reasoning. Yale LJ, 99, 1957.
Wahlgren, P. (1992). Automation of legal reasoning: a study on artificial intelligence and law. Chicago.
Verheij, B. (2003). Artificial argument assistants for defeasible argumentation. Artificial intelligence, 150(1), 291-324.
The nasty, if correct answer, is that such an AI would certainly be possible for Republicans - who have preprogrammed positions on all subjects handed down by talk show hosts - but not for Democrats - who naturally delve into the facts of issues to come up with best operational responses.
The real-world answer is that we already have an equivalent: referendums. States allow their publics to vote into law simplistic approaches that carry no nuance. The subsequent effects are a basket of deplorables. The public complains about 900-page bills, but those at least have specifics that courts can deal with as cases arise. The world is that complex and must be dealt with at that level of complexity.
Any sentient adult knows that climate change is real and the single largest danger facing the world. That’s trivial. What should be done about this by Congress is neither trivial nor expressible in any simplistic form.
Treating the political process this way isn’t merely bad understanding; it’s bad thinking. You can never recover from a mistake of this magnitude.
One has to remember how the current efforts in artificial intelligence applied to public chat fared:
Currently it is clear that the AI does not understand, but I do think that even if it did, the result would have been similar. Just remember how many humans fall for what people are doing and saying on the internet.
Related to that, I remember that Google is making efforts to have their search engine give preference to results that are more closely supported by facts or valid research. The pseudoscientists out there are crying foul.
Although true, it isn’t necessarily applicable. That chat bot failed because it was being trained by the Internet. Presumably a hypothetical political-AI would be trained in-house and, once suitably trained, could be unleashed on an unsuspecting public. Sorry, Kill-Bot 9000 was influencing me again; I definitely should not have given him that mind control chip. Although I maintain the degree of reasoning required by a political-AI is beyond our current capabilities. I wouldn’t say that it would need to have general intelligence (strong AI, general AI, whatever term you like best), but it would certainly be one of the highest-reasoning AIs ever built if it existed.
People can do all of those things too; we don’t need AI to accomplish them. Whether people decide to pay attention to science or constitutional issues is another thing.
What if people vote in an AI that refuses to listen to science? It would be even harder to change its mind than it is to change a politician’s.
One of the nice things about people being in charge, rather than AI, is that people actually have to live here. I don’t know that I trust an AI algorithm not to take some ideas to their extremes (prevent war and famine by killing everyone, for example); people can be convinced otherwise.
The problem with the OP is that voting is perhaps the easiest part of the job of a legislator. An AI might be able to do that in some reasonable time, but not the important parts, some of which are:
Negotiating compromises on bills
Conducting or participating in legislative hearings. Do you think an AI would be able to rip Stumpf a new one the way Congresspeople from both parties did?
Running an office
Dealing with constituent complaints and requests.
Getting goodies for its district.
An AI which did get elected would never get reelected.
At a lower level, how about a law-parsing system? It could look over a proposed bill and determine if it actually says what the author meant it to say.
Such a system might spot a redundant “not” that actually negated the intent of the bill. Such a system might also note that the bill contradicts another existing law, in such a way as to open things up for a court fight; the system could recommend overtly referring to (and amending) the other law.
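A full system would need real parsing, but even the redundant-“not” check can be gestured at with a crude heuristic. This is only a sketch on invented text: the pattern, the sentence splitter, and the sample bill language are all my own assumptions, not how an actual drafting tool works.

```python
import re

# Naive sketch of one law-parsing check: flag sentences in a draft
# bill containing two or more negations ("not"/"no"), since a stray
# double negative often inverts the drafter's intent. A real system
# would need genuine syntactic parsing; this is just a heuristic.

def flag_double_negatives(text):
    """Return sentences with two or more 'not'/'no' tokens."""
    flagged = []
    for sentence in re.split(r"(?<=[.;])\s+", text):
        negations = re.findall(r"\b(?:not|no)\b", sentence.lower())
        if len(negations) >= 2:
            flagged.append(sentence.strip())
    return flagged

draft = ("A permit shall not be denied to applicants who do not "
         "meet the residency requirement. Fees are due annually.")
for s in flag_double_negatives(draft):
    print("REVIEW:", s)  # flags only the double-negative sentence
```

The flagged sentence plausibly says the opposite of what was intended, which is exactly the kind of thing a human reviewer skims past.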
By coincidence, today’s “Freefall” online comic has a quip about End User License Agreements being of petabyte size, so humans can’t possibly read them. We could probably – today? – make a system that reads the Microsoft EULA and warns us of terms we should be alarmed by.
The cites brought up by BkB are interesting, because my first thought on this would be that AI functioning as a judicial assistant would be – and probably will be – a perfect example of practical, useful AI in government. The commercial spinoffs of IBM’s Watson are intended to do exactly that: function as assistants capable of very rapidly doing extensive research, performing reasoned assessments on it, and thus acting as capable advisers in the field in which the AI has been trained.
Replacing officials altogether – especially elected political ones – is a whole different thing that will require a major leap of faith. I can’t see that happening in any reasonably foreseeable future. But if it ever did, I would suggest that it would NOT take the form of replacing individual elected officials, because the entire structure of government is premised on human imperfections and biases. We have legislatures containing representatives from many different regions reflecting different interests. We have a separate judicial branch to act as a check on the legislature, with many different judges at different levels, and an executive branch engaged in a constant tug of war with the legislature. And some of that population of governmental humans is not just a distribution of power and diversity of viewpoints, but also a way to distribute the massive workload.
None of the above is relevant if an AI ran the place. If we could ever make the leap of faith to have a well-informed, well-balanced AI make the decisions, we could dispense with all that. All that would be necessary to replace all three branches of government would be one all-knowing supercomputer!
IMO, the advantage of AI is that the AI won’t be influenced by lobbyists, and will stick to the principles it’s programmed with even when it’s inconvenient. Imagine Bush v. Gore decided by AI judges, even assuming those AIs are programmed with a conservative or liberal judicial philosophy.
The fundamental problem with an AI “solution” to government has not changed.
Consider the original “AI” government solution: gods.
They were put forward for all the same reasons that electro-mechanical “intelligences” have been. The idea was that they would be impervious to bribes, steady in their principles, and unquestionable because their followers implicitly believed in them.
The flaws with AI are the same as with gods. The big one is that it is INDIVIDUAL PEOPLE who choose the principles that these AIs (gods) will hold to, and it is INDIVIDUAL PEOPLE who will program and maintain the AIs (gods).
More than anything else, the MOST fundamental problem with AI government is that it is inherently bad for an individual or a people to refuse to do all of the work, and make all of the choices, required to conduct their lives.
Replacing humans with machines in this way is an attempt to artificially remove personal responsibility for everything from the people who will STILL be carrying out all actions.
Am I the only one here concerned about a poster named BeepKillBeep arguing that this stuff is A) plausible, and B) not necessarily an abject disaster in the making?
If he/it had joined yesterday to post in this thread I’d get the joke. Instead he/it has been here for months. This seems more … premeditated. :eek:
We should do a startup. There are natural language processing and information extraction systems out there already. My former company sells one, though I have no idea of how good it is. (And I bet NSA has one.) Even the biggest bill is trivial compared to parsing an internet feed.
Now, I suppose a sufficiently intelligent person could write a bill to get past the scrutiny of such a system - but we’re talking legislators here, so no problem.
A fascinating training exercise for that system would be to feed it the current laws of various states and have it spit out the contradictions, confusions, and such.
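As a toy version of that exercise, suppose each provision were reduced to a subject and a value; then a contradiction is just the same subject assigned conflicting values by different statutes. The statute names, subjects, and values below are entirely invented for illustration; real provisions obviously don’t reduce to neat triples.

```python
from collections import defaultdict

# Toy sketch of "feed it the laws, spit out the contradictions":
# each provision is (statute, subject, value), and a contradiction
# is any subject given conflicting values. All data is invented.

provisions = [
    ("Title 3 \u00a712", "minimum_fishing_age", "16"),
    ("Title 7 \u00a74", "minimum_fishing_age", "14"),
    ("Title 3 \u00a713", "license_fee_usd", "25"),
]

def find_contradictions(provisions):
    """Map each conflicted subject to the statutes that disagree."""
    values = defaultdict(set)
    sources = defaultdict(list)
    for statute, subject, value in provisions:
        values[subject].add(value)
        sources[subject].append((statute, value))
    return {subj: sources[subj]
            for subj, vals in values.items() if len(vals) > 1}

for subject, conflict in find_contradictions(provisions).items():
    print(subject, "->", conflict)
```

The hard part, of course, is the extraction step that turns statutory prose into anything like those triples; that’s where the NLP systems mentioned above would have to earn their keep.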