AI Computing For President

Forgive me if this sounds like a stupid idea (and feel free to point that out), but I’m proposing a form of governance based on artificially intelligent computing. The reason behind this thought, besides the imperfection of mankind, is that Earth’s population has grown so exponentially that wherever you go you’re liable to find complex, interwoven and yet polarized networks of interests that no amount of reason or consideration in a political decision-making process can ever contain or appease. The current form of governance in the most democratic nations is suffering from the very core tenet of democracy, namely that every single person is entitled to an equal vote in an election, and this includes all kinds of idiots who thoughtlessly and irresponsibly continue to make matters worse by voting like shit-for-brains. This is nowhere more apparent than in the current successful bid by a person like Donald Trump. You may hate him all you like, but you can’t dismiss the fact that the thing he’s the most aggressive symptom of is democracy.

With my proposed solution, however, there’ll be none of that shit: no presidents or political leaders with dicks that need to be sucked, agendas that need to be fulfilled or conspiracies that need to be effected. A supercomputer will only need maintenance staff, a reliable programmer, and perhaps some polishing every so often. The benefits of this model of governance are mind-blowing. No more will political or economic decisions be made at the top or bottom of some clientelistic network; no decisions will be made mindful of popularity, public opinion or the media; no decisions will be made to suck up to this or that political actor. Everything will be absolutely evidence-based, impartial and pragmatic, in a process that sees the supercomputer gather all the necessary information about a situation, apply an AI model to it, and make the goddamn decision with barely a flick of voltage.
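
In case that sounds hand-wavy, here’s roughly the loop I have in mind, as a toy Python sketch. Every function and name in it is an invented placeholder for machinery that doesn’t exist yet, not any real system:

```python
# Toy sketch of the proposed governance loop: gather the evidence, have
# the model score every candidate policy, enact the winner with no veto.
# All of this is hypothetical placeholder code, not any real system.

def gather_evidence(situation: str) -> dict:
    # Stand-in for "gather all the necessary information about a situation".
    return {"issue": situation, "facts": ["fact A", "fact B"]}

def model_score(policy: str, evidence: dict) -> float:
    # Stand-in for the AI model. Note that this scoring rule is where a
    # definition of "the common good" would have to be pinned down.
    return len(policy) / (1 + len(evidence["facts"]))

def govern(situation: str, options: list[str]) -> str:
    evidence = gather_evidence(situation)
    scored = [(policy, model_score(policy, evidence)) for policy in options]
    best, _ = max(scored, key=lambda pair: pair[1])
    return best  # enacted directly; by design, nobody gets to override it

print(govern("healthcare funding", ["expand coverage", "cut spending"]))
```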

An additional benefit to this model is that all decisions will be made based on the equality of all people before the coded master. This means progressive decisions can be made on fronts that are usually sensitive and hard to get past representative majorities and the general public, such as same-sex marriage, abortion and other things that don’t sit well with conservatives, where liberals currently have to wait for a generation of conservatives to die off before they can shake things up a little more with the next one. That is an absurd approach to getting anything done when the right to be homosexual or anything-sexual, or to abort a fetus, should be recognized and respected regardless of whatever the hell anyone thinks, believes or reads in their Mother Goose books. Furthermore, this model will have a significant effect on how people protest, because they’ll know there is no changing a computer’s opinion on a decision that was made based on facts. In such a world, lobbying would be rendered meaningless, and the things that benefit the vast majority of people in the most versatile way would be put into effect, while ensuring that no single case of injustice goes unredressed, and that’s that. The same goes for foreign policy: no more nonsense about national dignity and sovereignty and all that fraud in whose name millions of people have died and continue to die.

The only problem with this whole thing, though, is that the supercomputer will still have to make some self-serving decisions, in terms of preserving the status quo and ensuring an uninterrupted supply of electricity, but that’s still something I can get behind, bearing in mind that the benefits far outweigh the costs.

GIGO: Garbage In, Garbage Out.
Who would be doing the programming? Who would decide what information would be fed into this computer? Who would enforce the computer’s decisions, and who would have the authority to override them?

  1. Who would be doing the programming?

A: If there is any possibility of this happening, you can be sure whoever puts it into effect will be an extremely progressive government or entity, and the question would be more about how the decisions will be made and what metrics will be used to determine the common good.

  2. Who would decide what information would be fed into this computer?

A: If technology sufficiently advanced to manufacture this gadget is available, it won’t be difficult to imagine a situation in which the computer retrieves information on its own without needing to be ‘fed’.

  3. Who would enforce the computer’s decisions?

A: Again, assuming it is possible at all, the computer will have enough legitimacy and control to be served by an executive branch (of humans, that is). Not an army of robots, if that’s what you’re thinking of.

  4. Who would have the authority to override them?

A: No one, and that’s the whole point.

1.A.Q>Is this just a given in your scenario, or are you assuming that this is almost guaranteed to be the only possibility? If the latter, you need to back it up.
2.A.Q>If the computer is grabbing all sources of information, it’s going to grab a lot of worthless shit and lies, and once it is known that the computer is doing this, agencies will force-feed it propaganda.
3.A.Q>People will have no love for an enforcement arm they have absolutely no control over. All decisions enforced will be seen as “cold” and “machine-like”.
4.A.Q>CompuGod ain’t gonna last long.

I think there will come a day when an AI can do a better job than a human at the role of governing. The problem is that the way we get to that point will involve designing AIs that end up fairly inscrutable to us, so there will be no way, really, to be sure that the benefit being served by the AI’s governance is OUR benefit.

  1. I don’t have anything to back this up with. I’m just assuming this would be the case, because who would actually go through the cost, time and pain of proposing a machine capable of rational, responsible and fair governance if not a progressive?

  2. That’s true, and hence the AI.

  3. That’s the point. It’s no different from how law is applied around the world, and it comes down to the difference between low- and high-trust societies. People will have fewer problems obeying an annoying law that they know applies to everyone else equally, as long as they know for a fact that it does. The reason you pull over to the side of the road, receive a fine and pay it is that you have a level of trust in the system; if you saw other people getting away with it, your behavior would change. People like to get instructions and be told what to do, but they also dismiss a lot of good advice or oppose a lot of good causes owing to the person who’s giving the advice or promoting the cause, including their background, race, religion, accent, hairdo and everything else about them as a person. I’m hoping you now understand how the matter stands in the case of a computer issuing cold and machine-like orders, or suggesting them, or providing them as a government consultant.

  4. NO ONE TALKS LIKE THAT ABOUT COMPUGOD!!!

Vote Robotron in 2084!

Let’s assume hypothetically that such a computer exists. You would still have the same problem that you currently have with any form of government: namely, agreement on what criteria constitute “effective governance”. How does the AI determine whether it’s better to maximize overall productivity or minimize income inequality?
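
To put it concretely: any scoring rule the machine uses has to commit to that trade-off somewhere, and the commitment is a value choice, not a fact. A toy illustration in Python (the policies, numbers and weights are all invented):

```python
# Two made-up policies measured on two competing objectives; the figures
# are invented purely for illustration.
policies = {
    "policy_A": {"productivity": 0.9, "equality": 0.3},
    "policy_B": {"productivity": 0.5, "equality": 0.8},
}

def score(metrics: dict, weight: float) -> float:
    # 'weight' says how much productivity matters relative to equality.
    # No pile of evidence can tell the machine what this number should
    # be; it is a value judgment supplied by whoever sets it.
    return weight * metrics["productivity"] + (1 - weight) * metrics["equality"]

for w in (0.8, 0.2):  # a growth-first weighting vs. an equality-first one
    best = max(policies, key=lambda name: score(policies[name], w))
    print(f"weight={w}: the 'rational' choice is {best}")
# weight=0.8 picks policy_A; weight=0.2 picks policy_B. Same facts,
# opposite decisions, and the difference lives entirely in the weight.
```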

Gather the facts, then study millions of papers, theories and case studies to determine the best course of action.

On a long enough timeline this is inevitable. The quality of human decision-making won’t be able to compete with machine decision-making, and, among other things, the nations that don’t use this will end up being surpassed by those that do.

Having read the thread, I have noted some points where I think the scenario as posited in the OP would fail, dramatically, if hypothetically enacted any time in the next 50 to 100 years.

This is the first failure point. How do you define “progressive”? How do you define “non-progressive”? How do you define “extremely”? What criteria, beyond propaganda caricature, are used to determine that progressive is good and non-progressive is bad? How do you arrive at a binary good/bad scenario when reality is somewhere in between and not binary at all?

This is the second failure point. With no method to override, we as a people are left with the ultimate form of politics: the use of force, or armed revolt. I believe this would happen in short order in your hypothetical system without an Orwellian totalitarian system of governance complete with Newspeak and all the other bells and whistles of thought control.

Third failure point: what you have proposed is anything but rational, responsible or fair governance, except as you, a party of one, define it. There are, most certainly, large numbers of people who would concur with you on outcome but differ in approach and application, many of whom are “conservative”.

bolding is mine
This is the next failure point. This is false. People like to get instructions and be told what to do in certain circumstances, certainly, but it’s not generally true at all. Also, how are you defining trust, as in a high-trust vs. low-trust society, and why do you think that the US, for example, is a high-trust society? By that criterion alone you have divided the USA into two different countries that just happen to have an unusually large number of points in common and strangely good international relations with each other: basically, the Western United States and the Eastern United States.

  4. NO ONE TALKS LIKE THAT ABOUT COMPUGOD!!!

[HULK] Puny god![/HULK]:smiley:

Here’s the thing, though, and it’s why this won’t work: policy’s not based on facts. Oh, facts inform policy. It’s good to know, when you’re proposing a specific plan, whether it looks like it’s going to have the effects you want it to have, but determining what policies to have is a value choice. You think slavery is right, or you think it’s wrong. You think abortion is right, or you think it’s wrong. You think gays should have the same rights as other people, or you don’t. You think being armed is a fundamental right, or you don’t. You think wealth should be more evenly distributed, or you think it’s wrong to take away somebody’s property and give it to somebody else based on a dream of economic equality. The king rules by divine command or he doesn’t. The laws were given to us by God or they weren’t. My race is naturally morally superior or it’s not.

These are not factual questions, but questions of values, and ones your AI will be no more qualified to answer than you and I. I mean, if you start with the premise, for instance, that humanity is a threat and needs to be destroyed, Skynet is acting rationally.
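
You can even put that premise problem in code. Hand the same trivially “rational” planner a different goal and it cheerfully endorses the opposite action; a deliberately silly Python sketch, with every name invented:

```python
# A trivial planner that picks whichever action its goal rates highest.
# The goal is the premise; swap it and the same machinery "rationally"
# endorses the opposite action. All names here are invented.
actions = {
    "protect humanity": {"human_welfare": 1.0, "threat_removal": 0.0},
    "destroy humanity": {"human_welfare": 0.0, "threat_removal": 1.0},
}

def plan(goal: str) -> str:
    # Perfectly rational relative to its premise, whatever the premise is.
    return max(actions, key=lambda action: actions[action][goal])

print(plan("human_welfare"))    # -> protect humanity
print(plan("threat_removal"))   # -> destroy humanity (hello, Skynet)
```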

Since you mention Donald Trump as a symptom of the problem, keep this in mind:
He is a billionaire, and could hire programmers to write this software to his liking.

The last time we tried something like this, it didn’t turn out so well…

I say we make it a data point.

Sure, have an AI comb through all sorts of evidence before making recommendations to the human decision-makers. But there’s no need to give it unilateral power, and one glaringly obvious reason not to give it unilateral power.

Let it make its case for a course of action. It’ll probably do a better job of it than most people could – and if so, problem solved. But the machine might make a case that’s as flawed as it is unpersuasive – and, if so, problem avoided.

Asimov is way ahead of you: “The Evitable Conflict”.

The problem is not “Garbage In, Garbage Out”; it is “Garbage In, Gospel Out”.

We are already seeing this in “Let the computer drive the car! It will be perfect!” and “The Cloud will know everything and solve all our problems for us!”

Which are simply updates of “the computer says it is so, so it IS so - computers NEVER make mistakes!”.

Ummm, yeah… right… got it. :rolleyes:

Heck no! Vote Commodore 64! LOAD "*",8,1! LOAD "*",8,1!
Seriously though, it’s a good idea in principle, but AI has a loooong way to go before that could work. And hopefully the AI would be programmed somewhere between not letting us leave the house so we won’t get hurt and ridding the Earth of the human plague.

“Welcome to Itchy and Scratchyland, where nothing can possib-lie go wrong. I mean possibly go wrong…That’s the first thing to ever go wrong.”

Again, it is the definition of “best course of action” that is the question. Particularly when there are competing objectives and goals.

That’s the problem with politics these days. It has become a game of extremes, such that people have forgotten that there are “non-crazy” reasons for choosing particular policies.

I’ll give you a couple of easy ones:

  • How does an AI choose between protecting the environment and maximizing standards of living for the most people?
  • How does an AI decide where to draw the line between security and freedom?
  • Where does the AI strike a balance between the good of the individual and the good of the nation?
  • Does the AI put the benefits of the nation ahead of the rest of the world?
  • What happens when a significant number of people decide they still don’t like the decisions and policies the AI made, even if those are in their or their country’s best interest?