What if we let AI take over the legislative role

There’s probably already a term coined for this, but…

What if Artificial Intelligence took over the role of politicians, while the executive function was carried out by the people themselves - direct democracy.
The AI’s role would be to research and advise.

Based on the current leaders of the UK and the USA, you’d need to write a program that:

  • constantly lied :smack:
  • produced slogans, not policies :confused:

… no, I can’t go on - it’s too depressing. :mad:

Just how do you envisage direct democracy by the people working on the application/interpretation of law to individual cases - say, in immigration or welfare services?

And how could AI just be doing research and advising if it’s also writing legislation?

It might (just) make more sense to reverse your suggestion, and have AI used in the administration of human-made laws. But “direct” democracy isn’t necessarily better than deliberative representative democracy at resolving clashes of interests and settling the fine detail of legislation - it’s best reserved for legitimising a proposal that has already passed through the deliberative/representative process. Meanwhile (as I understand it) AI is only as good as its original designers, and has been known to develop a tendency to embed within its processes the assumptions and prejudices already affecting whatever system you might apply it to.

Develop the AI that can do this, and then we’ll talk.

Piece of cake. All I need is a research team of my closest friends and family, let’s say $1,000,000,000,000, and no questions asked as to why all of the research is being done while we travel around the world (and, also no questions asked as to why it seems to be used to develop killer robots beep). :wink:

But Chronos is correct. The best AI in the world is not what most would consider intelligent. It is really a set of computational tricks used to solve problems without the need to write explicit logic for them (even with this definition I’m cheating quite a bit, since that’s more “machine learning”, which is a branch of AI, but probably the one most people are familiar with).
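A toy illustration of that point in Python (the spam-score framing and all the data are invented for the example): instead of hand-writing the rule “spam if score > 5”, the program infers a decision threshold from labeled examples.

```python
# Toy illustration: "learning" a rule from examples instead of writing it.
# The rule "spam if score >= 6" is never coded; the threshold is inferred
# from labeled data. All data here is made up for the example.

def learn_threshold(examples):
    """Pick the threshold that best separates the labeled examples."""
    scores = sorted(s for s, _ in examples)
    best_t, best_acc = scores[0], 0.0
    for t in scores:
        acc = sum((s >= t) == label for s, label in examples) / len(examples)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

# (score, is_spam) pairs -- invented training data
training = [(1, False), (2, False), (3, False), (6, True), (7, True), (9, True)]
t = learn_threshold(training)
classify = lambda score: score >= t  # the "learned" rule
```

The explicit logic lives in the learning procedure, not in the rule itself - which is exactly why the training data matters so much.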

While undoubtedly an AI could be created to provide some kind of advice on a variety of issues (or, more likely, a suite of AIs that each advise on a narrow field), it would be difficult to create the kind of planning and writing required to draft legislation and govern. While AIs can create plans, planning is not something at which they excel, and this is typically where the programmer enters the picture when creating an AI. The goal, and how to assess whether the goal has been reached, is the part that is typically supplied by humans.

Of course, if we’re talking hypotheticals, I don’t think direct democracy is sustainable. Inevitably, voting blocs would form among people with common interests; i.e., political parties.

(Missed the edit window)

In other words, it might be possible to develop an AI that could produce a budget if told to accomplish certain goals for that budget; however, deciding what those goals should be is much trickier (and yes, there’s a bit of an infinite loop here: it might be possible to develop an AI that could decide what the goals for the budget should be … given some societal goals provided as input*, but currently goals are needed as input).

    * Note: this would be exceptionally difficult. The kind and variety of data required to do this would dwarf anything currently considered “Big Data”, which is also an area where AI currently struggles.
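A minimal sketch of that dependence on human-supplied goals (the program names and weights are invented): a “budget AI” here is nothing more than an allocator that mechanically follows whatever goal weights people feed it.

```python
# Sketch: a "budget AI" that needs goals supplied as input.
# Goal weights and program names are invented for illustration; the
# allocation follows mechanically from the human-chosen goals.

def allocate_budget(total, goal_weights):
    """Split `total` across programs in proportion to their goal weights."""
    weight_sum = sum(goal_weights.values())
    return {prog: total * w / weight_sum for prog, w in goal_weights.items()}

goals = {"healthcare": 5, "education": 3, "infrastructure": 2}  # human input
budget = allocate_budget(1000, goals)
# budget == {"healthcare": 500.0, "education": 300.0, "infrastructure": 200.0}
```

All the contested politics is hiding inside `goals`; the code itself decides nothing.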

I’m not sure if I completely understand what you are asking. But companies have already started using AI and machine learning to act as “virtual lawyers”. And AI is commercially available for analyzing contracts.

The role of politicians, as I see it, is largely to influence the “will of the people” and create policy. I can see AI providing the legislative research and modeling potential economic impact. I don’t see it creating policy.

All intelligences need some defined goals and restrictions to give advice. AI is not a solution to how difficult it is to balance goals and properly account for all restrictions.

Yep. An AI could, maybe, determine the probability of a policy meeting a social goal, but not what the goal should be. And it could not give a precise answer: take the economic impact of a policy, say - besides no model being precise, which model to use would itself be contested.
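A sketch of that imprecision (both “models” here are invented toy simulations): two competing models of the same policy return different probabilities of meeting the goal, and nothing in the code can say which one to trust.

```python
# Sketch: probability that a policy clears a target, under two competing
# (invented) models of its economic impact. The models disagree, and
# choosing between them is itself a judgment call.

import random

def simulate(policy_effect, noise, runs=10_000, target=0.0, seed=42):
    """Monte Carlo estimate of P(outcome > target) for one model."""
    rng = random.Random(seed)
    hits = sum(policy_effect + rng.gauss(0, noise) > target for _ in range(runs))
    return hits / runs

model_a = simulate(policy_effect=0.5, noise=1.0)  # one model's assumptions
model_b = simulate(policy_effect=0.5, noise=3.0)  # a noisier rival model
# model_a and model_b differ; neither is "the" answer
```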

Seems like a great solution so long as you don’t know anything about AI.
And the people doing the coding would be the people more or less determining the outcome.
Great idea.

I’m not sure how an AI directed legislature could be considered direct democracy unless it’s running on DC power.

I would still expect resistance.

I’ll write a coulomb about it. I’ll be sure to keep it current.

I don’t think people could ever go along with this. Even if constituents dislike a policy decision, they want to know that it was made by actual humans. Humans they could vote against, criticize, try to lobby in person or over the phone, etc. To be told that Policy X, put in place by an AI machine, is going to put you out of business or severely curtail your rights would be infuriating. At least with the D’s or R’s, there is a human face to put it to.

It’s not even that easy. Using current methods, an “advisor” AI would be trained using some kind of data. And that training data would have to be carefully curated to avoid introducing the biases of the people involved in training the AI.
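A toy demonstration of how that bias gets baked in (the groups and outcomes are invented): if the “model” is just a summary of historical decisions, a skewed history produces a skewed model.

```python
# Toy demonstration of training-data bias: the "model" here is just a
# majority vote over historical decisions. If the history is skewed
# against one (invented) group, the trained model inherits that skew.

from collections import Counter

def train_majority(decisions):
    """Learn the most common decision per group from historical data."""
    by_group = {}
    for group, outcome in decisions:
        by_group.setdefault(group, Counter())[outcome] += 1
    return {g: c.most_common(1)[0][0] for g, c in by_group.items()}

history = [("group_a", "approve")] * 9 + [("group_a", "deny")] * 1 \
        + [("group_b", "approve")] * 2 + [("group_b", "deny")] * 8
model = train_majority(history)
# model == {"group_a": "approve", "group_b": "deny"} -- bias baked in
```

Curating the data to remove that skew is exactly the hard, human, contestable part.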

Good luck with that. Major tech companies with some of the smartest people around have major issues eliminating the hidden (and not so hidden) biases of their R&D staff. Those get cooked into AIs very easily. I can’t see anybody trusting a legislative AI, not just because of the AI part but because of the potential biases (or lack of biases in some cases) of the developers creating it.

As above, the question itself reveals a lack of understanding of what current AI actually is and how it works.

Construct the AIs with governance-optimization tendencies (e.g. one tends to favor states’ rights), people elect the AIs, the elected AIs run their models to determine which policies produce outcomes generally aligned with their tendencies over some time period, then each AI drafts legislation to support that, and/or votes on other legislation based on closeness of fit to its tendencies.

It’s that easy.

What if we let an AI take over the legislative role?

What if we let AI take over the legislative role? It would probably generate bills that look something like these:

This is exactly what the posters above have meant when they say that the person who designs the AI is essentially making the decisions by introducing their own biases. Why is “states’ rights” a “governance optimization tendency”? You evidently think it is, but others will likely disagree with you and want the AI designed differently.

And what does “states’ rights” mean? Which policies would it be simple to program into the AI to maximize your vision of states’ rights? And what if, after the AI has been running for a while, people don’t like the result and want to change it?

All of that is what elections are for.

That’s why I put in this bit, so the people get to choose which AIs they want:

It could end up being a long-running evolutionary process as follows:
1 - Elect AIs
2 - After X years in office, mix up the pool of AIs:
2.1 - Discard the AIs that did not get elected
2.2 - Keep the ones that did get elected, leaving them unchanged
2.3 - Refill the pool by making copies of discarded and elected AIs, mutating+combining them by random percentages
2.4 - Go back to step 1

It’s possible it could stabilize over the years at a decent middle ground on most key issues, but gut feel says it would cause something to tank (e.g. unrecoverable economic free fall).
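The election-as-evolution loop above can be sketched as a plain genetic-algorithm recipe (all the numbers, the 5-value “tendency vector”, and the voters’ scoring rule are invented for illustration):

```python
# Sketch of the elect/discard/mutate loop, with each "AI" reduced to a
# vector of 5 policy-tendency values. Everything here is invented.

import random

rng = random.Random(0)

def make_ai():
    """An 'AI' is just a vector of 5 policy tendencies in [-1, 1]."""
    return [rng.uniform(-1, 1) for _ in range(5)]

def crossover_and_mutate(parent_a, parent_b, rate=0.1):
    """Step 2.3: combine two parents gene-by-gene, then add small noise."""
    child = [rng.choice(pair) for pair in zip(parent_a, parent_b)]
    return [g + rng.gauss(0, rate) for g in child]

def election_cycle(pool, voters_score, keep=4):
    """Steps 1-2.2: keep the best-scoring AIs; step 2.3: refill the pool."""
    elected = sorted(pool, key=voters_score)[:keep]
    while len(elected) < len(pool):
        a, b = rng.sample(elected[:keep], 2)
        elected.append(crossover_and_mutate(a, b))
    return elected

# Voters score an AI by squared distance from an (invented) ideal platform.
ideal = [0.2, -0.5, 0.1, 0.8, -0.3]
score = lambda ai: sum((g - i) ** 2 for g, i in zip(ai, ideal))

pool = [make_ai() for _ in range(10)]
for _ in range(20):  # step 2.4: repeat the cycle
    pool = election_cycle(pool, score)
```

Whether such a pool converges to a decent middle ground or tanks depends entirely on the scoring rule - which is the human input all over again.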

It means the right of states to do with whatever I agree with, and the right of the federal government to stop them from doing anything I disagree with.