:dubious:
So, people. We could elect people, mix them up, say, every 4 or 5 years, and eventually get to a decent middle ground with generally accepted values in the public space.
AI legislation could be useful.
Elections would consist of voters choosing the training sets for the AI at the beginning of each Congress. The political discourse would become one of defining training sets. That of course is simply goal setting. If Congress could set goals they wouldn’t need AI.
A problem is that much of AI is aimed at stochastic optimization. That’s great for removing drag from airfoil shapes, but optimizing States’ Rights could…
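For what it’s worth, here’s a minimal sketch of what “stochastic optimization” means in the airfoil sense: random hill-climbing on a toy, made-up “drag” function (the function, step size, and iteration count are all invented for illustration, not a real aerodynamics model):

```python
import random

def drag(x):
    # Toy stand-in for an airfoil drag model: a smooth bowl with its minimum at x = 2.
    return (x - 2.0) ** 2 + 0.5

def hill_climb(start, steps=10_000, step_size=0.1, seed=42):
    """Randomly perturb the design; keep a change only if drag improves."""
    rng = random.Random(seed)
    best_x, best_d = start, drag(start)
    for _ in range(steps):
        candidate = best_x + rng.uniform(-step_size, step_size)
        if drag(candidate) < best_d:
            best_x, best_d = candidate, drag(candidate)
    return best_x, best_d

x, d = hill_climb(start=10.0)  # x ends up near 2.0, the minimum of the toy drag function
```

This works because “less drag” is a single number everyone agrees on; there’s no comparable scalar for “better States’ Rights,” which is the whole problem.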
“artificial” people/intelligence, or just AI (as per the topic of the thread).
Or basically “people”.
Sounds like the thread is trying hard to see nails everywhere. Seems like the real problem is the goal-setting. If the goals were already clearly defined, AI would be superfluous: the people could work toward those goals themselves. If they aren’t clearly defined, AI isn’t going to work anyway.
So, the real problem is any training data would already include the biases that a lot of people already consider unacceptable in legislation or would not include those biases they want to favor in legislation. AI isn’t going to solve this. It might make the process more ‘efficient’ in some sense, but until/unless people generally figure out what they want and find acceptable, this is a solution in search of a problem.
I see several problems. What if we agree on the goals (and that is a stretch), we put them all in, and the AI decides the best way to achieve them is to execute all people of Norwegian descent?
Of course, we would then, in a step we should have taken originally, input the Constitution along with all Supreme Court decisions and tell the AI that no proposal can violate these things.
Then the AI recalculates and determines that all non-citizen Norwegians shall have their authorizations revoked and be deported, and that no more Norwegian immigration shall occur. Whew. So we then call an emergency session of the (non-AI or AI?) Supreme Court, which rules that this is an invalid racial classification. That constraint is put back in, and the AI comes up with: no more immigration from anywhere, by anybody.
Let’s say that the result is not something most people want, but it is the finding of the AI that the proposal is the best way of achieving our shared goals. It seems like we only then have two choices: 1) follow the AI slavishly and turn over our democratic powers to it, or 2) overrule the AI because that is not the outcome the people want. But if we do #2, then it seems there is no point to the AI in the first place.
[quote=“WildaBeast, post:17, topic:848106”]
What if we let AI take over the legislative role? It would probably generate bills that look something like these:
Yeah, but I’m all about protecting the rights of the Dental Hygienist Communities. I think the smarty machines got this one right.
AI has already been used to diagnose patients with a higher success rate than human doctors.
Recently, the first AI-‘developed’ drug was produced.
Government is a system like any other, just at a much larger scale and with more parameters.
It can be done; it’s just a matter of when and how we apply it.
You cannot make that assumption at all. Of course, it may be possible, but you cannot simply say that it is just a larger problem. Algorithmically, you cannot be certain that an existing approach to a problem will be applicable to another completely separate problem. For example, it is possible that the search space that would define legislation may not be searched efficiently using current techniques, and there may be no technique that can search it efficiently (No Free Lunch Theorem).
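To make the No Free Lunch intuition concrete, here’s a toy illustration (both landscapes below are invented for the example): the same deterministic local-search strategy that nails a smooth objective makes zero progress on a deceptive “needle in a haystack” objective, because local improvement carries no information there.

```python
def smooth(x):
    # Convex landscape: every local improvement leads toward the global optimum at 50.
    return (x - 50) ** 2

def needle(x):
    # Deceptive landscape: flat everywhere except a single optimum at x = 73.
    return 0 if x == 73 else 1

def local_search(f, start, steps=200):
    """Deterministic hill-climbing over integer neighbors; stops when no neighbor improves."""
    x = start
    for _ in range(steps):
        best = min([x, x - 1, x + 1], key=f)  # ties keep the current point
        if best == x:
            break  # stuck: no neighbor is strictly better
        x = best
    return x

print(local_search(smooth, 0))   # reaches the optimum, 50
print(local_search(needle, 0))   # stuck at 0; never finds 73
```

The point isn’t that legislation is literally a needle-in-a-haystack function, only that a search technique proven on one class of problems tells you nothing about its performance on a structurally different one.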
Can the OP actually come back and engage in the debate? I notice 3 total posts in the user history.
There was a fair amount of discussion in the thread about why this idea is a misunderstanding of what AI is in the first place and why it’s a different problem. But there was pretty much no acknowledgement of that or any engagement with the thread.
Patient diagnosis is comparatively simple - there’s a “correct” solution. Legislation is rarely so simple. There’s no “correct” solution - it depends on what people want, which can change and which can also be different from what they say or believe they want.
So, does the OP have anything more substantive to add to the points that have already been brought up, or are they just interested in reviving the thread with no additional insight or information?
Why don’t we just install neural implants in all citizens and rule by the collective hive-mind? It’s inevitable.
I recall an ancient SF story of a robotic pope. And a newer Firesign Theater gag with a robotic US president - but something slipped. “Oh man, you broke the President!” I think the e-Pope fizzled, too.
Alas, an AI legislature, judiciary, or executive would be susceptible to breakage, hacking, or just plain coding errors. Oops. It’s bad enough when they overcharge my phone bill by $65 trillion. Just see where our taxes go when we’re watched over by machines of loving grace.
Possibly Project Pope by Clifford Simak?
No, I’m vaguely thinking of one maybe 20 years older, 1960-ish. Thanks for trying.