It’s already been mentioned, but I think it’s worth re-emphasizing that computers have been used in this way for a great many decades – decision support systems go back to the 60s, and AI systems have achieved spectacular successes in recent years.
That’s not a “limitation of AI”. If an AI system is trying to find optimal strategies for a particular goal, it may well suggest strategies that aren’t practical for legal or moral reasons if it doesn’t have sufficient knowledge of the law, for instance, or lacks information about human concepts of morality. So one ignores that particular strategy, imposes the appropriate new constraints, and iterates toward a different solution that satisfies them.
Where problems can arise – and we’ve already been there a long time – is when complex automated systems prevent a theoretically simple and obviously logical action from being taken because there’s simply no provision for it and no way to do it manually. This isn’t a problem with AI but just a problem with automation generally, and specifically with poor design.
And there are interesting moral dilemmas that may come up with driverless cars, where the car cannot avoid an accident and has to pick the “least bad” option. The moral dilemma is how you evaluate “least bad” – least bad for the vehicle occupants, or least bad for the totality of other vehicles and people who may be involved? If you’re about to have a head-on collision, should the car veer right where it might hit pedestrians, or veer left and drive off a cliff?
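To make that trade-off concrete, here is a minimal sketch of how a “least bad” choice might be scored, with purely invented harm probabilities and weights; it illustrates the dilemma, not how any real vehicle actually decides:

```python
# Hypothetical "least bad" scoring for a driverless car's emergency options.
# All probabilities and weights below are invented for illustration only.
from dataclasses import dataclass

@dataclass
class Outcome:
    name: str
    p_occupant_harm: float   # estimated probability of serious harm to occupants
    p_bystander_harm: float  # estimated probability of serious harm to others

def badness(o, occupant_weight=1.0, bystander_weight=1.0):
    """Lower is 'less bad'. The weights encode the moral question being debated:
    do occupants and bystanders count equally, or not?"""
    return occupant_weight * o.p_occupant_harm + bystander_weight * o.p_bystander_harm

options = [
    Outcome("stay course into the head-on collision", 0.9, 0.0),
    Outcome("veer right toward the pedestrians", 0.1, 0.7),
    Outcome("veer left off the cliff", 0.95, 0.0),
]

# The "answer" flips depending on how the two groups are weighted --
# the arithmetic is trivial; choosing the weights is the moral dilemma.
for w in (1.0, 3.0):
    least_bad = min(options, key=lambda o: badness(o, bystander_weight=w))
    print(f"bystander weight {w}: choose '{least_bad.name}'")
```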
What if the AI said that people are happiest when they can blame scapegoats for their problems, and therefore you should blame everything on the Jews and round them up and send them to the camps? What if the AI said that people are unhappy more than they are happy, so the way to get the maximum amount of human happiness is human extinction? What if the AI said that religion makes people happy and so you tell them that your decisions are handed down by God, but then the people find out you lied to them and they drag you out into the streets and hang you by your neck from a lamp-post? What if the AI kept posing hypothetical questions?
Well, yeah, but who refutes that sort of talk? I don’t think Bernie Sanders did, and Stephen Jay Gould and Richard Dawkins probably wouldn’t either, even if they went into politics. I suppose if they did, and used a computer that for some reason highlighted a religious passage, and they were so boorish and uncultured as to negate that part of human culture (and I doubt they were/are of that frame of mind), wouldn’t they just chalk it up to a computer glitch?
Suppose the AI indicates you can be more successful in influencing people and implementing programs if you resort to religion.
Suppose the AI indicates you can be more successful in influencing people and implementing programs if you resort to falsehoods and deceptions.
I see no practical difference in meaning between the two.
Clearly someone has programmed the AI to propose solutions without an evaluation of moral or perhaps ethical consequences of those solutions.
As far as I’m concerned, having an “advisor”, either human or an AI, suggest such solutions is OK, so long as those solutions are not simply implemented automatically.
As to the scenario, without knowing the specifics I can’t say which side I would come down on. Differing circumstances might mean I’d make one call in one case and a different call in another.
I don’t need an AI to tell me that when I’m talking to a religious person who has a religion-based objection to something, it helps if I can put my own arguments in terms that their own religion uses. It’s no different from talking to a chemist in formulas, or in English to the people reading these boards. I wouldn’t try to use religious language with someone who doesn’t, nor would I use religion to go “God Smash” on someone, but the first is merely practical and the second is repulsive.
The uses listed in the OP have all been in place for quite a while, but we call them “business software” instead of “AI” and have a human somewhere in the line. That the human is smarter than the machine’s algorithm is generally assumed, but not always true. Computers are used to filter resumes, to automatically produce lists of “things to make / buy / move around” from a list of expected or actual sales… I’m always surprised by how many people are practically drooling on the meeting table when we mention that it is possible to set up safety stocks, or to make some materials be part of those calculations while others are not. These are people who make their living in logistics, often with postgraduate training on the subject, but having a machine do things any housewife does without thinking seems to them the height of scientific advancement. Dude, seriously, stop drooling on your laptop or you’ll need a purchase requisition for a new one…
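Since safety stock came up: here is a minimal sketch of the textbook calculation those logistics tools automate, the standard service-factor formula z * sigma * sqrt(lead time), using invented demand numbers and service level:

```python
# Textbook safety-stock calculation: z * sigma_demand * sqrt(lead_time).
# Demand history, lead time, and service level are invented example values.
import statistics
from statistics import NormalDist

daily_demand = [120, 95, 140, 110, 130, 105, 125, 90, 135, 115]  # units/day
lead_time_days = 7
service_level = 0.95  # accept a 5% chance of a stock-out during lead time

sigma_daily = statistics.stdev(daily_demand)
z = NormalDist().inv_cdf(service_level)           # service factor, ~1.645 for 95%
safety_stock = z * sigma_daily * lead_time_days ** 0.5

reorder_point = statistics.mean(daily_demand) * lead_time_days + safety_stock
print(f"safety stock ~ {safety_stock:.0f} units, reorder point ~ {reorder_point:.0f} units")
```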
You touch on this later, but I’m not sure I get it: what’s the big deal about AI?
Imagine that I’m in a position of authority, and that I have a trusted advisor: PhD in psychology, PhD in political science, supervises a team of expert researchers, you name it. He’s not an artificial intelligence; he’s just intelligent.
If he says that ‘resorting to religion’ will score me some political victories, do you think doing otherwise is ‘admitting a limitation’ regarding mere human intelligence? And if other folks in positions of power get so advised, won’t I be up against the same ‘pressure’ regardless of whether they get that advice from humans or AIs?
Companies already use computers to analyze data to augment decision making. Pretty much anytime you hear terms like “business intelligence”, “data science”, “data analytics”, “decision support systems” or “predictive analytics”, it usually involves using data to augment or support decision making.
Some real world examples of analytics projects I’ve worked on:
-Assisting attorneys in their document reviews by analyzing how they label a sample set of documents and then applying those codings to a much larger population (see the sketch after this list).
-Applying statistical algorithms to predict when your sales reps have the highest likelihood of making a sale.
-Fraud detection
-Making staffing decisions based on current utilization needs and expected future project pipeline.
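As a rough sketch of the first item, assuming scikit-learn and toy data (a real predictive-coding workflow adds sampling protocols and validation that this skips):

```python
# Sketch of "predictive coding": learn from attorney-labeled sample documents,
# then score a larger unreviewed population. All documents here are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Attorney-reviewed sample: 1 = responsive, 0 = not responsive
sample_docs = [
    "merger agreement draft attached for review",
    "lunch menu for the friday team outing",
    "due diligence checklist for the acquisition",
    "fantasy football league standings",
]
sample_labels = [1, 0, 1, 0]

vectorizer = TfidfVectorizer()
X_sample = vectorizer.fit_transform(sample_docs)

model = LogisticRegression()
model.fit(X_sample, sample_labels)

# Apply the learned codings to the (much larger) unreviewed population
unreviewed_docs = ["acquisition financing term sheet", "holiday party rsvp reminder"]
scores = model.predict_proba(vectorizer.transform(unreviewed_docs))[:, 1]
for doc, p in zip(unreviewed_docs, scores):
    print(f"{p:.2f} probability responsive: {doc!r}")
```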
That’s not really how AI works. It’s not like some virtual life coach who follows you around giving advice.
Typically analytics is used to answer a specific business question based on qualitative and quantitative metrics:
“How many people should I hire next quarter?”
“What attributes make a good hire?”
“How likely are certain loss-incurring incidents, and how much capital should we allocate to cover those losses?”
“How likely are people in certain demographics to buy this product?”
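For that last question, a minimal sketch of the kind of tabulation behind it, using pandas and invented records (a real model would control for far more variables):

```python
# Purchase rate by demographic segment -- the raw input behind
# "how likely are people in certain demographics to buy this product?"
# All records are invented illustration data.
import pandas as pd

records = pd.DataFrame({
    "age_band": ["18-34", "18-34", "35-54", "35-54", "55+", "55+", "55+"],
    "bought":   [1,       0,       1,       1,       0,     0,     1],
})

rate_by_segment = records.groupby("age_band")["bought"].mean()
print(rate_by_segment)

# Spotting that one segment buys more is the machine's part; deciding what
# to do with that correlation is still the human's call.
```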
Maybe an AI might come up with a correlation such as “people who are very religious are more likely to purchase XYZ Product”. In a case like that, I don’t think “using religion” to sell a product is fundamentally different from using any other subject.