My working life is directly involved in using machine learning, predictive analytics, neural networks, and “AI” to solve problems or drive various business impacts, so I feel qualified to comment a little here.
I think there are really two problems stemming from AI that people are worried about, and these two problems sometimes get conflated.
The first problem is jobs and capital owners, as mentioned by LSLGuy. In the short term (the last 5 to next 10 years), the techniques and tools are getting better on a pretty steep curve, and vast new classes of previously too-complex problems are opening up as amenable to solving. This is probably good in the aggregate. It will give us some great things like self-driving cars, better supply chains and logistics, better medical diagnoses, better operations, and a bunch of other stuff that Stranger and SamuelA and Wesley Clark among others mentioned earlier. But it will also eliminate vast swathes of jobs, both white and blue collar, because jobs that were formerly too complex either in 3D space or in terms of knowledge background and business processes will suddenly be amenable to automation and expert systems. This is happening today, and it’s part of the business impact that my team and I drive, along with lots of other data scientists and consulting firms. And this is only going to accelerate in the next 10 years, because it already drives tens to hundreds of millions of dollars of value annually for large companies, and as the solution space expands, that value expands accordingly.
Economically, the folks paying for this automation are capital owners, whether individuals or corporations, and all the financial benefits from these jobs being automated away will go to those capital owners, while all the problems from the vast swathes of newly unemployed and unemployable-at-a-living-wage folks whose jobs no longer exist will be socialized to all of us. Although I have my doubts and reservations about this problem actually being solved given our current political landscape, it is at least a solvable problem, in the sense that higher taxes and a Basic Income could theoretically address it.
The second problem is unfettered general “strong” AI with physical execution capability, aka Skynet. For those of us who enjoy reading authors like Peter Hamilton, Neal Asher, and Iain Banks, who postulate pan-galactic post-scarcity societies with strong AIs of all levels peacefully coexisting with humans and other intelligent species, it has likely occurred to you that the general theme of active AI benevolence towards humans and organic creatures is an absolute requirement for those societies and stories to exist. If it didn’t hold, humans would have been wiped out, either intentionally or as an unfortunate byproduct of some larger project, given that these are godlike meta-beings whose motivations and thought processes have the same relation to us as ours do to paramecia.
This second problem is the harder one, because there’s really no way to guarantee that any strong AI created will indeed be actively benevolent towards us, or even that in the aggregate there will be enough benevolent ones vs. antagonistic or indifferent ones to ensure our survival. And per LSLGuy, because there are such huge advantages available, it is a nigh-certainty that if it is possible to create strong general AI with physical execution capability, it will be done somewhere by someone. At that point the cat’s out of the bag, and we just have to hope and pray.
Although this problem is a lot further out than the first one, I think it’s ultimately the harder problem, one that doesn’t really have a solution even theoretically as we currently stand, barring vast changes in human nature and societal organization.
I suppose it’s possible to thread that needle given a long enough time frame before it happens and enough intermediate steps of progressively augmented humans and the different societies they form, so maybe that’s what we should be hanging our hats on. Everyone be sure to invest in any mind-machine interface companies as they come up, because that’s one eight ball we should all try to stay ahead of!