Future of AI

With faster computing and large training datasets available today, deep neural networks (which are universal function approximators) can achieve great results. This field has taken off, or is about to. The likes of Elon Musk have started issuing warnings; some others are less concerned as of now. What are your thoughts?

AI will still be programmed by humans, and as long as humans have an incentive to rig the machines to yield the desired results, they will be.

It is difficult to imagine a scenario in which the humans responsible for programming the AI, with the necessary access and funding, will stand back and let it reach conclusions which are not in the personal interests of the men locked inside the building with their security clearance. I see no imaginable trajectory along which humans with self-interested goals will permit empirical AI to override them.

You leave AI alone.

**BeepKillBeep** teaches AI, so I’m sure they would be a good source of info.

My hope is that it accelerates, because humanity has a lot of problems, and having smart AI will make them easier to solve. I have no idea about a timeline, though. However, the consensus seems to be that the odds we will hit the year 2100 with human biological cognition still the dominant form of cognition are extremely low.

I have no idea what to do about AI that may not share our values or goals, though. That is a serious threat. Supposedly decentralization would help (millions of AIs), but if one AI is steps above the others, how do you ensure the others can stop it? Who knows.

Skynet knows.

Like most big innovations, it’ll solve ten old problems…and cause nine new ones! We’re still ahead of the game, but it will be a different “same old world.”

Self-driving cars? Great! That’ll save tens of thousands of lives every year.

AI customer service reps? Wah! Tens of thousands of jobs will be lost.

Ya pays your money and ya spins the wheel.

This about sums it up for me.

Self-driving cars might cost as many or more lives through malfunctions. And hacking.

Yeah, but the implication I get from that article is that it isn’t very useful to top-notch doctors right now; it has more use in developing nations, where physicians are more pressed for time.

Also, AI advances rapidly. I remember things like Siri sucking hard five years ago; now programs like that are much better. That is the main issue with AI: it can go from barely functional to better than the best human in a few years. Watson was garbage at Jeopardy! at first, but after a few years of training it could hold its own against world champions.

Self-driving cars could barely go five miles without an accident in 2006; by 2016 they were safer than human-driven cars. Human skills are not improving much, while AI skills improve rapidly. It is only a matter of time before AI skills surpass ours across the board, and then probably vastly surpass what we are capable of as humans.

Thanks for the replies, everyone. Right. Also, humans are good at natural perception tasks; human-level performance there is close to Bayes optimal performance. Of course, as you said, AI has become and will become much better there as well. And in many other areas, especially with structured data, AI already far surpasses humans: online ads, product recommendations, loan approvals, logistics, etc.

Good points. I am not sure how decentralization helps. After AI has reached a certain advanced level, decentralization possibly harms. Control centralized in the hands of a select few organizations/governments/companies could be better once AI reaches a certain advanced level (for example, when machines start having self-awareness). I may be wrong.

When the Intel 386 came out (32-bit words!) USA Today said that with all this computing power, AI was just around the corner.
Neural networks are just a good way of doing machine learning, especially because they can be implemented in hardware. But if by AI you mean self-aware computers, neural networks are not the way it is going to happen. Neither is bigger hardware. We will need a revolution in how we understand intelligence.

I took the AI class at MIT in 1972. Almost everything that was at the frontiers of research then you can buy now - solving equations, getting routes from one place to another, speech recognition. But we are no closer to true AI than we were then.

Hacking…maybe. But malfunctions? Nah. How many accidents, right now, are caused by malfunctions? Brakes failing, engines seizing, steering going wonky, etc.? Some, yes, but nowhere near as many as from driver error. That pattern would almost certainly continue with self-driving.

I suspect deaths from hacking will also be fairly minimal. We’ll have one or two nasty incidents every year…but nothing on the level of the hecatomb we experience every single year, right now.

I don’t think you understand how machine learning works. The neural network is programmed by the data, not by people. If you want a specific result, you modify the output, not the programming, or you feed in carefully selected data sets.
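
To make that concrete, here’s a minimal sketch (made-up “loan approval” numbers and a bare-bones perceptron, not any real system): the training code is identical in both runs; only the curated data differs, and the learned rule follows the data.

```python
# A minimal sketch (invented numbers): the training code never changes;
# only the data does, and the learned behavior follows the data.
import numpy as np

def train_perceptron(X, y, epochs=100, lr=0.1):
    """Learn a linear decision rule from labeled examples."""
    Xb = np.hstack([X, np.ones((len(X), 1))])    # append a bias column
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        for xi, yi in zip(Xb, y):
            pred = 1 if xi @ w > 0 else 0
            w += lr * (yi - pred) * xi           # nudge weights toward the label
    return w

# Toy "loan" features: [income, debt], identical in both runs.
X = np.array([[1.0, 0.2], [0.9, 0.3], [0.2, 0.9], [0.3, 1.0]])
labels = {"dataset A": np.array([1, 1, 0, 0]),   # approve the low-debt rows
          "dataset B": np.array([0, 0, 1, 1])}   # same rows, flipped labels

applicant = np.array([1.0, 0.25, 1.0])           # new applicant (+ bias term)
for name, y in labels.items():
    w = train_perceptron(X, y)
    # The verdict depends entirely on which labels the model was fed.
    print(name, "approves:", bool(applicant @ w > 0))
```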

Current systems can be hacked in any case. If a hacker turns off the brakes or causes the car to accelerate at top speed, it won’t matter much who or what is driving.
That’s the interconnected world/Internet of Things problem. That is something to be scared about.

Wow. That’s great!! I suddenly got interested in ML and studied a lot in the last couple of months (multivariate calculus, inferential statistics, probability, Andrew Ng’s courses on ML and on deep learning, Python…).

Quite interesting! I like to go through and appreciate the derivations of various things, such as the normal distribution and its probability density function, directional derivatives, the directions of steepest ascent and descent, backpropagation in neural networks, etc.
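
For anyone else working through the same material, the steepest ascent/descent fact falls right out of the directional derivative (a standard textbook result, sketched here in a couple of lines):

```latex
% Directional derivative of f at x along a unit vector u:
D_{\mathbf{u}} f(\mathbf{x})
  = \nabla f(\mathbf{x}) \cdot \mathbf{u}
  = \lVert \nabla f(\mathbf{x}) \rVert \cos\theta
% cos(theta) is maximized at theta = 0 and minimized at theta = pi, so:
\text{steepest ascent: } \mathbf{u} = \frac{\nabla f}{\lVert \nabla f \rVert},
\qquad
\text{steepest descent: } \mathbf{u} = -\frac{\nabla f}{\lVert \nabla f \rVert}
```

That minus sign is the whole reason backpropagation is useful: it computes the gradient of the loss efficiently, and gradient descent then steps the weights along the negative gradient.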

AI has tremendous potential to help humanity. Most current AIs are best thought of as computational aides to human thought or abilities. In other words, AIs aren’t really that smart; they can be very good at doing very specific tasks. But so what? Lots of software is good at doing specific tasks, so what makes AI special? With traditional software, the programmer determines how to solve a problem and encodes that process. With AI, the idea is that you don’t need to tell it exactly how to solve something; you only need to define the problem in a solvable way, and the AI will determine how to address it. That’s what makes AI so valuable as an aide to human thought. If we could think of how to solve a problem, we wouldn’t need an AI to help us; we would just need computational power and a traditional program.
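
A toy illustration of that contrast (invented numbers): fit a line two ways, once by writing down the known least-squares solution (the programmer encodes *how*), and once by only defining a loss and letting a generic search find the parameters (the programmer encodes *what*).

```python
# Toy data, hypothetical task: traditional code encodes HOW to solve the
# problem; the ML version only defines WHAT a good solution looks like.
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 3.1, 4.9, 7.2])       # roughly y = 2x + 1

# Traditional approach: the programmer knows the least-squares procedure
# and invokes it explicitly.
slope, intercept = np.polyfit(x, y, 1)

# "Define the problem, not the procedure": state a squared-error loss and
# let a generic optimizer (plain gradient descent) find the parameters.
w, b = 0.0, 0.0
for _ in range(2000):
    err = (w * x + b) - y                 # residuals under the current guess
    w -= 0.01 * 2 * np.mean(err * x)      # gradient of mean squared error wrt w
    b -= 0.01 * 2 * np.mean(err)          # gradient wrt b

print(slope, intercept)                   # ~2.04, ~0.99 (computed directly)
print(w, b)                               # ~same numbers, found by search
```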

So, what are the realistic dangers of AI? It isn’t AI that is intentionally malicious or hostile to humans, i.e. Skynet. It isn’t an AI that outgrows its programming and views itself as superior. No, the danger with AI is really the same as with any software, namely flaws. Quite a few textbooks are starting to address this, pointing out that as AIs become more integrated into systems, part of their programming needs to be an understanding of human interests, so that they aren’t accidentally hostile to humanity. For example, suppose you put an AI in charge of bread production, in such a way that control can’t easily be taken back by people, and you tell it to maximize profit. If the AI learns that when it barely produces any bread the price rises a lot, and that this is the point of maximal profit, well, that’s not good. It is kind of a silly example, because a real example would be more complex and buried under layers upon layers of obfuscation, and that’s why it could happen. A real-world example is the AI bidding war that happened on Amazon, resulting in the book “How to Survive Personal Bankruptcy” being listed for USD$2.3 million.
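
Here’s a deliberately silly sketch of the bread example (every number invented): hand an optimizer “maximize profit” with no other constraints, and starving the market can come out as the optimal policy.

```python
# A toy sketch (all numbers invented) of an under-specified objective:
# "maximize profit" and nothing else.
demand = 100                        # loaves people actually want per day

def price(q):
    return 10.0 / q                 # scarcity pricing: fewer loaves, higher price

def profit(q):
    return q * price(q) - 0.02 * q  # revenue minus a small per-loaf cost

best_q = max(range(1, demand + 1), key=profit)
print("profit-maximizing output:", best_q, "loaves")   # -> 1 loaf
print("unmet demand:", demand - best_q, "loaves")      # -> 99 loaves
```

The optimizer isn’t malicious here; the specification was just incomplete.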

The good news is that these sorts of issues are being recognized and are starting to be addressed in a more systematic way. The bad news: you can’t really prevent some programmer from writing bad code, and access to AI routines is only going to become more common; however, that’s true even for non-AI software. If a bad programmer writes poor code for a nuclear safety valve, that’s not good. We prevent such things from happening right now by having good software engineering practices for writing low-failure-rate software, but there’s nothing keeping somebody or some company from writing catastrophically bad code. Maybe access to AI makes such accidents worse, but really, not by much. If somebody crashes the financial market with a badly written AI, or with badly written traditional code, does it matter what algorithm they used? The market is still crashed either way. Maybe in some small way, because it is easier to use AI poorly due to a poor understanding of how it works, but to me this is a very fine line.

Overall, I wouldn’t lose any sleep over it, and in my view, AI is more of a boon than bane.

No, the point about neural networks is that they program themselves, and no one understands the programming. They share this with genetic algorithms; in fact, many neural networks are a version of genetic algorithms. What we do is set up the desired outcomes and let the computer use an incremental hill-climbing technique to find the program.
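
A toy version of that idea (hypothetical target string, standard random-mutation hill climbing): we specify only the desired outcome and a score; nobody ever writes the solution itself.

```python
# A minimal sketch of "set up the desired outcome, let hill climbing find it".
import random

TARGET = "BREAD NOT SKYNET"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def score(candidate):
    # The only thing we specify: how close the candidate is to the goal.
    return sum(a == b for a, b in zip(candidate, TARGET))

current = [random.choice(ALPHABET) for _ in TARGET]
while score(current) < len(TARGET):
    mutant = list(current)
    mutant[random.randrange(len(TARGET))] = random.choice(ALPHABET)
    if score(mutant) >= score(current):   # keep any non-worsening mutation
        current = mutant
print("".join(current))                   # reaches the target eventually
```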

Consider this: we are programmed by a genetic algorithm (literally) that occasionally throws up people like that nut in Las Vegas.

My relatively uninformed thoughts on this issue are that AI will erode the value of human labor; it won’t bring the value of labor to zero, but a large middle class will not necessarily be a feature of a society with advanced AI.

I kind of look at it this way - a large portion of the world’s population still shits in rivers despite the invention of indoor plumbing decades ago. By the same token, AI will be a factor in the lives of some of the population, but the rest will remain in an unadvanced state of existence. The people at the top probably won’t have much to do with the river shitters at the bottom; they probably won’t think much about them one way or the other, much the same way people from the first world don’t really spend much time interacting with or thinking about people from the third world today - current political fashion trends notwithstanding.

Everything will be fine and copacetic as long as the less technologically advanced portion of the population doesn’t start shitting in the wrong rivers.

Even worse, if the system does something flaky, no one knows if there is a bug or if the system just produced an unexpected output based on its input data and learning.

I just turned in a column for a special issue on self-aware hardware systems, where the system can modify its goals. We can’t even verify big vanilla hardware systems - it is scary to think of verifying ones that can change.