Artificial Intelligence: AlphaGo beats Human at most complex game - now what?

Here is a link from the LA Times: AlphaGo beats human Go champ in milestone for artificial intelligence

So - this feels like a big deal - what do Dopers think about this? A legitimate milestone?

As a dabbler in the game of Go, I’ll have to say it’s a mighty big deal.

I think the people who play chess and Go overestimate their difficulty. They’re not simple games, but they have no elements of probability, hidden information, or social interaction. They’re essentially the equivalent of complex mathematical equations - you have all the information available in unambiguous terms and you need to solve the problem.

The day a computer can routinely beat a human at Diplomacy is when I’ll be impressed.

They are working on it and getting closer. AFAIK there are already AI opponents, though they are just adequate so far; but as an arbiter on the Stabbeurfou Diplomacy site, it seems to me to be a good one.

Artificial Intelligence guys can go here to play and test their ideas: http://www.dipgame.org/

AFAIK the official Hasbro computer release up to 2005 also had an AI to play against, but it was not a good one. Sadly, Hasbro doesn’t seem to have much interest in releasing new versions of the computer game.

Wake me when a computer beats a top human player of Seven Minutes in Heaven.

And me when a computer beats a top human player at Mornington Crescent. :)

Duplicate thread:

But can a computer procrastinate for eight hours while an impending deadline looms? Because until they can do that, I’m not worried.

I follow transhumanist news, but I don’t get why this is a big deal. My view (correct me if I’m wrong) is that Go is more complex than chess, but computers were beating the world’s elite at chess 20 years ago. If 20 years of AI progress means that AI can beat humans at more complex games, I don’t know why that is such a huge deal.

Could an AI beat a human at Illuminati?

How long before an AI entity is declared the legal “Driver” of a car or truck under all circumstances?

Can’t wait for the ID photo for the license…

Mostly because (1) the “inner workings” of the Go-playing machine are completely different from those of existing chess-playing computers, and (2) (in my opinion, the most important point) AlphaGo is based on a generic collection of neural networks that was simply “trained” to play Go. It was not a machine designed from the beginning to play Go, but a machine built from “standard” components that was then trained to play the game.

Machines built on the same architecture are used, for instance, to recognize images. DeepMind has reached an agreement with the British NHS to use their AlphaGo architecture to develop a machine that can be used as a diagnostic aid. Basically, create a very similar machine that is then trained to diagnose medical conditions.

That is the important breakthrough: The development of a somewhat “standard” substrate that can be trained in different ways in order to achieve expertise in different complex endeavours. A big advance in the field of machine learning.
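To make the “standard substrate” point concrete, here is a toy sketch in Python. Everything in it is invented for illustration - the real system uses deep neural networks (plus tree search, for Go), not a little linear class like this. The point is just that nothing inside the trainable component knows about any particular task:

```python
class GenericLearner:
    """A stand-in for a 'standard substrate': nothing in this class knows
    about Go, chess, or medicine. It just maps feature vectors to a score
    and adjusts its weights from feedback."""

    def __init__(self, n_inputs):
        self.weights = [0.0] * n_inputs

    def score(self, features):
        # Weighted sum of whatever features you feed it.
        return sum(w * x for w, x in zip(self.weights, features))

    def learn(self, features, target, lr=0.01):
        # Nudge the weights toward whatever the feedback says was right.
        error = target - self.score(features)
        self.weights = [w + lr * error * x
                        for w, x in zip(self.weights, features)]

# The same class, trained on different data, becomes a different "expert":
go_evaluator = GenericLearner(n_inputs=361)     # 19x19 board features
diagnostic_aid = GenericLearner(n_inputs=2000)  # patient-record features
```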

Thanks! That is informative. A bit scary in its implications as well.

I think the Japanese sexbots have been doing that for a while.

Thanks. I’ve heard (I was never good at math) that game theory can be used in a wide range of real-world settings, and if so, hopefully this kind of device will have real-world applications.

But how do you take something that plays a game on a small board with rules, and apply it to the real world where the rules and goals are not always so concrete?

Also, the AlphaGo machine played against itself possibly millions of times. They had it play itself, and I guess it learned from each game. Well and good, but I’ve seen YouTube videos of people doing this with video games too, like Mario Brothers. I think even some amateurs who did this for fun were able to create programs that learned how to beat the game just by letting them play over and over, and letting the AI know only that the goal is to score the highest score possible. So I don’t see how this is much different: they took an AI and had it play a bunch of games until it learned to be good at it. But the game has a very limited and obvious set of rules to follow, while real life is not that clean.
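For what it’s worth, the “play over and over, with only the score as feedback” trick those amateur videos use boils down to something like this minimal tabular Q-learning loop in Python. The `env` object and its `reset()`/`step()` methods are hypothetical stand-ins for a game, not anything from AlphaGo or those videos:

```python
import random
from collections import defaultdict

# Q-table: estimated long-term score for each (state, action) pair.
Q = defaultdict(float)

ALPHA = 0.1    # learning rate: how fast new experience overwrites old estimates
GAMMA = 0.99   # discount: how much future score matters vs. immediate score
EPSILON = 0.1  # exploration rate: how often to try a random move anyway

def choose_action(state, actions):
    """Mostly pick the best-known move, sometimes explore at random."""
    if random.random() < EPSILON:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def q_update(state, action, reward, next_state, actions):
    """Nudge the estimate toward reward plus the discounted best future score."""
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

def train(env, episodes=100_000):
    """Play the game over and over; the only feedback is the score change."""
    for _ in range(episodes):
        state = env.reset()
        done = False
        while not done:
            action = choose_action(state, env.actions)
            next_state, reward, done = env.step(action)
            q_update(state, action, reward, next_state, env.actions)
            state = next_state
```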

You don’t train the diagnostic machine by having it “play against itself” millions of times. Probably what they will do is something akin to what they did when they wanted their standard collection of neural networks to recognize and classify images: you prepare sets of data and tell the machine, “This is <whatever>,” then provide it with generic sets of data and ask it to decide whether each one is “<whatever>” or not. Indicate to the machine whether it was right for each set of data, let the neural networks rebalance their connection weights to “absorb” this new information, lather, rinse, repeat.

This takes longer than having it play Go against itself, but the end result is that you have a machine that can identify “<whatevers>”.

Prepare diagnostic information in a way that is digestible by the neural networks, give them classes for a few years, and voilà: an expert system that becomes a diagnostic aid for hospitals.

Prepare information about a different subject in a similar way, do the same process, and you get a different expert system in a different field.
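If it helps, the “indicate whether it was right, let the weights rebalance, repeat” loop can be sketched in a few lines of Python. This is a single-neuron toy, purely for illustration - the data and the “<whatever>” labels are made up, and a real diagnostic system would be a deep network trained on enormous datasets:

```python
import math
import random

def predict(weights, bias, features):
    """Squash a weighted sum into a 0..1 'is this <whatever>?' score."""
    z = sum(w * x for w, x in zip(weights, features)) + bias
    return 1.0 / (1.0 + math.exp(-z))

def train(examples, epochs=1000, lr=0.1):
    """examples: list of (features, label) pairs, label 1 = '<whatever>'."""
    n_features = len(examples[0][0])
    weights = [random.uniform(-0.1, 0.1) for _ in range(n_features)]
    bias = 0.0
    for _ in range(epochs):
        for features, label in examples:
            p = predict(weights, bias, features)
            error = label - p  # was the machine right or not?
            # Rebalance the connection weights to "absorb" the correction.
            for i, x in enumerate(features):
                weights[i] += lr * error * x
            bias += lr * error
    return weights, bias

# Hypothetical toy data: two measurements per case, label 1 = "<whatever>".
data = [([0.9, 0.8], 1), ([0.8, 0.9], 1), ([0.1, 0.2], 0), ([0.2, 0.1], 0)]
w, b = train(data)
print(predict(w, b, [0.85, 0.9]))  # close to 1: "this looks like <whatever>"
```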

The interesting thing about AlphaGo is that it was not actually provided with “concrete rules and goals” when it comes to the strategic concepts of the game. It was left to “learn them by itself”: it was given a big collection of games played by masters, and the criterion for “discriminating” moves was, basically, whether a given sequence of moves and board positions ended up winning or losing the game.

Also, remember that AlphaGo uses a generic architecture that happened to be trained to play a game on a small board with rules, but that very same generic architecture can be trained for far “fuzzier” tasks.
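As a crude illustration of “no hand-coded strategy, only game outcomes as the signal,” here is a lookup-table sketch in Python. The record format is invented, and AlphaGo actually learns a neural-network policy rather than a table like this - but the training signal is the same flavor: did the game containing this move end up won or lost?

```python
from collections import defaultdict

# For each (position, move) pair seen in master games, count how often the
# game it came from was won by the player who made that move. No Go strategy
# is coded anywhere; the only signal is the final outcome of each game.
wins = defaultdict(int)
plays = defaultdict(int)

def learn_from_games(games):
    """games: list of (moves, winner) records,
    where moves = [(player, position, move), ...]."""
    for moves, winner in games:
        for player, position, move in moves:
            plays[(position, move)] += 1
            if player == winner:
                wins[(position, move)] += 1

def move_value(position, move):
    """Estimated chance a move leads to a win, from master-game statistics."""
    if plays[(position, move)] == 0:
        return 0.5  # never seen: no opinion either way
    return wins[(position, move)] / plays[(position, move)]
```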

That is interesting, but can it be used in the real world, where the rules are not so easily understood? Can an AI be designed to become better and better at passing college and university level math and science courses by taking tests over and over and learning from the failures? I.e., it takes a graduate-level physics exam, scores a 4%, then has to read text related to the questions it failed. Then give it a totally different test, grade it, and do the same thing. Just keep doing that?

I know there are multiple forms of machine learning out there; I don’t know how they work. It is just interesting to watch from afar, since by the end of the century humans will not be able to compete.

I mentioned before that I’ll be impressed when an AI wins a booster draft night of Magic: the Gathering.

IMO, this is the approach that has the most plausible hope of resulting in an AI entity that could be described as a person - not just because it’s analogous to the way that human minds develop, but also because there can be none of the vacuous argument about how ‘you just programmed it to say that’.

Here is a link to the Great Debates thread about whether we should welcome our robot overlords:

http://boards.straightdope.com/sdmb/showthread.php?p=19184439&posted=1#post19184439

My post there quotes a NYTimes OpEd. It basically speaks to how this Go victory illustrates how computers have overcome Polanyi’s Paradox, the idea that humans know more than they can tell, because of the vast amount of our cognition that we don’t access consciously.