This was my thought, but I’ve played Go like a few dozen times in my life (which is admittedly more than Chess) so I didn’t feel qualified to make that judgment.
I suspect that in Go good players think a bit differently at times. Note, I am not a very good player; at my best I was probably 3 or 4 kyu. Anyway, Go has a couple of concepts that are odd to chess players. For example, there is aji. Aji is sort of hard to explain, but the best translation I ever came up with is ‘fuck with’. If you have a position with bad aji, it means your position has a weakness that can be exploited, though it is a bit deeper than that. At times players will sacrifice stones to give their opponent bad aji; playing stones that give your opponent bad aji can be a good strategy, even though those stones may end up being sacrificed. Aji is very useful when trying to kill groups. An aji move may, at the time it is played, be a lower-probability move than another play on the board, yet turn out to be very important later. If this isn’t clear I apologize; aji is something I know when I see it but find hard to explain well.
Also, if you have a position with bad aji you will likely end up playing gote, which basically translates as losing the initiative.
So, yes, there may be huge differences between a 95.1% move and a 95% move. And a low-probability move, a sacrifice stone or two, may end up being hugely important. In fact, it appears that in the last game Sedol started by apparently sacrificing some stones from move 40 through 48, then played move 78 and cashed in. From the outside those moves looked weak; pro commentators thought the game was lost until move 78.
Slee
Long interview with the co-founder of DeepMind, Demis Hassabis. He talks a bit about his work in gaming; he was a lead AI programmer for Black & White.
I would imagine a game like Civilization would be a vastly bigger challenge for an AI compared to Go. Someone should create a standard map and rule set which AI researchers could work on.
What do you mean by “a game like Civilization”? If you’re talking about a multiplayer game between multiple intelligent (natural or artificial) opponents, then it all comes down to diplomacy: if two players agree they don’t want a third player to win, then that third player is going to have a very hard time; if two players want to guarantee a loss to each other, then they’re both likely to get it; and so on. With any AI short of passing a Turing test, how well an AI does in such a game will be dominated by how the other players feel about the prospect of an AI winning.
On the other hand, if you’re talking about a single-player game, with a single intelligent player up against a number of dumb computer-controlled opponents, and comparing the case where that single intelligent player is human vs. where it’s an AI, then I imagine that the computer would do pretty well. Success at Civilization in that mode is more about being meticulous about micro-management than it is about cleverness. Yeah, there would still be some work to be done in implementing all that humans know about how to play the game, more work than most programmers would consider worthwhile for such a relatively niche game, but it would be a lot less work than chess or go.
Game 5 in progress. Very exciting fight. Lee Sedol is in time trouble again, in his last period of byo-yomi, while AlphaGo still has about 4:30 on its clock.
And this is a nice touch… From the MBC news twitter (in Korean), we learn that (translated, obviously):
AlphaGo, the AI that beat Lee Se Dol 9-dan, is now known as “AlphaGo 9-dan”. Hong Suk Hyun, the president of Korean Baduk Association, will be conferring the honorary title during the “Google Deepmind Challenge Match” Award Ceremony that will take place on the 15th.
The honorary dan degree is given to those who display high proficiency and merit in the game of Go and have not graduated from the official teaching program (for professional players), or have not passed through the standard tournaments (for amateur players).
The game, and the match, is over. AlphaGo wins the last game of 5.
Final result: AlphaGo 4 - Lee Sedol 1
All games ended in resignation. The commentators said “these are games that will be studied for years to come”.
A tremendous return to form on the part of AlphaGo. Interestingly enough, the computer definitely plays better with white (going second) than with black.
Well it’s interesting to speculate.
One thing to note with chess is that Deep Blue’s win over Kasparov was somewhat controversial. Even with very powerful computers, you have to use certain heuristics to cut down the breadth or depth of the search space. Prior to Deep Blue a number of strong engines had been developed, but they all tended to get routed eventually once humans noticed patterns in their play.
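Just to make “cut down the search space” a bit more concrete, here is a rough sketch of the classic approach: depth-limited minimax with alpha-beta pruning, falling back on a heuristic evaluation at the cutoff. The little take-1/2/3 pile game and its evaluate() are invented purely for illustration, and real chess engines layer move ordering, transposition tables, quiescence search and much more on top of this skeleton.

[CODE]
def alphabeta(state, depth, alpha, beta, maximizing):
    # Search to a fixed depth; score the leaves with a heuristic evaluation.
    if depth == 0 or not state.legal_moves():
        return state.evaluate()
    if maximizing:
        best = float("-inf")
        for move in state.legal_moves():
            best = max(best, alphabeta(state.play(move), depth - 1, alpha, beta, False))
            alpha = max(alpha, best)
            if alpha >= beta:   # opponent would never allow this line: prune the rest
                break
        return best
    else:
        best = float("inf")
        for move in state.legal_moves():
            best = min(best, alphabeta(state.play(move), depth - 1, alpha, beta, True))
            beta = min(beta, best)
            if alpha >= beta:
                break
        return best


# Toy stand-in for a "position" (purely illustrative; not chess, not Go):
# a pile of stones, each player removes 1-3, taking the last stone wins.
class Pile:
    def __init__(self, stones, to_move_is_max=True):
        self.stones, self.to_move_is_max = stones, to_move_is_max
    def legal_moves(self):
        return [n for n in (1, 2, 3) if n <= self.stones]
    def play(self, n):
        return Pile(self.stones - n, not self.to_move_is_max)
    def evaluate(self):
        # crude heuristic: an empty pile is a loss for the player who must move
        if self.stones == 0:
            return -1 if self.to_move_is_max else 1
        return 0


print(alphabeta(Pile(10), depth=6, alpha=float("-inf"), beta=float("inf"), maximizing=True))
[/CODE]

The pruning is the key trick: whole subtrees get skipped once it’s clear the opponent would never let the game go there, which is what lets an engine search deep enough for the heuristic evaluation at the leaves to be useful at all.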
Kasparov never had the luxury of seeing any of Deep Blue’s past games, and the machine was disassembled immediately after the match, which is why some people debated for a while whether AI really had bested humans at chess.
Nowadays this is all academic, as no one would dispute that the most powerful chess engines can obliterate the best human players, and we can’t really perceive patterns in their play any more.
But it does imply to me that a very strong neural net could get the better of a brute-force engine, because it’s learning new patterns and the brute-force engine is not (not in a persistent way, in any case).
It’s also exciting to think that we could now progress in our understanding of games like Go, and other domains, beyond human insight. What I mean is: chess computers have taught us a lot about chess strategy even though they may not be so clear on why a particular move is good (a forcing move to win material is one thing, but it can be hard to decipher exactly why a particular structural move is so optimal).
But OTOH, with an AI that works on learning patterns, we conceivably could pull out new heuristics that no human had realized.
The bigger issue I’ve seen raised was about the pacing of the games. Computers don’t get tired, but humans do, and the second Blue-vs.-Kasparov match had much less time for rest than is usual in a chess match.
The core of the problem was that the second match was sponsored by IBM themselves, and so they set all of the parameters, and had an incentive to stack the deck in the computer’s favor as much as possible.
To start off with I would use a very small map for two players, with only the domination victory condition enabled. Civ 5 would be good because you could have city-states for some additional diplomacy without having other players.
Even this very simplified setup would, I suspect, be very difficult for an AI against the best human players. First of all, even a very simplified Civ is many orders of magnitude more complicated than Go: the map is larger, you have heterogeneous terrain, heterogeneous units you can choose to create, and fairly complicated tactics between units. I think AIs would struggle with the long-term planning for a war: you have to choose the optimal moment to strike, target the key techs, build the right mix of units, pick the best cities to target, and exploit the terrain to the maximum.
An article in New Scientist says that this win is upsetting to South Koreans.
I’m not sure if the entire article can be read without a subscription.
I think it’s really funny that he was so confident of winning. I guess his mouth was writing checks his brain could not cash.
In his place I would also have been confident of winning. I am certain he analysed in detail the games AlphaGo played earlier against the European champion. Looking at them you can see that, at that point, AlphaGo was playing way below the level of a professional 9-dan.
His mistake (and mine, as well – before the match began I was sure that Lee Sedol would win it) was in seriously underestimating the capacity of the AI to train itself and increase its expertise by the equivalent of at least 5 professional dan levels in a few months.
Basically, no human can do that. And the way in which AlphaGo was able to learn and improve itself was way better than many of us expected. I have the feeling that Lee Sedol tended to think of AlphaGo as a machine being “fed” expertise and know-how directly by its human programmers (which is an inherently slower way of doing things) instead of as a system that was learning by itself much faster than many people thought possible.
(Let us not forget that, until now, all Go-playing programs had crashed and burned spectacularly when playing against truly high-level masters).
So, Lee Sedol displayed some arrogance before the match, but to a point it is understandable and even excusable. He had reasons to feel confident. In the end he was wrong to feel so. But his attitude was not just unchecked arrogance.
I am sure that right now he is kicking himself for speaking so carelessly before the match, though.
I thought the earlier version also beat a highly ranked player 5-0?
AlphaGo beat Fan Hui, a 2 dan. The ranks in Go run (weakest to strongest) from 15 kyu up to 1 kyu, then 1 dan to 9 dan for amateurs; for pros it is 1 dan to 9 dan. So AlphaGo beat a low-ranking dan. The rating numbers look like this: 2200 EGF (EGF ratings are similar to Elo in chess) for a 2 dan vs. 2940 for a 9 dan. Though the rankings get kinda wacky across the different governing bodies. Link.
Slee
Level 2 guy was the Euro champion that lost to AlphaGo 5-0?
Also he probably did not realize that they almost certainly ran simulations of games to improve the learning. And they can run those simulations 24/7 on super fast systems, even faster than what they used to play him. That’s how the system got so much better so fast.
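For what it’s worth, here’s a toy sketch of that “simulate games around the clock to get better” idea. None of this is DeepMind’s actual pipeline (AlphaGo combines deep policy/value networks with Monte Carlo tree search); it’s just the bare self-play loop in miniature, with every name and number invented for illustration.

[CODE]
import random
from collections import defaultdict

# Preference weights for (stones_left, how_many_to_take) pairs, all starting equal.
weights = defaultdict(lambda: 1.0)

def choose(stones):
    moves = [m for m in (1, 2) if m <= stones]
    return random.choices(moves, weights=[weights[(stones, m)] for m in moves])[0]

def self_play():
    # One game of a tiny "take 1 or 2 stones, whoever takes the last stone wins" game,
    # with the same policy playing both sides.
    stones, history, player = 5, [], 0
    while stones > 0:
        move = choose(stones)
        history.append((player, stones, move))
        stones -= move
        player = 1 - player
    return history, 1 - player      # the player who took the last stone is the winner

def train(games=5000):
    for _ in range(games):
        history, winner = self_play()
        for player, stones, move in history:
            # Reinforce the winner's choices, slightly discourage the loser's.
            weights[(stones, move)] *= 1.05 if player == winner else 0.97

train()
print(choose(5), choose(4))         # after training these tend to be the stronger moves
[/CODE]

After a few thousand self-played games the table of move preferences tends to drift toward the moves that keep ending up on the winning side, which is the general flavor of how self-play can improve a system without feeding it any new human games.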
Yes, the European Champion was a 2 dan professional. Lee Sedol is a 9 dan professional (the highest rank).
And, as I mentioned earlier in the thread…
[QUOTE=JoseB]
His mistake (and mine, as well – before the match began I was sure that Lee Sedol would win it) was in seriously underestimating the capacity of the AI to train itself and increase its expertise by the equivalent of at least 5 professional dan levels in a few months.
Basically, no human can do that. And the way in which AlphaGo was able to learn and improve itself was way better than many of us expected. I have the feeling that Lee Sedol tended to think of AlphaGo as a machine being “fed” expertise and know-how directly by its human programmers (which is an inherently slower way of doing things) instead of as a system that was learning by itself much faster than many people thought possible.
[/QUOTE]
Sounds like he was expecting a high-school-level basketball team, but what he got was an all-time NBA All-Star team with players like Jordan, Magic, Bird, Jabbar, Barkley, Curry, etc.
I guess the next guy will know what he’s up against.
And of course now AlphaGo has been awarded an honorary 9th dan.
With that phrasing, you can see why this may have rattled Korean society somewhat. It’s like giving your employee of the month award to an inanimate carbon rod.
Some of you think that this has huge implications. I’m interested in hearing what exactly these implications are.
As early as my PCjr salad days, I had plenty of evidence that computers were eventually going to become nigh-unbeatable in chess. The evidence being that I never once got a single goddam win against any goddam computer program on any difficulty setting ever ever EVER EVEEERRRRRRR (and believe me, I tried). The fact is, whenever both sides start on equal footing, on a playing field with a finite number of positions and possibilities, and are required to proceed at the exact same rate, of course an emotionless machine with a lightning-fast processing speed is going to have the advantage. Oh, sure, there were a few kinks to be ironed out with endgame play and advanced openings and whatnot, but the age in which top masters would be dropping like flies was never seriously in doubt.
This was pretty much why I utterly loathed the fact that board games were required for an accomplishment in Assassin’s Creed 3. The moment I saw that, I was like, are you flippin’ kidding me?? There’s no bloody way in hell I’m taking one ha’penny off of ANYONE! Oh, I tried. Yeah, just had to play the fool for old time’s sake, I suppose. You can guess the result. The absolute, utter impotence I felt (and this is Connor we’re talking about, y’know, the guy who can sneak up on a rabbit, win a fight against a bear, and mow down ten guards without getting his shoes dirty) was the main reason I dumped that game in the trash.
So while mastering go is undoubtedly a step forward, I can’t say I’m all that surprised. Programmers simply took a field where computers already had enough firepower to take down Godzilla, Rodan, or Mothra but not all three at once and strapped on enough nuclear missile launchers to make up the difference.
So what now? My guess is that the video game industry will take notice and try to apply these advancements to their own strategy games, but the likes of Warcraft or Mobile Strike are far richer and more complex beasts than a flat board and identical pieces. Then there is the issue of whether they even want nigh-unbeatable opponents… is the “hardcore” market really big enough to support such a thing? And what about other applications: can success at board games translate to robotics, or military intelligence, or other practical uses?
The key point here, I think, is that not only has a computer beaten a human at Go, but it was a computer that wasn’t programmed to play Go. It was instead programmed to be able to learn, and then it learned how to play Go. If it can do that, then what else can it learn? And what, in turn, can it teach us?