Computer analysis on board games

Besides chess, have there been any interesting results from doing computer analysis on board games? I’m thinking of games that weren’t solved before, where the analysis showed that one player could force a win or a draw. And especially cases where it has significantly affected how humans play the game.

The obvious other two would be 8x8 Draughts (Checkers for those in the US) and Go.

Go was once considered an almost impossible goal for computers, but the latest AIs are pretty much unbeatable by humans. Their playing style is unorthodox by human standards, but since human players all tend to study one another, the AIs’ influence on human play is likely to be significant.

Draughts/Checkers is a much simpler case, and the game has been essentially solved: with perfect play by both sides it’s a draw. Wikipedia has a useful list of solved games.

That’s no longer considered true. Recent research shows that Go engines can be defeated by playing completely unexpected moves that the engine was unlikely to have ever considered in its training data. There are probably ways of training the engines to mitigate this sort of adversarial attack, though I’m not aware of any research that’s actually tried it.

Is the question about AIs, though (which, by the way, can certainly teach humans new ways of thinking about many games)? Once we reduce a game to a combinatorial or mathematical problem, computers can certainly help (within reason). For Go there was a book published called Chilling Gets the Last Point, which analyzes positions analogous to complicated endgames in chess; at that point no human could defeat the computer unless they had the advantage and played it out perfectly.

This isn’t even a strategy that the computer can’t play well against; it’s basically tricking the computer into forfeiting. I don’t know much about Go, but would simply programming the computer never to pass have negative effects elsewhere?

I wonder if AlphaGo is susceptible to that. If not, then the claim that computers can’t be beaten by humans might still hold.

[Moderating]
“Have computers solved any game that was not previously solved” is an objective question that could fit into FQ, but all the rest of the discussion is better suited for the Game Room. Moving.

This is strange. It looks more like a mismatch between the ruleset the AI trained on and the one used. The adversary’s strategy described in that article involves scattering stones inside the AI’s already established territory. The AI passes, which only makes sense if it’s playing under a ruleset that declares those stones dead at the end; the adversary passes, ending the game, at which point those stones neutralize the territory the AI thought safe. This could only happen if the ruleset actually being used was one that required playing out those positions and didn’t allow agreement on which stones were already dead. So this looks more like, “whoops, we changed the rules on you, forgot to tell you, too bad,” rather than an actual failure of the AI.
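To make that concrete, here is a toy sketch in Python (purely illustrative; the board, the scoring function, and the two “rulesets” are my own simplifications, not any real engine’s rules). It shows how the same final position can score very differently depending on whether a scattered invading stone is removed as dead by agreement or left on the board to neutralize the surrounding territory:

    from collections import deque

    BOARD = [
        "bbbbb",
        "b...b",
        "b.w.b",   # a single white stone scattered inside Black's area
        "b...b",
        "bbbbb",
    ]

    def territory(board):
        """Count empty regions that touch stones of only one colour (very simplified)."""
        rows, cols = len(board), len(board[0])
        seen = set()
        score = {"b": 0, "w": 0}
        for r in range(rows):
            for c in range(cols):
                if board[r][c] != "." or (r, c) in seen:
                    continue
                # flood-fill this empty region and note which colours it touches
                region, borders, queue = [], set(), deque([(r, c)])
                seen.add((r, c))
                while queue:
                    y, x = queue.popleft()
                    region.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols:
                            cell = board[ny][nx]
                            if cell == "." and (ny, nx) not in seen:
                                seen.add((ny, nx))
                                queue.append((ny, nx))
                            elif cell in "bw":
                                borders.add(cell)
                if len(borders) == 1:              # region touches only one colour
                    score[borders.pop()] += len(region)
        return score

    # Ruleset A: players agree the lone invader is dead; remove it before counting.
    agreed = [row.replace("w", ".") for row in BOARD]
    print("invader removed by agreement:", territory(agreed))  # {'b': 9, 'w': 0}

    # Ruleset B: nothing is removed unless actually captured; the empty region now
    # borders both colours and counts for nobody.
    print("invader left on the board:   ", territory(BOARD))   # {'b': 0, 'w': 0}

Under interpretation A the invasion is pointless; under interpretation B it wipes out the territory the AI thought was safe, which is exactly the mismatch described above.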

It’s unlikely that rulesets figured into the training at all. In most machine learning applications these days, the computer learns from examples alone (i.e., the record of moves in past games and which side won). The rest of the rules are all intuited by the machine.
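For a rough idea of what “learning from examples alone” can look like, here is a toy behaviour-cloning sketch (hypothetical data and function names; real systems like AlphaGo or KataGo use self-play and neural networks rather than anything this simple). The “policy” is just a tally of which moves the eventual winner played from each position seen in the records:

    from collections import defaultdict, Counter

    # Hypothetical toy records: each is (list of (position, player, move), winner).
    game_records = [
        ([("empty", "black", "D4"), ("D4", "white", "Q16")], "black"),
        ([("empty", "black", "D4"), ("D4", "white", "Q17")], "black"),
        ([("empty", "black", "Q16"), ("Q16", "white", "D4")], "white"),
    ]

    def train(records):
        """Tally, per position, the moves chosen by the side that went on to win."""
        policy = defaultdict(Counter)
        for moves, winner in records:
            for position, player, move in moves:
                if player == winner:
                    policy[position][move] += 1
        return policy

    def suggest(policy, position):
        """Imitate the most common winning move; give up on positions never seen."""
        if position not in policy:
            return None
        return policy[position].most_common(1)[0][0]

    policy = train(game_records)
    print(suggest(policy, "empty"))          # 'D4' -- seen most often in winning play
    print(suggest(policy, "weird position")) # None -- a position type absent from the
                                             # records, the kind of gap an adversary
                                             # can steer the game into

Nowhere in that sketch do the rules appear explicitly; whatever conventions governed how the recorded games ended are simply baked into the examples.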

Would those games then have included ones that ended with dead stones on the board that were later removed? The ruleset doesn’t have to be explicit; it can be implicit in the examples used.

If that’s not a typical ending position for experienced players, then probably not, in which case the AI would not have learned how to effectively deal with such a game.

My understanding is that that is indeed common among experienced players. They understand quite well which stones are alive and which dead, and don’t want to waste their time playing out already decided positions.

The board game Diplomacy.

There is someone (or a group) that has made an excellent AI for the game. Gunboat only (so no actual ‘diplomacy’), but the AI usually kicks people’s arses.

I remember watching a vid about it, and the AI likes to build 2 fleets in the first year for Germany and overweights contesting Scandinavia…which is/was considered absolutely nuts…but it has worked for the AI well enough that some people have started mimicking the strategy. The other one is its proclivity to stab (betray) more frequently.

I think that is more an aspect of Gunboat Diplomacy (no actual communication between players), but that is a common mode online for quick games. Personally, and I am not an expert, I am not so sure such strategies would work in a full game where players can communicate…but the AI hasn’t been extended to do that yet, so who knows what it will come up with.

There is a bot that’s very good at Diplomacy with communication. Cicero.

There are different methods of scoring and rule variations, which can, albeit rarely, lead to different results as to who won the game! Therefore even humans, and of course computers, need to agree on which rules apply before the match. It is not a big discrepancy, though, and, whatever rules are used, no experienced player is going to get very far without being able to figure out the final score.