Some quick conclusions: while the AI is king at Connect Four, a fully visible math game with ~50, it still can't come close to winning at Stratego, which has ~150.
Battleship is very much a game of chance. The AI will always be beatable at that one.
OK, you want to say that Stratego is a more difficult game to write an AI for than Connect 4? I think we can all agree on that. But “harder than a game that’s pretty easy” does not mean “impossible”.
JRagon, Starcraft AIs don’t beat humans in the same way that chess AIs do. They beat humans in a completely different way. And there are Starcraft AIs that can play complete games, from hatchery-and-four-drones all the way to destroying all enemy structures, and still consistently beat humans. Last I heard the top human players were still able to win sometimes (through superior strategic skills), but then again the last I heard was about six years ago.
Sure, in any individual game the computer is beatable, but over the long run I would not be surprised if the computer wins significantly above chance against human opponents. I can't find any solid stats on this, though. That said, even in games of pure chance like roshambo/rock-paper-scissors, the computer AI is good enough to win 60% of the time, significantly higher than mere chance would dictate, especially over a million trials.
^Actually, I should say games that theoretically should be games of pure chance. Humans clearly don’t play purely randomly, hence the AI’s better performance.
Neither Paper-Rock-Scissors nor Battleship is a random game. Both are instead psychological games, where you win by figuring out what your opponent was thinking. One would think that that sort of game would be especially difficult for an AI, except that it turns out that humans are surprisingly easy to read.
Of course, there’s an element of this in Stratego, too.
Depends on what’s meant by “random game”. The Nash Equilibrium for RPS is indeed a mixed strategy where each of RPS comes up 1/3 of the time.
Of course, you can do better against an opponent known to play suboptimally in a particular way. But that’s true of lots of games; generally when we talk of game strategy we assume players at least trying to play optimally.
But there’s a difference, at least for humans, between trying to play optimally and succeeding. Many humans know that random play is unbeatable in RPS. Accordingly, many humans attempt to play randomly. But humans suck at random, and so they can be beaten, either by more skilled humans or by computers.
Right, but that doesn’t really change the fundamentals of the game. Chess is a perfect information game: both players know the entire state of the game at all times. And yet beginning players don’t “see” the whole board; they focus too closely on local battles and forget about less obvious strategic positioning. Really beginning players may totally miss some attacks, say from a distant bishop, and lose pieces to really obvious blunders.
It’s possible for a more advanced player to exploit this lack of awareness for an advantage. Doesn’t mean chess isn’t a perfect information game, though.
I don’t recall this. I recall some teams (USC was the big one I believe) making a pretty good full game AI using some genetic algorithm strategy back in… 2012 maybe? But nothing as stellar as you’re talking about.
Given that I haven’t been a chess player since my not-short-enough stint in the Boy Scouts, which was a damn long time ago, my reactions to computers being completely unbeatable at chess: “Meh,” “Good, now we can finally stop thinking about it,” and “Wow, computers sure have come a long way.”
The thing about solved games is that in nearly all cases, hardly anybody ever bothers to explain HOW they're solved, WHY it's always possible to win/draw, or WHAT the right move is for each position. Take Connect Four. Every discussion is full of the usual "I got this game figured out ages ago, it's so easy, I'm totally awesome blah blah blah brag brag brag", and yet the only concrete fact anyone ever brings up is that the first player can force a win by opening in the middle column. (Conversely, an opening in either column next to the middle leads to a draw with perfect play, and any other opening lets the second player force a win.) I managed to find a game solver, but all I've been able to discern is that you have to set up the lines and not give your opponent any unbeatable openings, hardly stuff I couldn't have figured out on my own. The one time someone posted a proof, it was so crammed with jargon and convoluted explanations that it may as well have been written in Chinese.
And that's the problem with judging the merit of a game by its "solvedness": even if every angle has been covered, there are a freaking lot of angles, and no human being is going to have the brainpower to memorize all of them. I remember the time Randall Munroe took on tic-tac-toe, a game about as simplistic as it's possible to get, and look at the sprawling mess it became (and it had errors!). Now imagine mapping out something with almost five times as many spaces. If computers have no trouble with it, that's merely a testament to how powerful modern computers are, and I don't feel any more inferior about it than about not being able to calculate pi to 100,000 digits.
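For what it's worth, a map like Munroe's is exactly what a brute-force game-tree search produces. Here's a minimal minimax sketch (my own illustration, not anything from his chart) that exhaustively solves tic-tac-toe and confirms the well-known result that perfect play from both sides is a draw:

```python
from functools import lru_cache

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board, player):
    """Game value for X with perfect play: +1 X wins, -1 O wins, 0 draw."""
    w = winner(board)
    if w:
        return 1 if w == "X" else -1
    if "." not in board:
        return 0  # board full, no winner: draw
    results = []
    for i, cell in enumerate(board):
        if cell == ".":
            nxt = board[:i] + player + board[i + 1:]
            results.append(value(nxt, "O" if player == "X" else "X"))
    return max(results) if player == "X" else min(results)

print(value("." * 9, "X"))  # 0: tic-tac-toe is a draw with perfect play
```

Connect Four yields to the same idea, just with a state space far too big for this naive approach; the real solvers add alpha-beta pruning, move ordering, and transposition tables on top.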
On top of that, I’ve never in my entire life beaten a computer on any setting above “brain-damaged”, so I can’t lose what I don’t flippin’ have, y’know?
I would wager I'd do better than 60% vs. the top AI in rock-paper-scissors out of 100 tries, needing only to win 59, and would not be surprised if I ended up winning.
And speaking of AI, let’s talk about gambling for a moment. Baseball and Football are full of statistics.
Are there any studies on how AI has competed here? If it can win 61% of the time, I’d imagine the developers would eventually get rich!
I don’t know if it’s already been pointed out or not, so I’ll just dump this in:
there really aren't that many people, or games that they invent, that center on or depend on the idea that one achieves godlike status by winning them over everyone else.
The point of playing MOST games, is personal development and/or fun.
I haven't found that most people who play chess again and again do it because they hope to "solve" the game, or even because they hope to become the best ever. Most of them play it repeatedly for the exercise of their strategic and reasoning skills.
In a game where both players choose randomly, neither side will win more often than the other. RPS AIs learn from patterns to develop strategies. So what new strategy would you use against an AI to ensure that you have a 60% win rate over a hundred rounds?
Here is an AI that is winning 60% of the time across a million rounds. Feel free to let us know what you do to beat it. You might also want to write the programmers to let them know how you’ve beaten their code.
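The linked bot's source isn't shown here, but pattern-exploiting RPS programs typically work the same general way: count what the human tends to throw after each short history of throws, then play the counter to the prediction. A minimal sketch of that idea (my own illustration, not the linked program's actual code):

```python
import random
from collections import defaultdict

MOVES = "RPS"
BEATS = {"R": "P", "P": "S", "S": "R"}  # the move that beats each move

class MarkovRPSBot:
    """Predicts the opponent's next throw from their last two throws."""

    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))
        self.history = []  # opponent's throws so far

    def play(self):
        key = tuple(self.history[-2:])
        stats = self.counts[key]
        if stats:
            predicted = max(stats, key=stats.get)
            return BEATS[predicted]   # counter the most likely throw
        return random.choice(MOVES)   # no data for this context yet

    def observe(self, opponent_move):
        key = tuple(self.history[-2:])
        self.counts[key][opponent_move] += 1
        self.history.append(opponent_move)
```

Against a truly random opponent this wins exactly a third of the time, no better than chance; against anyone with even a faint pattern in their throws, the counts converge and the win rate climbs, which is presumably where the 60% figure comes from.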
As a highly experienced chess player, I was invited decades ago to try out the new computer database that cracked the forced win in the unusual (and previously thought ‘probably drawn’) King and two Bishops v King and Knight endgame.
On defence (i.e. playing the Knight), the database not only played the moves but told you how many (perfect) moves to a win.
I can assure you that this endgame is hard to play, because there are few signs that you are making progress (and there are quite a few waiting moves along the way).
So I got a starting position with the ‘moves to win’ counter at 48. I played several moves, successfully lowering the counter to 40 (I was trying to push the defending pieces back to the edge of the board.)
Then I couldn’t see a way to make progress. My next move put the counter up to 50! :smack:
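For anyone curious how a database can know the exact "moves to win": such endgame tablebases are built by retrograde analysis, labelling the terminal positions and working backwards. Here's a toy sketch of the same bookkeeping on a subtraction game (take 1 to 3 objects from a pile; whoever takes the last one wins), chosen purely for illustration since the chess version does the same thing over a vastly larger state space:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def dtw(n):
    """For a pile of n with the named player to move, return
    ('win' or 'loss', plies until the game ends), assuming the
    winner hurries and the loser stalls as long as possible."""
    if n == 0:
        return ("loss", 0)  # previous player took the last object
    children = [dtw(n - k) for k in (1, 2, 3) if k <= n]
    losing_replies = [p for r, p in children if r == "loss"]
    if losing_replies:
        # winning position: head for the quickest forced win
        return ("win", 1 + min(losing_replies))
    # lost position: drag the game out as long as possible
    return ("loss", 1 + max(p for _, p in children))
```

In the toy game, each perfect attacking move brings the end one ply closer, while a blunder makes the counter jump upward, which is exactly what my 48-to-50 :smack: moment was measuring.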
OK, so the specific AI I was thinking of was the Berkeley Overmind, from 2010. It did play complete games, but I see on re-reading that I was misremembering its capabilities: They say that it was able to beat professional human players, but they stop short of saying that it could consistently beat them, which is another matter entirely.
Then again, though, that was 2010. I’m sure there’s been a lot more work on Starcraft AIs since then.
In response to DKW’s point, there are multiple levels of solvedness. On one end, you have chess. It’s known that, in chess, one of three things is true: Either white can force a win, black can force a win, or either side can force a draw. But nobody knows which of these is true, much less how to do it (it’s widely suspected that white can force a win, but not known for certain).
Next up is something like Hex, on a sufficiently-large board (like the 11x11 it's usually played on). Here, it's known for certain that, with the right strategy, player 1 can definitely force a win, and the proof is actually fairly simple (the strategy-stealing argument: Hex can't end in a draw, and an extra stone is never a liability, so if player 2 had a winning strategy, player 1 could steal it). But still nobody knows what that strategy is.
Hex on a smaller board (up to 9x9) is in the next category. Here, computers have exhaustively played through all of the possibilities to find what that winning strategy is, but there’s no discernible pattern to that strategy. The only known way to store the strategy is by storing a huge database of possible moves and their counters.
Finally, you have something like Nim, where one player can always force victory (usually player 1, but on some initial setups player 2 wins), and there’s a simple known mathematical rule that can guide you on every turn of the game on what the correct move is.
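That simple rule for Nim (normal play, where taking the last object wins) is Bouton's nim-sum: XOR the pile sizes together, and the player to move is lost exactly when the XOR is zero; otherwise there is always a move that makes it zero. A minimal sketch of the rule, not anyone's code from the thread:

```python
from functools import reduce
from operator import xor

def nim_move(piles):
    """Return (pile_index, new_size) for a winning move, or None if
    the position is already lost (nim-sum zero) for the player to move."""
    nim_sum = reduce(xor, piles, 0)
    if nim_sum == 0:
        return None  # every available move hands the win to the opponent
    for i, pile in enumerate(piles):
        target = pile ^ nim_sum
        if target < pile:  # legal: we can only remove objects
            return (i, target)
    # unreachable: some pile always has the nim-sum's high bit set

print(nim_move([3, 4, 5]))  # (0, 1): after the move, 1 ^ 4 ^ 5 == 0
```

That one XOR test is the whole strategy, which is what puts Nim in Chronos's last category: fully solved with a rule a human can actually apply at the table, no giant move database required.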
I like the example of 2-move chess, a variant where each player makes two normal chess moves on each turn. It's easy to prove that white can force at least a draw: he can play the do-nothing move Nb1-c3 followed by Nc3-b1 to lose the move, so he has access to any strategy that black has. This proof doesn't help with actually finding moves, though.
I think the general view is that chess is probably a draw.