Many people have guessed that a perfect strategy for chess exists, usually one favoring White.

But has anyone predicted what it is, or boasted of setting computers to look for it?

Most endgame and a great many intermediate positions have known perfect strategies, so the problem is to force a new game to reach one of those.

Perhaps I’ll be accused of nitpicking or stating the obvious but …

It’s not a *guess* but a *certainty* that a perfect strategy exists. We just can’t know if perfection leads to Draw or White Win (or conceivably even Black Win).

This is not quite as trivially true as it might seem. Go [Japanese rules] could conceivably have “no result” with perfect play, due to positions like the triple ko.

There’s no need to guess; as chess is a completely deterministic game it’s already been trivially proven that a perfect strategy exists.

No, no one who actually understands the problem has done this, because the search space is simply too large; there are more possible games of chess than there are elementary particles in the observable universe. This is why the outcome of perfect play is known only for endgames, where the number of possible moves is manageable.
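The existence proof mentioned above is just backward induction, and it can be demonstrated end-to-end on a game whose tree is small enough to enumerate. A minimal sketch in Python (the toy subtraction game here is my own stand-in, not chess):

```python
from functools import lru_cache

# A toy deterministic game: players alternately remove 1-3 stones
# from a pile; whoever takes the last stone wins. Backward induction
# labels every position as a win or a loss for the player to move --
# the same argument that proves chess has a definite game-theoretic
# value, just on a search space small enough to enumerate.

@lru_cache(maxsize=None)
def player_to_move_wins(stones):
    if stones == 0:
        return False  # no move left: the previous player took the last stone
    # A position is winning iff some move leads to a position that is
    # losing for the opponent.
    return any(not player_to_move_wins(stones - take)
               for take in (1, 2, 3) if take <= stones)

# Perfect play: the player to move loses exactly when the pile size
# is a multiple of 4.
print([n for n in range(13) if not player_to_move_wins(n)])  # [0, 4, 8, 12]
```

The same recursion applied to chess would terminate in principle; the obstacle is purely the size of the table being memoized.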

Related question: has anyone ever tried to turn the game of chess from an algorithmic problem into a mathematical problem? In other words, is it possible to encode the rules into an equation that simply needs to be solved or approximated using known mathematical tools? When thinking of “solving chess” my mind immediately leaps to the analogy with physics. You have pieces (particles) and some set of rules (physical laws) written in mathematical form, and you can ask and answer questions like “what is the shortest path (game) between position A (opening position) and B (mate)”…

I don’t think this is a meaningful distinction. Every algorithmic solution to chess is also a mathematical solution, even if it may not obviously fit your preconceived notions of a formula into which you plug numbers to arrive at a solution. (Some non-procedural computer languages, like Lisp, make the function-computing aspect of the problem more apparent, and chess programs have been written in Lisp.)

I can’t be sure how you are defining “meaningful” here, but there is certainly a distinction in a practical sense. The variety of dual mathematical descriptions in physics is one clear example. Some descriptions are more algorithmic (e.g. Newton’s laws: a set of rules is followed at each time step) while others are not (e.g. the principle of stationary action), and allow the subject a more general mathematical analysis. Both of the above examples are equivalent in their description of “the game”; however, in the latter one solves for the trajectory by solving a simple equation in one swoop, while the former is a more brute-force approach that in general requires “following the rules” at each time step in order to discover what happens.

The question is whether the rules of the game can be put in an algebraic form that, yes, is “equivalent” in an informational sense, but allows one to apply more standard mathematical techniques rather than only a computational brute-force approach. There are many examples in mathematics of algorithms that are equivalent to the solutions of algebraic equations (calculus vs. the Euler method, etc.). Obviously in some cases solving the algebraic equation is more practical than the algorithmic solution.
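To make the calculus-vs-Euler-method comparison concrete, here is a minimal sketch of both descriptions of the same “game” (the ODE dx/dt = -x is my own illustrative choice):

```python
import math

# The ODE dx/dt = -x with x(0) = 1 has the closed-form solution
# x(t) = exp(-t). The Euler method instead "follows the rules" one
# time step at a time, and converges to the same answer as the step
# size shrinks.

def euler(t_end, steps):
    dt = t_end / steps
    x = 1.0
    for _ in range(steps):
        x += -x * dt  # apply the local rule: dx = -x dt
    return x

exact = math.exp(-1.0)            # closed form, in one "swoop"
approx = euler(1.0, 100_000)      # brute-force stepping
print(abs(approx - exact) < 1e-4) # True: equivalent in the limit
```

The two descriptions carry the same information, but only one of them yields to standard symbolic techniques; the other must be run.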

For chess proper, I do not know (I guess not, given the complications in simpler cases), but, for chess-like problems, yes. Specifically, the knight’s tour (or questions about the knight’s tour), and, I imagine, other similar sorts of problems.

As far as I can tell, what you’re trying to get at is the distinction between a closed-form or analytic solution and something more computational. That’s not something that I or my colleagues spend too much time worrying about, for the simple reason that a short algorithm is infinitely easier to understand and analyze than a three-foot formula that we can’t wrap our heads around.

With regards to chess, I have seen some discussion about reducing the problem of finding a solution to the game to the problem of summing polynomials over large finite fields. I don’t know the details, but I’d be surprised if there’s an intelligible formula for the latter case.

A triple ko is similar to a stalemate. If you are playing White in Go, it may be that forcing a triple ko is your best result. It seems obvious that both games are determinate and a perfect strategy exists. Finding it is another matter entirely.

If I were codifying the rules of go, I would simply ban repeated positions and thus make a triple ko equivalent to a single ko. When I was taught the ko rule, it was presented to me as outlawing a repeated position. It turns out that what is disallowed is an immediately repeated position.
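Banning all repeated positions is essentially the “positional superko” rule used by several modern rule sets, and it is simple to enforce mechanically. A minimal sketch, with the whole-board position left abstract (any hashable value) and the class name my own invention:

```python
class SuperkoTracker:
    """Positional superko: a move is illegal if the resulting
    whole-board position has occurred earlier in the game."""

    def __init__(self, initial_position):
        # Positions must be hashable, e.g. a tuple of row strings.
        self.seen = {initial_position}

    def try_play(self, new_position):
        """Record and allow the position if it is new; reject the
        move (return False) if it repeats an earlier position."""
        if new_position in self.seen:
            return False
        self.seen.add(new_position)
        return True

# A simple ko cycle A -> B -> A: the third position is forbidden,
# and a triple ko cycle is cut off the same way.
tracker = SuperkoTracker("A")
print(tracker.try_play("B"))  # True
print(tracker.try_play("A"))  # False: would repeat the opening position
```

Under this rule a triple ko cannot cycle forever, which is exactly what makes it equivalent in spirit to a single ko.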

I think I understand what you are asking, but the distinction you are trying to make here is not the same one that exists in your example: symbolic manipulation via calculus and numerical approximation via the Euler method are not equivalent. The latter is an example of a “numerical method”, an algorithm which uses calculation to arrive at an approximate, but not exact, solution. There already exist chess algorithms which, given enough time, will always arrive at the exact and correct solution. They fall very much into the “symbolic manipulation” category rather than the “numerical method” category. Brute force approaches exist in both methods.

The idea of expressing games in terms of equations which don’t necessarily rely on considering every possible game position to solve is the domain of combinatorial game theory (CGT). Unfortunately, for various reasons (see the introduction to the first volume of Conway et al.'s *Winning Ways for Your Mathematical Plays*), chess does not lend itself to CGT analysis.

Perhaps we are using different definitions of “equivalent”, but by mine (gives the same results), many methods of numerical computation are equivalent to their symbolic-manipulation counterparts. In almost all cases (including the Euler method <–> calculus correspondence), this equivalency holds for some class of problems (non-chaotic, only one minimum, might be requirements in some cases), and assuming infinite computation time (equivalent really to something as mundane as dropping a “limit” in front when stating the equivalency). Though the assumption of infinite computation time (allowing arbitrarily small step sizes, and an infinite number of steps) may be impractical, nonetheless the equivalency exists, and the analogy is particularly apropos to the case of non-continuous “games” like chess, where such an assumption is not even necessary for a similar equivalency to exist.

Interesting, this is what I was interested in (although it’s not clear from the Wikipedia page that CGT is really concerned only with what you say; they give examples of CGT proofs that are, in spirit at least, just brute-force runs through every possible game position, and things like alpha-beta pruning are not in the spirit of my question).

Hope this isn’t a hijack, but…

I was just thinking about this over the last couple of days. I wondered if anyone has tried to describe the necessary algorithms for solving chess using a quantum computer.

Obviously it’s not a practical proposition to implement such an algorithm (or even make the computer) at this time. But I don’t know if it’s even possible in theory for a quantum computer to compute such problems.

Disappointingly, when I googled <quantum computer chess>, I got page after page of people “answering” this question by basically not appreciating the distinction between a quantum and classical computer (they all seemed to assume “quantum computer” just means “fast computer”).

So can quantum computers theoretically be tasked with complex minimax problems?

Lisp is a purely procedural programming language. You’re probably thinking of Prolog.

“Purely” procedural? That’s the first time I’ve heard that claim. AFAIK while Lisp permits procedural programming, it (or more specifically certain of its dialects, such as Scheme) can be and often is used for purely functional programming.

I wasn’t thinking of Prolog. It’s another non-procedural language, but I don’t think its logical paradigm is particularly amenable for the sort of formula-based approach **iamnotbatman** was proposing.

As chess is clearly defined, it’s a popular subject for designers and programmers to have a go at.

(Search for chess playing programs such as ‘Deep Blue’ and ‘Hydra’, for example.)

As others have said, there is undoubtedly a perfect strategy.

Based on my experience, I confidently predict that chess is a draw with perfect play (and that Black has to be more careful than White i.e. White may be able to play a sub-optimal move or two and still draw.)

However I don’t know what you mean by ‘Most endgame and a great many intermediate positions have known perfect strategies’.

The only ‘perfection’ in chess at present is for positions with a total of 6 pieces or fewer*.

These have been completely analysed using ‘tablebases’. You can see a free tablebase here:

http://www.gmchess.com/endgames/

If you enter, for example, the position from one of my games:

White: King g5 pawn h4

Black (to move): King c3 Knight e3

you find that only … Nc4! draws.

*This is **not** most endgames and there are no intermediate positions (unless you count examples of say ‘White to play and mate in 3’, which are very few and far between.)

Not really. It doesn’t have the guarantees about side-effecting functions that actual purely functional programming languages provide, to begin with, and that does bad things to referential transparency.

You can write code in a functional style in Lisp if you don’t expect the compiler to help you enforce constraints. You can write that way in C, too, and in a number of the same ways. That’s my point: Lisp is more convenient than C, but it’s still fundamentally about shuffling data held in variables to and from operators that mutate it, as opposed to Haskell, which is about using constants to pipe together functions that (conceptually, at least) never mutate anything and simply map input values to output values. Heck, Haskell is so pure you need to introduce extra ideas over and above the core language to introduce the idea of *doing things in a specific sequence*.

I think it’s pretty clear what he meant. If you define an endgame as a position with however many pieces we have tablebases for, then endgames are solved. I concede that that’s a very restrictive definition of “endgame,” so I would have said “many” rather than “most” endgames are solved.

And there are indeed a great many intermediate positions with forced mates (Fritz on my dinky PC can find forced mates of a dozen moves or more in intermediate positions), so those are solved, too. They are a very small percentage of the total number of intermediate positions, but still a great many by most standards. For example, if I had a penny for each of them, I’m pretty sure I’d be the richest man in the world.

It may seem like a silly, almost “metaphysical,” distinction but an unabandoned triple ko is not a draw – it’s more like a rained-out game in baseball. Note that with non-integral komi, draws are impossible in Go.

You’re not alone. Chinese rules and many other rule sets ban repeated positions.

Nevertheless, Japanese rules have traditional and sentimental significance and, AFAIK, are the rules still used in Japan.

See: Rules of Go - Wikipedia

(ETA: BTW there are interesting intricate repeating positions other than triple-ko.)

On a lighter note, the very first significant computer program I wrote as a college freshman would have solved Chess if run to completion. Instead, I needed to get the job time limit extended just to solve mates-in-two.

I think what **iamnotbatman** was referring to by a “mathematical solution” was something like what exists for Nim, where there’s a very simple algorithm which proceeds by a sort of counting, and which if implemented at every move has been proven to lead to perfect play, without needing to look ahead all the way to the end of the game.
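For reference, Nim’s perfect strategy really is just a short counting rule: the player to move wins exactly when the XOR (“nim-sum”) of the pile sizes is nonzero, and a winning move is any one that makes the nim-sum zero. A minimal sketch in Python:

```python
from functools import reduce
from operator import xor

# Nim: players alternately remove objects from one pile; the player
# who takes the last object wins. Perfect play needs no lookahead,
# only the nim-sum of the pile sizes.

def nim_sum(piles):
    return reduce(xor, piles, 0)

def winning_move(piles):
    """Return (pile_index, new_size) for a perfect move, or None if
    every move loses against perfect play."""
    s = nim_sum(piles)
    if s == 0:
        return None  # losing position: no move can restore nim-sum 0
    for i, pile in enumerate(piles):
        target = pile ^ s
        if target < pile:  # must remove at least one object
            return (i, target)

print(winning_move([3, 4, 5]))  # → (0, 1): take 2 from the pile of 3
```

No search is needed: from any position with nonzero nim-sum, a move that zeroes it is always available, which is exactly the kind of “solution by counting” being contrasted with looking ahead to the end of the game.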

And I think it quite likely that a custom-built quantum computer could brute-force a perfect game of chess: on each turn, superimpose all possible moves, and find some way to collapse down to only those move-trees that lead to checkmate. I don’t know enough about the practicalities of quantum computing to even begin to devise algorithms, though; it might be simpler to start with something like connect-four as a proof of concept.

This is very unlikely to really work. (Assume that the main computational complexity classes are distinct.) The class of problems efficiently solvable with a quantum computer (BQP) is a small subset of PSPACE. It probably doesn’t even include NP-Complete problems. Generalized chess (with no move limit) is EXPTIME-Complete. (Even if you stuck on a poly-time move limit, it would be only PSPACE-Complete.) So in order to really solve chess well with a quantum computer, you’d have to first prove that EXPTIME and PSPACE are the same, and then you still have a *lot* more work to go.

Quantum computing is not a magic panacea for everything that takes exponential time. It is really quite limiting.