Has anyone predicted that computers will prove chess is determinate?

OK, as an experienced chess player I’m challenging the use of the phrase ‘known perfect strategies’ - which is not the same as ‘solved’ (by computer).

Yes (for example), all endgames where White has King, Queen and a couple of other pieces v the solitary Black King are ‘solved’. (They’re all trivial wins unless Black is stalemated.)
And there are ‘many’ of them.
But that has nothing to do with practical chess i.e. useful strategy.

Similarly there are many positions where one side has a forced mate (although as you say, a tiny proportion of all positions.)
I expect that side is easily winning without the mate - so my point is ‘where’s the known strategy’?

Consider the endings:

  • King, Bishop and Knight v King
  • King and two Bishops v King and Knight.

Assume no piece is in danger of immediate capture and that it’s not mate in a couple of moves.

The first ending and the strategy to win it have been known since at least 1750 (and I’ve taught it successfully to 10-year-olds).

The second has been fully documented by tablebases.
That does not mean that any human player knows the ‘perfect strategy’.
(As it happens, I was one of the first to try this ending a couple of decades ago, starting with a position with mate in 45. I managed to follow the win down to about 34 moves to mate, then blundered and it shot back up to 40! This went on for a while, until I gave up.)
Here’s an example:

White: King b2 Bishops g7,h7
Black: King b7 Knight b6

With White to move, this is a forced win in 45 moves.

  1. What is White’s ‘known perfect strategy’?
  2. Kc3 wins in 45 moves; Kb3 wins in 64. Explain why.
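For concreteness, here’s a minimal sketch of how you could check those numbers yourself with a tablebase probe. It assumes the python-chess library and locally downloaded 5-piece Gaviota (distance-to-mate) tablebase files in a ./gaviota directory; the library choice, the path, and the ply-vs-move units in the output are my assumptions, not part of the original claim.

```python
# Sketch: probing a distance-to-mate (DTM) tablebase for the position above.
# Assumes python-chess plus Gaviota tablebase files downloaded into ./gaviota.
import chess
import chess.gaviota

# White: Kb2, Bg7, Bh7; Black: Kb7, Nb6; White to move.
board = chess.Board("8/1k4BB/1n6/8/8/8/1K6/8 w - - 0 1")

with chess.gaviota.open_tablebase("./gaviota") as tablebase:
    # Signed distance to mate for the side to move (positive = side to move wins).
    print("DTM from the diagram:", tablebase.probe_dtm(board))

    # Compare the two king moves asked about above.
    for uci in ("b2c3", "b2b3"):
        move = chess.Move.from_uci(uci)
        board.push(move)
        print(uci, "-> DTM with Black to move:", tablebase.probe_dtm(board))
        board.pop()
```

(If you only have Syzygy files handy, chess.syzygy reports win/draw/loss and distance-to-zeroing rather than distance-to-mate, so its numbers won’t line up move-for-move with the 45 quoted above.)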

And of course, the real trick isn’t winning with KBN v. K or whatever; it’s in getting to a KBN v. K position in the first place.

Yes, it falls under combinatorial game theory. I would note that although some people might treat these as purely algorithmic problems, most successful approaches treat them “mathematically” first and then construct an algorithm (or algorithms) from that.

But surely the term CGT is too broad; as pointed out earlier, the wiki page lists pure brute-force exhaustive-search algorithmic solutions as examples, which is exactly what I was hoping to avoid when I asked my original question.

In contrast (for example, a totally off-the-top-of-my-head-wild-speculation-type of example), you could try something like:
Associate each piece with a position-dependent operator in Hilbert space with a discrete set of eigenvalues representing each possible legal move for a given position. Construct a Hamiltonian Hw that is globally minimized by checkmate of black in the shortest number of moves, and another Hb that is globally minimized by checkmate of white in the shortest number of moves. Each of Hw and Hb time-evolves the position into a superposition of all possible legal games. The problem is then reduced to a mathematical one: minimizing a combination f(Hw, Hb) of the operators Hw and Hb that is minimized by each side winning as quickly as possible (or, failing that, drawing, and failing that, staying alive as long as possible).

Now that example may not be realistic (I don’t really know, to be honest), but it might give you an idea of how this sort of problem could conceivably be converted into a type of quantum-mechanical or minimization problem that is found frequently in physics, one which is theoretically solvable analytically, although in real life most Hamiltonians are so complicated that we use computers to arrive at approximate solutions. However, in such a case the available approximations may be considerably more tractable than the brute-force algorithmic approach.
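To make that hand-waving a little more concrete (still purely my own speculative gloss, not standard notation), the minimization of f(Hw, Hb) could be written as a ground-state problem over the space of legal game histories:

$$
|\psi^{\star}\rangle \;=\; \arg\min_{|\psi\rangle \,\in\, \mathcal{H}_{\mathrm{legal}}} \;\langle \psi \,|\, f(H_w, H_b) \,|\, \psi \rangle
$$

so that “perfect play” would correspond to the ground state of the combined operator, which is exactly the sort of eigenvalue/minimization problem physicists attack with variational approximations.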

Do you mean treat chess as a dynamic program?

This level of abstraction has been used in the knight’s tour case.
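For what it’s worth, here’s what that abstraction looks like in the knight’s-tour case: a toy sketch in Python using plain backtracking plus Warnsdorff’s fewest-onward-moves heuristic (the board size, names, and heuristic are my own choices, not anything from this thread).

```python
# Knight's tour by backtracking, with Warnsdorff's heuristic to keep it fast.
BOARD = 8
MOVES = [(1, 2), (2, 1), (2, -1), (1, -2), (-1, -2), (-2, -1), (-2, 1), (-1, 2)]

def neighbours(square, visited):
    """Unvisited squares a knight on `square` can jump to."""
    r, c = square
    return [(r + dr, c + dc) for dr, dc in MOVES
            if 0 <= r + dr < BOARD and 0 <= c + dc < BOARD
            and (r + dr, c + dc) not in visited]

def knights_tour(square=(0, 0), path=None):
    """Return one complete tour as a list of squares, or None."""
    path = path or [square]
    if len(path) == BOARD * BOARD:
        return path                                  # visited every square
    # Warnsdorff: try the squares with the fewest onward options first.
    for nxt in sorted(neighbours(square, set(path)),
                      key=lambda s: len(neighbours(s, set(path)))):
        result = knights_tour(nxt, path + [nxt])
        if result:
            return result
    return None                                      # dead end: backtrack

print(knights_tour())  # prints one 64-square tour almost instantly
```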

Re: Converting chess into a “Mathematical problem”.

At worst, you can convert a chess algorithm into a Diophantine equation (the MRDP theorem guarantees that any computation can be encoded this way) and then “solve” the equation.

This stuff about “algorithmic” vs. “mathematical” ways of solving things is purely illusory. We’ve known since Gödel that there is no actual distinction.

This is interesting, although it’s not clear to me how it would be possible to find the corresponding Diophantine polynomial without already having solved the problem.

The advantages and disadvantages of different approaches to the same problem are hardly “illusory,” whatever words you use to describe those two approaches, and regardless of any computational equivalence, or whatever it is you are referring to by referencing Gödel (maybe you mean Turing?).

The difficulty wouldn’t be that the appropriate Diophantine equation would be hard to find; it’s that the equation is sufficiently complicated that the solutions would be hard to find. In principle, you could find the equation by just taking a program like the one septimus wrote in college and unraveling a few layers of computer abstraction around it, since after all, all that a computer running that program is doing is a bunch of arithmetic. But that wouldn’t help you to write a more practical chess program, since when you re-abstracted it, you’d just end up with the original program again (possibly in a more obfuscated form).

It sounds like you are agreeing with me – that in order to find the appropriate Diophantine equation you would have to solve the problem, ie run an exhaustive search like Septimus’ program you mention. I think your first sentence contradicts your second (though perhaps I’m misunderstanding you).

No, I think that in order to find the equation, all you’d need is to have an exhaustive program which would eventually find the perfect play. That’s easy, since septimus (among many other people) has already written such a program. Solving that equation once you’ve found it would be equivalent to finding the perfect play, and would require the same ludicrous amount of time as running the program.
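Purely to make “such a program” concrete, here’s roughly the shape it takes, sketched in Python with python-chess for move generation. This is my own illustration, not septimus’s actual program, and running it from the starting position is exactly the ludicrous-amount-of-time computation being discussed.

```python
# Conceptual sketch of a brute-force "perfect play" search. It is logically
# complete but astronomically infeasible on real chess positions, which is
# the whole point of the discussion above.
import sys
import chess

sys.setrecursionlimit(20000)  # legal games can run to thousands of plies

def game_value(board: chess.Board) -> int:
    """Return +1, 0, or -1 for the side to move under perfect play."""
    if board.is_checkmate():
        return -1                               # side to move has been mated
    if (board.is_stalemate() or board.is_insufficient_material()
            or board.can_claim_draw()):         # treat claimable draws as draws
        return 0
    best = -1
    for move in list(board.legal_moves):
        board.push(move)
        best = max(best, -game_value(board))    # plain negamax over the game tree
        board.pop()
        if best == 1:                           # a forced win can't be improved on
            break
    return best

# game_value(chess.Board()) would, in principle, say whether chess is a win,
# draw, or loss for White. In practice, see "lifetimes of the universe" below.
```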

I’m baffled how this is at odds with what I said.

There’s generating the equations and then there’s solving them. Generating them (i.e. writing them down) is easy. It is purely mechanical. Solving them is the “How many lifetimes of the universe are you willing to wait?” part.

Can you be more specific about what is involved in generating the equations? Chronos said “in order to find the equation, all you’d need is to have an exhaustive program which would eventually find the perfect play”, which is the same thing I said: “in order to find the appropriate Diophantine equation you would have to solve the problem, ie run an exhaustive search like Septimus’ program.” Maybe I’m missing the distinction between the two above quotes.

Generating the equations doesn’t require you to actually run the exhaustive search program; just to write it (and then “compile” it into equation language).

Solving the equations amounts to running the program.

Ah, OK. I didn’t get that from the way the others phrased it. Thanks for the clarification.

It’s been argued that Go is much harder for computers to deal with than chess.

It is harder, simply by virtue of the much larger search space: Go is played on a 19×19 board with a branching factor of roughly 250 legal moves per position, compared with about 35 in chess.
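Back-of-the-envelope, using the usual rough figures (about 35 legal moves over roughly 80 plies for chess, about 250 moves over roughly 150 plies for Go; ballpark estimates, not exact values):

```python
# Rough game-tree sizes from the standard ballpark branching factors and game lengths.
chess_tree = 35 ** 80     # roughly 10^123
go_tree = 250 ** 150      # roughly 10^359
print(f"chess ~10^{len(str(chess_tree)) - 1}, go ~10^{len(str(go_tree)) - 1}")
```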