Has Deep Blue gotten deeper in the last few years?

That’s an interesting fact, since it’s false. One may argue that a quantum computer in the future could play perfect chess.

One may not, however, argue that all possible chess moves could be stored as data. I have read that if the entire universe were packed solid with protons, each of which performed (like a transistor) one calculation per second (or something along those lines), it would take until the end of the universe to calculate every possible chess move. Likewise, one could not store all the data without turning the entire universe into one big hard disk.
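To put rough numbers on that claim, here is a quick back-of-envelope sketch in Python. The figures are commonly quoted order-of-magnitude estimates that I am assuming here, not anything from the posts above: roughly 10^120 positions in the chess game tree (Shannon’s estimate) and roughly 10^80 protons in the observable universe.

```python
# Back-of-envelope sketch of the "universe packed with protons" claim above.
# All figures are commonly cited order-of-magnitude estimates (assumptions),
# not exact values.
game_tree_size = 10**120       # Shannon's rough estimate of chess game-tree complexity
protons = 10**80               # rough number of protons in the observable universe
calcs_per_proton_per_sec = 1   # one "calculation" per proton per second, as in the post
seconds_per_year = 3.15e7

years_needed = game_tree_size / (protons * calcs_per_proton_per_sec * seconds_per_year)
print(f"{years_needed:.1e} years")   # on the order of 10**32 years
```

Even with every proton computing flat out, the answer comes out around 10^32 years, which is the poster’s point: brute force alone is off the table.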

To Trunk–we are not in disagreement. I merely was pointing out that creating an unbeatable chess program is not a matter of brute force computational power.

If you can do the math, I’ll buy it. Where did you read this?

I took the following “recent computer chess history” from a site devoted to that issue: http://www.geocities.com/SiliconValley/Lab/7378/comphis.htm

*From January 26 to February 7, 2003, Kasparov played Deep Junior 7 in New York. The match ended in a draw. Kasparov won game 1. Deep Junior won game 3. The rest of the games (games 2, 4, 5, and 6) were drawn. This was the first time that a man/machine competition was sanctioned by FIDE, the World Chess Federation. Deep Junior took Tel Aviv programmers Amir Ban and Shay Bushinsky 10 years to program. It can evaluate 3 million moves a second and search positions 15 moves deep.

On November 11-18, 2003, Kasparov played X3D Fritz in New York. The match was tied 2-2. Fritz won the 2nd game. Kasparov won the 3rd game. Games 1 and 4 were drawn. It was the first official world chess championship in total virtual reality, played in 3-D.

The 11th World Computer Chess Championship was held in Graz from November 22 to November 30, 2003. It was won by Shredder after a play-off with Deep Fritz.

In 2003 the top chess computers were Shredder 7.04 (2810), Shredder 7.0 (2770), Fritz 8.0 (2762), Deep Fritz 7.0 (2761), Fritz 7.0 (2742), Shredder 6.0 (2724), and Chess Tiger 15.0 (2720).

The 12th World Computer Chess Championship will be held at Bar-Ilan University in Ramat-Gan, Israel from July 4 to July 12, 2004. *
This information, along with the above discussion, has caused me to ponder the following follow-up questions/ideas:

  1. I think “clock speed” may matter. I know that with the “old Fritz” computer game I used to own, letting the computer “think” while I moved (an option that could be turned on or off) resulted in a better game from the computer. Thus, if time is a factor for the computer, wouldn’t the clock speed of the processor be important?

  2. "Deep Junior took 10 years to program by Tel Aviv programmers Amir Ban and Shay Bushinksy. It can evaluate 3 million moves a second, and positions 15 moves deep. " Three million moves per second while fast, doesn’t seem that fast. I think there are desk top machines out there right now that can go considerably faster than this (however, pehaps a distinction should be made between raw processor speed, and being able to evaluate a move, since evaluating one move might entail many operations).

  3. Someone above suggested a “distributed approach” (see the sketch after this list). Didn’t SETI offer something like this a couple of years ago? I even “downloaded” a program from Art Bell’s website at the time which supposedly assisted in this process. Now with chess it would be a little more difficult, since the analysis is “real time”. However, this could be taken into account by limiting the processing “distribution” to those computers with high-speed Internet access.

  4. I wonder if some computer programs (Shredder, for instance) might play better against other computers, while others, such as Deep Junior, do better against human opposition (on average). Thus, I wonder whether Deep Junior beats (or ties) Kasparov, while Kasparov beats Shredder, even as Shredder tops Deep Junior. In addition, I wonder if so-called “neural net” AI research has any implications for chess programs. If it does, the distributed approach would seem to be ideal for advancing/testing it.
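On the distributed idea in item 3, here is a minimal sketch of the crudest possible way to split a chess search across workers: give each legal root move to a separate process and keep the best score that comes back. It assumes the python-chess library; evaluate() is a stand-in material counter, and real distributed engines (and real evaluation functions) are far more sophisticated than this.

```python
# Minimal sketch of splitting the root moves of a search across worker
# processes.  Assumes the python-chess library; evaluate() is a crude
# stand-in, not how Deep Junior or Fritz actually evaluate positions.
import chess
from multiprocessing import Pool

PIECE_VALUES = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
                chess.ROOK: 5, chess.QUEEN: 9, chess.KING: 0}

def evaluate(board):
    """Material count from the side-to-move's point of view."""
    score = 0
    for piece in board.piece_map().values():
        value = PIECE_VALUES[piece.piece_type]
        score += value if piece.color == board.turn else -value
    return score

def negamax(board, depth):
    """Plain fixed-depth search; every worker runs its own copy."""
    if depth == 0 or board.is_game_over():
        return evaluate(board)
    best = float("-inf")
    for move in board.legal_moves:
        board.push(move)
        best = max(best, -negamax(board, depth - 1))
        board.pop()
    return best

def search_root_move(args):
    """One worker handles one root move and reports the resulting score."""
    fen, move_uci, depth = args
    board = chess.Board(fen)
    board.push(chess.Move.from_uci(move_uci))
    return move_uci, -negamax(board, depth - 1)

if __name__ == "__main__":
    board = chess.Board()
    jobs = [(board.fen(), move.uci(), 3) for move in board.legal_moves]
    with Pool() as pool:                       # stand-in for a network of machines
        results = pool.map(search_root_move, jobs)
    best_move, best_score = max(results, key=lambda r: r[1])
    print(best_move, best_score)
```

Splitting at the root like this wastes a lot of work compared with smarter schemes, but it shows why high-speed links matter: each worker only needs the position and a move, and only sends back a score.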

Yes, and bumblebees can’t fly and kangaroos can’t jump :rolleyes:. The assumption that the branching factor at every level is going to be 20 is not based in reality. It’s like saying that if you tried moving a pile of sand grain by grain from one place to another, it would take 3 million years, therefore it’s impossible. Already, aggressive pruning algorithms are making the search space a lot more tractable than before. Throw in a super-cluster of a few million PCs using a simple distributed system and you can suddenly search 5-7 moves further ahead, which is by no means trivial.
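To make the pruning point concrete, here is a toy sketch (my own illustration, not anyone’s actual engine) comparing plain minimax with alpha-beta pruning on an abstract tree with a branching factor of 20 and depth 4. The leaf scores are made up; the point is just that alpha-beta visits a small fraction of the 160,000 leaves while returning the same answer.

```python
# Toy illustration of how alpha-beta pruning shrinks the search, compared
# with plain minimax, on an abstract tree (branching factor 20, depth 4).
# The leaf values are arbitrary; only the visit counts matter here.
import random

BRANCHING, DEPTH = 20, 4
visited = {"minimax": 0, "alphabeta": 0}

def leaf_score(path):
    """Deterministic pseudo-random value so both searches see the same tree."""
    return random.Random(hash(path)).uniform(-1.0, 1.0)

def minimax(path, depth, maximizing):
    if depth == 0:
        visited["minimax"] += 1
        return leaf_score(path)
    children = [minimax(path + (i,), depth - 1, not maximizing) for i in range(BRANCHING)]
    return max(children) if maximizing else min(children)

def alphabeta(path, depth, alpha, beta, maximizing):
    if depth == 0:
        visited["alphabeta"] += 1
        return leaf_score(path)
    if maximizing:
        value = float("-inf")
        for i in range(BRANCHING):
            value = max(value, alphabeta(path + (i,), depth - 1, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:   # the minimizer already has something better: prune
                break
        return value
    value = float("inf")
    for i in range(BRANCHING):
        value = min(value, alphabeta(path + (i,), depth - 1, alpha, beta, True))
        beta = min(beta, value)
        if alpha >= beta:       # the maximizer already has something better: prune
            break
    return value

assert minimax((), DEPTH, True) == alphabeta((), DEPTH, float("-inf"), float("inf"), True)
print(visited)   # alpha-beta examines far fewer of the 20**4 = 160,000 leaves
```

With good move ordering, alpha-beta roughly square-roots the work of a full search, which is why engines effectively see much deeper than a naive branching-factor-of-20 estimate suggests.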

For all of you Star Trek fans out there: would the stated capabilities of “Data” have been sufficient to always beat or tie Grandmasters like Karpov and Kasparov?

It depends a lot more on the software than the hardware. It would not be physically possible to compute a perfect game within the life of the universe so it all comes down to how good you are at avoiding pursuing crappy game moves.

I think it would probably mean that it would always draw without fail when playing white. The least material needed to win is a king and pawn against king, but the winning king must be ahead of the pawn and able to take the opposition on the sixth rank to advance the pawn. Otherwise, king and pawn against king is a draw.

It’s hard to say. Certainly, a perfect chess computer could always guarantee at least a draw when playing white. I said that I suspect that it could always win, because the conditions for a stalemate in chess are more esoteric than those for a checkmate. But as you point out, insufficient material is also a draw condition in chess, and one which might plausibly occur in perfect play (of course, for perfect play, we need not consider draws by mutual agreement). So I don’t know which is the case.

As for Aeschines’ calculations, ten moves is awfully deep. The original Deep Blue saw six moves deep, and that was enough to make it a major contender at the highest levels of play. And in principle, if you look deep enough, an “understanding” of positional value and the like will emerge naturally, though it’s hard to say how deep is “deep enough”.

As for memory requirements making a perfect classical computer unfeasible: not necessarily. If you have enough processor speed to waste (note: this would be one holy heck of a lot), and you looked at moves in some logical order, you could in principle perfectly calculate chess moves at a memory cost scaling with (and not much larger than) the depth to which you’re looking. Basically, this would consist of storing the move sequence for your suspected optimal path, and comparing each new path to that optimal one as you explore it. You don’t need to store all your non-optimal paths, since you’re interested only in the best one.
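Here is a minimal sketch of what I mean, under the stated assumptions (unbounded processing time, bounded memory). It uses the python-chess library; the fixed depth and the trivial win/draw/loss scoring are placeholders for “search until mate”, and only the current line (the push/pop stack) plus the best line found so far are kept in memory.

```python
# Sketch of a depth-first search that stores only the current line and the
# best-so-far line, so memory grows with search depth rather than with the
# number of lines examined.  Assumes python-chess; the fixed depth and the
# win/draw/loss scoring are placeholders for a true search to mate.
import chess

def terminal_score(board):
    """From the side-to-move's point of view: -1 if mated, else treat as a draw."""
    return -1 if board.is_checkmate() else 0

def best_line(board, depth):
    """Return (score, line).  Only one line per recursion level is ever stored."""
    if depth == 0 or board.is_game_over():
        return terminal_score(board), []
    best_score, best_moves = -2, []            # -2 is below any reachable score
    for move in board.legal_moves:
        board.push(move)                       # the current line lives on this stack
        child_score, child_line = best_line(board, depth - 1)
        board.pop()
        if -child_score > best_score:          # keep only the best line found so far
            best_score, best_moves = -child_score, [move] + child_line
    return best_score, best_moves

score, line = best_line(chess.Board(), 3)
print(score, [move.uci() for move in line])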

I think it was Alekhine (after defeating Capablanca) who, when asked how many moves ahead he thought, answered, “One move.” “One move!?” was the incredulous reply. “But Capablanca said he thinks 11 moves ahead. How were you able to beat him?”

“I always think of the best move.”

OK, a lot has been said about how computers have been programmed to play chess (and this ought to be well understood, as the approaches have, for the most part, been explicitly designed), but how does a human do it? How does the human mind organise the information required to play a really stunning game of chess? Is it possible to describe or quantify the processes?

Some aspects of positional chess have already been integrated into computer programs. I would bet that programs like Fritz 8.0 and Shredder have their 2700+ ratings due in large part to their “grasp” of positional elements. Most good chess players think in terms of:

  1. Development- All things being equal, you want to have your pieces in positions where they control more squares.

  2. Time- You want to develop faster than your opponent.

  3. Space- You want to control lots of it while at the same time cramping your opponent.

  4. Harmony- Your pieces should work together, reinforcing one another where possible and exerting a synchronous influence against your opponent.

Certain principles can be programmed into a computer more easily than others. Computers, even in the 1970s, “understood” that it is generally better to develop Knights before Bishops, and to put Rooks on open files when possible (ceteris paribus). The bottom line is that the best computers probably play better positional chess than 99% of the players out there. It’s just that they’re trying to beat opponents like Kasparov, who sit in the top 0.0001% of chess players.
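For what it’s worth, here is a rough sketch of how rules of thumb like the four above get folded into a single evaluation number. The specific terms and weights are things I’m making up for illustration (development = minor pieces and rooks off the back rank, space = squares attacked, plus a bonus for rooks on open files); they are not how Fritz or Shredder actually weigh things. It assumes the python-chess library.

```python
# Illustrative sketch of turning positional rules of thumb into one number.
# Assumes python-chess; the terms and weights are invented for illustration,
# not taken from Fritz or Shredder.
import chess

BACK_RANK = {chess.WHITE: 0, chess.BLACK: 7}

def development(board, color):
    """Minor pieces and rooks that have left the back rank."""
    developed = 0
    for square, piece in board.piece_map().items():
        if piece.color == color and piece.piece_type in (chess.KNIGHT, chess.BISHOP, chess.ROOK):
            if chess.square_rank(square) != BACK_RANK[color]:
                developed += 1
    return developed

def space(board, color):
    """Squares attacked by the given side: a crude mobility/space term."""
    return sum(len(board.attackers(color, sq)) for sq in chess.SQUARES)

def rooks_on_open_files(board, color):
    """Rooks standing on files with no pawns of either colour."""
    count = 0
    for square in board.pieces(chess.ROOK, color):
        file_squares = chess.SquareSet(chess.BB_FILES[chess.square_file(square)])
        if not any(board.piece_type_at(sq) == chess.PAWN for sq in file_squares):
            count += 1
    return count

def positional_eval(board):
    """Positive favours White; the weights are arbitrary for this sketch."""
    score = 0
    for color, sign in ((chess.WHITE, +1), (chess.BLACK, -1)):
        score += sign * (5 * development(board, color)
                         + 1 * space(board, color)
                         + 10 * rooks_on_open_files(board, color))
    return score

print(positional_eval(chess.Board()))   # the symmetric starting position scores 0
```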

That reminds me of something Bobby Fischer said to Dick Cavett: “When I have the white pieces, I win because I’m White. When I have the black pieces, I win because I’m Fischer.”

This is not proven, and probably never will be, even though it’s very widely believed. There are correspondence chess players who haven’t lost a game with white in like 30 years.

BUT, it’s possible that in the opening position, white is in zugzwang. Yes, it’s doubtful. Very doubtful. I’m just being nitpicky here when I say that it’s possible that perfect play by both sides results in a black win.

True, 10 or 6 moves is deep. One of the computer’s strengths is that its search is deep, and ACCURATE. Humans, along what we call forced lines, can go much further than 6 moves. The thing is, the computers could too, but they don’t recognize the line as forced. They’re still calculating side variations that a human wouldn’t consider. This is a strength and a weakness.

As I mentioned before, in the last Kasparov match, he played a game against the computer where the position clearly showed that white would queen a pawn or win major material. But, it was like 20 moves away. The computer didn’t know it until it was like 8 moves away (this is from memory – since we’re not really debating I’m not digging up the logs). What will happen to avoid this catastrophe again is that a) they’ll learn not to go into the opening that led to that position, OR b) they’ll have a better positional understanding so they won’t go into that position.

The problem with a computer is that it doesn’t know the paths are non-optimal until it explores them to the end. So, yes, it does have to explore the non-optimal paths.

We have completely solved all positions with 5 pieces on the board (that includes the kings, so really just 3 pieces besides the kings).

We’re not even close to 6. These problems grow insanely complex.
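For anyone curious what using those solved endgames looks like in practice, here is a minimal sketch of probing an endgame tablebase with the python-chess library and Syzygy tables (a later generation of the same idea; the directory path is a placeholder you’d point at downloaded tablebase files).

```python
# Sketch of probing a precomputed endgame table (the "completely solved"
# positions mentioned above).  Assumes the python-chess library and a local
# directory of Syzygy tablebase files; the path is a placeholder.
import chess
import chess.syzygy

with chess.syzygy.open_tablebase("/path/to/syzygy") as tablebase:
    # King and pawn vs. king: well inside the solved range.
    board = chess.Board("8/8/8/8/8/2k5/2P5/2K5 w - - 0 1")
    wdl = tablebase.probe_wdl(board)   # >0: side to move wins, 0: draw, <0: loses
    dtz = tablebase.probe_dtz(board)   # distance to a pawn move or capture (50-move rule)
    print(wdl, dtz)
```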

Anyway, to hand it off to people who know more than anyone in this thread, here’s one link to rec.games.chess.computer. Much more can be found there for anyone really interested.

I stand corrected on the white advantage. I thought I had put in all the necessary weaseling, but you’re correct: it’s not proven that white has the advantage in perfect play.

And I acknowledge that every line would need to be explored in perfect play (unless we find some miracle algorithm like that for Nim, allowing perfect play without exploration), but you don’t need to store every line, only the best-so-far and the current one. I’m assuming unlimited depth here (that is, keep looking until you reach mate), and unlimited processing power (but not unlimited memory).

Some things that have not been mentioned are instinct and experience, and no machine will ever acquire those, unless you get into AI, which would encompass experience. Sacrifices are often based on positional considerations and instinct, without the ability to calculate whether they are sound or not, something a machine would never do. Just to give two common examples: a rook may be sac’d for a knight (and possibly a pawn as well), and a rook may be sac’d for a bishop. In the former example, a theme common in many variations of the Sicilian Defense, Black makes this sac to control the center. In many cases, Black may not be able to calculate the many moves ahead, with all their variations, to justify the sac, but instinct and experience tell him it’s the right move. In the latter case, a side may allow his/her rook to be taken by a fianchettoed bishop, if the bishop was fianchettoed on the kingside and he/she still has the bishop that runs on the same color squares as the fianchettoed bishop. The one who sacrifices realizes that he will have good play against the squares weakened by the absence of that bishop. This is a common theme in some variations of the Gruenfeld Defense.

Those types of sacrifices are rarely given a Brilliancy Prize. Those are reserved for truly great sacrifices, ones involving much more material. One of the most famous of these was played by Adolf Anderssen in the so-called “Immortal Game,” in which he sac’d practically all of his material to deliver mate. Years later, computer analysis showed that there was a flaw in his combination. But it took a computer to find it. Many combos may have flaws, but to find them over the board by mere mortals is not easy.

So, to reiterate, GMs rely a great deal on instinct and experience, since some combos have so many variations that no mortal can analyze all of them very deeply over the board. Attacking players are more likely to do this, whereas defensive players will seldom sac without a clear path to an advantage. Tal was one of the greatest attacking players of modern times. His sacs were often not sound, but he won the greater percentage of them, as he was not playing a computer.

Hate to burst your bubble, but chess programs are AI. (Full disclosure: I’m a software engineer and a chess player, and have studied AI to some extent.) I think you’re confusing AI with artificial sentience.

Refining the opening book, fixing bugs, and adjusting evaluation heuristics for the program is experience. I’m not convinced that instinct on the chess board is more than unconscious pattern-matching (which is just another term for experience).

Certainly experience is a big factor in intuition, but I don’t think it’s the only factor. Other factors include the position as a whole and the dynamics of the board. Sacrificing the exchange to obtain the center, and possibly working on the squares weakened by the absence of the captured bishop, are some of the dynamic and positional considerations. A computer would never do that unless it can “see” a clear advantage. Many GMs will do it on the basis that “it’s got to be correct.” Or sometimes, if they are playing for a win, to unbalance the board.

Top players also use psychology. A GM may come up with a theoretical novelty (TN) against a certain opponent to either take him by surprise or to get him out of a type of position he likes. For example, Kasparov, against X3D Fritz, played a closed position which is considered inferior but is not the type of position computers excel at. Fritz wound up making meaningless, time-consuming moves, and lost.