I remember a few years back when Grandmaster Garry Kasparov was defeated by IBM’s Deep Blue. Has computer AI as it relates to chess improved significantly since that time? I would think that we might soon see the day when even the best human GMs almost never defeat a good computer program. Years ago when I was “into” chess I had a computer program called “Fritz” which I ran on a 486 with 2 megabytes of RAM. I have often wondered how much better this program would run on my current 2.8 GHz Pentium IV (yes, it has that Hyper-whatever technology that supposedly does something) with around 800 MB of RAM! Surely the best “supercomputers” of today are exponentially faster than even Deep Blue of a few years back. I even heard a physicist lecture on something termed “quantum computers” which will be many orders of magnitude faster than current computers. I wonder whether people will even bother to play chess once computers display absolute supremacy.
It’s more a matter of software than hardware. The resources needed for a computer to match a human at chess involve much more than gigahertz alone. From an article on IBM research entitled “The Making of a Chess Machine”
Wikipedia has an interesting summary article on the history of computer chess.
I didn’t check those links so they probably answer this better than I can. . .
“Deep Blue” itself was just a chess playing computer.
The chips and hardware were designed to perform specific tasks necessary to play chess. After the match, the computer was basically disassembled.
Now, Kasparov is playing computers that have like 4 chips with “off-the-shelf” software. I don’t know if the software is exactly the same as what’s “off-the-shelf” but it’s close.
I think some of the positional understanding built into the software has been improved but there are still shortcomings, as a recent Kasparov game demonstrated (vs. Deep Junior).
Computers running something like Fritz are still very good, and I think they’re the best at 1-minute chess. You can see this at a site like the Internet Chess Club, where many computers have ratings over 3000 at bullet and blitz.
As for quantum computers, it’s misleading to call them “fast” without any qualifiers. Quantum computers, especially the early-generation ones, are expected to take a fairly long time (by computer standards) to perform each operation, but they can perform a great many operations simultaneously. So if your problem has a large number of parts which can be performed simultaneously, then a quantum computer might be worthwhile, but if you need to do your steps one after another, then it’s not so great. That said, it’s conceivable that a quantum computer might someday be able to solve chess, which is to say, play a perfect game (whether perfect play means a forced win for white or only a draw is itself an open question).
However, the current state of the art in quantum computing is a machine which can factor the number 15. So don’t hold your breath.
The creation of a chess computer that can beat a grandmaster every game, or even win every match (6 games) is unlikely to happen soon.
There are two reasons I say this: processing power is still lacking, and humans continue to get better at beating computers.
I’m no expert on the state of the art in chess computing by any means, but the mathematical nature of the problem is clear: in order to see just one more move deep, you need several times the computing power. Now, modern chess software tries to get around this problem by focusing only on the moves that really have a chance of success, but this method can in turn create weaknesses exploitable by humans. If a grandmaster can see a weakness 10 moves deep that the computer is not programmed to see, even if it was just luck that he noticed this particular move, then he is going to win.
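To put rough numbers on that, here is a back-of-the-envelope sketch in Python. The branching factor of 35 is just a commonly quoted ballpark for middlegame positions, not a measured figure:

[code]
# Rough illustration of why each extra ply (half-move) is so expensive.
# Assumes roughly 35 legal moves per position, a commonly quoted ballpark;
# the real number varies a lot over the course of a game.
BRANCHING_FACTOR = 35

for depth in range(1, 11):
    nodes = BRANCHING_FACTOR ** depth
    print(f"depth {depth:2d} plies: ~{nodes:.2e} positions to examine")
[/code]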
Another thing about the second Deep Blue vs. Kasparov match: three games were drawn, DB won two, and Kasparov won one. That much information alone is enough to know that Kasparov was by no means blown away by the computer.
But there is more. It became clear from later analysis that Kasparov gave up too soon in one of the games; he could have played to a draw. Now, of course, it was his fault that he didn’t notice this, but it’s something to think about.
One of the other games was truly controversial, however, and it pretty much came down to cheating on the part of the DB team (depending on how you look at it). In that game, Kasparov either won or drew against DB when DB as black went into the exchange variation of the Ruy Lopez (generally thought to be a somewhat weak choice, but not an outright error). The DB programmers later changed DB’s programming so that DB avoided the exchange variation and either won or drew the game.
IMO, the computer should enter the match with a set program and stick with it. I am not sure what the rules were, so it’s debatable whether this was cheating or not, but I and others have thought of this as playing dirty. It’s equivalent to a human nixing the computer’s move in the middle of a game because he knows it’s going to lead to trouble.
You also have to know that these computers are NOT merely computing moves based on the human player’s actions. No way! They start out with tremendous opening libraries, which they use without using AI at all. This is equivalent to a human playing with a book of openings–a big no-no. Computers really don’t start thinking until the game goes “out of book.” Also, the software is specially designed to beat the grandmaster opponent it faces.
Aeschines has it right. DB, and all chess playing computers, work on a tree-pruning algorithm: They look a few moves ahead, generating all legal moves, and then ignore those paths which lead to the lower scores by the end of the scenario. And, as Aeschines said, looking just a single move deeper requires an order of magnitude more computation, simply because of all of the possible configurations you can generate from any nontrivial configuration.
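For the curious, the textbook version of that “look ahead, score, and prune” idea is minimax search with alpha-beta pruning. The sketch below is just that textbook skeleton run on a made-up toy tree, not anything resembling Deep Blue’s actual internals:

[code]
# Minimal sketch of "generate moves, score the leaves, prune bad branches":
# plain minimax with alpha-beta pruning over a toy game tree. A real engine
# adds move ordering, extensions, hashing, and a huge evaluation function.

def alphabeta(node, depth, alpha, beta, maximizing, children, score):
    kids = children(node)
    if depth == 0 or not kids:
        return score(node)                       # static evaluation at a leaf
    if maximizing:
        best = float("-inf")
        for child in kids:
            best = max(best, alphabeta(child, depth - 1, alpha, beta,
                                       False, children, score))
            alpha = max(alpha, best)
            if alpha >= beta:                    # prune: this branch can't matter
                break
        return best
    else:
        best = float("inf")
        for child in kids:
            best = min(best, alphabeta(child, depth - 1, alpha, beta,
                                       True, children, score))
            beta = min(beta, best)
            if beta <= alpha:
                break
        return best

# Toy "game": interior nodes are tuples of subtrees, leaves are integer scores.
toy_tree = ((3, 5), (6, 9), (1, 2))
children = lambda n: n if isinstance(n, tuple) else ()
score = lambda n: n
print(alphabeta(toy_tree, 4, float("-inf"), float("inf"),
                True, children, score))          # -> 6
[/code]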
Anyway, the basic algorithm used by DB is embarrassingly parallel: That is, it can be sped up by many orders of magnitude if you use hardware capable of doing many things at once. This is because the generation and scoring of any given chain has nothing to do with the generation and scoring of any other chain. When you go to compare scores, you can compare them two at a time in parallel until you get down to the last chain standing, which you then implement.
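A toy illustration of that point, assuming nothing about Deep Blue’s real architecture: since each root move’s subtree can be searched independently, the work splits cleanly across worker processes. The search function here is a made-up stand-in:

[code]
# "Embarrassingly parallel" in miniature: score each root move's subtree in a
# separate process. search_subtree just burns some CPU and returns a made-up
# score; a real engine would search the tree below that move.
from multiprocessing import Pool

def search_subtree(root_move):
    total = 0
    for i in range(200_000):
        total = (total + i * len(root_move)) % 9973
    return total

if __name__ == "__main__":
    root_moves = ["e4", "d4", "c4", "Nf3", "g3"]
    with Pool() as pool:
        scores = pool.map(search_subtree, root_moves)   # one subtree per worker
    best = max(zip(root_moves, scores), key=lambda ms: ms[1])[0]
    print(dict(zip(root_moves, scores)), "best:", best)
[/code]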
That said, any non-parallel computer (and no, the Pentium doesn’t qualify as a parallel computer in this context, I don’t care what Intel says about its hyperthreading hardware) will do just about as well as any other non-parallel computer with the same algorithm. Neither RAM nor clock speed nor MIPS will make the computer a better player, only a faster one. And a 486 can probably make moves faster than you can think anyway.
So, essentially a group of people team up to beat a GM, collectively using a computer as a supplement and means to the end.
Come, now. Some of the algorithms (and, to a lesser extent, the hardware) are usable in other ways. Chess is simply a high-publicity, high-funding means to an end, like nuclear weapons were for high-energy physics.
And with chess, the only men who die are plastic.
I wasn’t knocking the purpose of the events (comp versus human). I love the whole spectacle.
It’s just that after reading about them it seems more like a bunch of people working against a GM, while the GM doesn’t get the same chance to prepare for playing the computer and its programmers. There is a bit more mystery the GM has to overcome.
Be careful about this; computing power tends to go up embarrassingly fast, and many people get caught with their pants down when they make this sort of proclamation. Besides, I could easily envision some sort of distributed, SETI-like program that would easily look far further ahead than any human could manage.
I think the consensus is that while comps get ‘faster’, the limitation is in the software design, and merely thinking ahead x number of steps farther doesn’t solve all the problems or inherently make the program and computer ‘better’.
The algorithms used in chess-playing computers haven’t fundamentally changed in decades. It’s just the speed of the computers that’s making the difference.
Perhaps I should have spell checked :smack:
Yes, this is what happens. The programmers will focus on the openings the GM tends to play, his playing style, etc. They will tailor the program for the match.
GMs do similar things: they get together with coaches and other chess players before a match and prepare heavily so as to beat the opponent. If the opponent is a computer, they will try to anticipate weaknesses and playing styles, develop strategies, etc.
So it’s really a team vs. a team. I do think, however, that the machine’s programming should not be changed once the match starts. In any event, the program is a mixture of AI and human intelligence. It starts off that way, but human intelligence should not be added once the match starts.
What do you mean by decades? 3 decades? How far back do you go to say the algorithms haven’t fundamentally changed?
And what do you mean by “algorithms” in this case? Do you mean their evaluation functions? Or do you mean how they do the alpha-beta pruning or what?
Not recently. That was more true of the Deep Blue match, I think, where Kasparov got the past games from a slightly different machine.
In his most recent matches with Deep Junior, I think he had at least a year’s worth of games. He and his team spent a lot of time preparing for the specific software he was playing.
I’ve worked in the semiconductor industry for about 2 years now. Sure, processing power keeps going up. This isn’t really an issue for chess computing, however. It’s not as though they are only allowed one chip to do the computing with. They use multiple processors. They can fill a whole gymnasium with them if they choose.
There are 16 chess pieces on a side, and white has 20 legal moves his first turn and many, many more his second. You might go several moves before pieces are exchanged, thereby simplifying the situation.
So let’s just say that to look just 10 moves deep (a move in chess includes both white and black taking a turn), you will need to look at 20[sup]20[/sup] combinations. That comes out to about 1.05E+26. Now let’s be nice and say that each combo can be analyzed with just one floating point operation on the part of the machine. Assuming a teraflop (10[sup]12[/sup] floating point operations per second), that comes out to 3,325,000+ years!
A chess game is typically 1.5 hours for each side. Supposing that one chip could do a teraflop, in order to analyze 10 moves deep in 1.5 hours, you would need about 19.4 billion chips. If each chip cost a penny, the total cost would be $194 million. If each chip took up a cubic millimeter, and they were packed perfectly close together, they would fill about 19.4 cubic meters, a cube roughly 2.7 m on a side.
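If you want to check the arithmetic, a few lines of Python reproduce the figures above (same assumptions: 20 choices per ply, one floating point operation per position, a teraflop per chip):

[code]
# Reproducing the back-of-the-envelope numbers above.
positions = 20 ** 20                        # 20 choices per ply, 20 plies deep
flops_per_second = 10 ** 12                 # one teraflop per chip
seconds_needed = positions / flops_per_second
print(f"{positions:.3e} positions")                           # ~1.05e+26
print(f"{seconds_needed / (3600 * 24 * 365):,.0f} years")     # ~3.3 million
chips = seconds_needed / (1.5 * 3600)       # chips needed to finish in 1.5 h
print(f"{chips:.3e} chips")                                   # ~1.94e+10
print(f"${chips * 0.01 / 1e6:,.0f} million at a penny each")  # ~$194 million
print(f"{(chips * 1e-9) ** (1 / 3):.1f} m cube at 1 mm^3 per chip")  # ~2.7 m
[/code]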
Now, for all I know, they are making chips that can do much better than a teraflop. But the above calculations should give you an idea of the scale of the problem. To avoid doing all these calculations, the programs are forced to use HUGE opening libraries. This is not AI–this is history combined with human analysis of the positions. A computer can play, for example, the Ruy Lopez for 20 moves without doing a single calculation. This saves LOTS of time, which, as per the above, is in short supply. This also prevents the computer from doing a major frick-up. At the same time, there is no way for the computer to explore new territory. The human GM can go “out of book” early in the game and force the computer to calculate. This is what Kasparov did in game 1 of match 2, and he won easily. As to why the GMs don’t do this every time, I’m not sure. Out of book openings tend to be weak against fellow GM opponents, so they are unlikely themselves to have studied them much.
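Here is a cartoon of that “book first, think later” behavior, with a tiny made-up book standing in for the real libraries:

[code]
# The computer doesn't "think" until the game leaves its opening book.
# This book is a made-up three-entry fragment, not a real engine's library.
OPENING_BOOK = {
    (): "e4",
    ("e4", "e5"): "Nf3",
    ("e4", "e5", "Nf3", "Nc6"): "Bb5",       # heading into the Ruy Lopez
}

def run_search(moves_so_far):
    # Stand-in for the expensive tree search discussed above.
    return "<result of a deep search>"

def choose_move(moves_so_far):
    reply = OPENING_BOOK.get(tuple(moves_so_far))
    if reply is not None:
        return reply                         # instant book move, zero search
    return run_search(moves_so_far)          # out of book: now it has to think

print(choose_move(["e4", "e5"]))               # book reply: Nf3
print(choose_move(["e4", "c5", "Nf3", "d6"]))  # out of book: falls to search
[/code]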
Out of book openings tend to be weak because against GM opposition you generally have to have found an improvement that everybody has previously dismissed. Occasionally you can use an idea against one opponent which would be stronger against him than against others more suited to the sort of positions that result, but generally, unless it is a clear improvement, it will be refuted (i.e. the opposing GM will promptly demonstrate it was NOT an improvement).
Try finding a discovery in any non-artistic field every week or so. Even in World Championship matches it is rare to come up with more than one or perhaps two genuine improvements on book prepared against your opponent.
Out of book every game? Forget it.
As others have observed, style of play is the main thing in playing computers, and Kasparov’s style is not suited to it: he is too aggressive and too tactical. It is a credit to his ability that he can adapt his natural style to beat DB et al. despite this problem. I think GMs will generally continue to have the upper hand, or at least be able to fight the machines to a draw every time, bar the occasional blunder.
BTW I think Fritz is currently on v8. It is more an analytical tool for preparing serious players now than a “game in a box”…
As Philster said, be careful saying this.
Kasparov himself has said there will come a day when getting a single draw against a computer in a 6 game match will be considered a victory. (He might have said a single “win”).
As others alluded to, the issue of speed isn’t necessarily key here. The number of positions to consider grows so fast that even computers 100 times faster than what we have wouldn’t make a huge difference in long games. What the extra speed will do is enable them to play, say, a 1-hour game at the level they currently need 2 hours to reach. But when you get up into the reaches of 3.5-hour games (the time control for big matches), the effect of the extra speed is diminished.
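The arithmetic behind that, as a quick sketch (the branching factors are the usual ballpark figures, nothing measured): if every extra ply multiplies the work by the branching factor, a 100x speedup only buys one or two plies.

[code]
# If each extra ply multiplies the work by the branching factor b, then a
# machine 100x faster only searches about log_b(100) plies deeper.
import math

for b in (20, 35):
    extra_plies = math.log(100, b)
    print(f"branching factor {b}: 100x speedup buys ~{extra_plies:.1f} plies")
[/code]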
Depth is kind of an odd thing. In GK’s last match with DJ, he had a laughable game in which the position told him that he would be able to gain heavy material or queen a pawn – even though it would be many moves away and he surely didn’t see the whole tactical picture. He just understood what would happen. THAT’s the key problem facing computers right now.
How do they get the machines to recognize positional factors like that?
They already know things like “two bishops are better than a knight and a bishop,” more so if the board is open. And when flaws are exposed (like Deep Blue not fully understanding about a trapped rook in a game from ’94 or ’95), the programmers find ways to correct them. The corrections, now, are getting more and more subtle, and maybe the next version understands something this one doesn’t.
So, speed helps, but so does finding ways to make the computer understand the game more and more deeply.
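As a toy example of what “understanding” like the bishop-pair rule looks like inside an evaluation function (the piece values and bonus here are the usual textbook numbers, not any particular engine’s tuning):

[code]
# Toy static evaluation: material plus a small bishop-pair bonus, in the
# spirit of "two bishops beat a bishop and a knight, more so on an open board".
PIECE_VALUES = {"P": 1.0, "N": 3.0, "B": 3.0, "R": 5.0, "Q": 9.0}
BISHOP_PAIR_BONUS = 0.5

def evaluate(white_pieces, black_pieces):
    def side_score(pieces):
        score = sum(PIECE_VALUES.get(p, 0.0) for p in pieces)
        if pieces.count("B") >= 2:           # reward keeping both bishops
            score += BISHOP_PAIR_BONUS
        return score
    return side_score(white_pieces) - side_score(black_pieces)

# White kept the bishop pair, black has bishop + knight: slight white edge.
print(evaluate(["B", "B", "P"], ["B", "N", "P"]))   # -> 0.5
[/code]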
[And, humans are not getting better at beating computers. Some of the old-school ways of doing it are not so effective anymore. Computers are making weird piece sacrifices, strongly positional moves, etc., the stuff they never used to do.]
Having thought about it, I wanted to add to my first comment. At the moment it is more of a software problem, because the hardware needed to support analyzing every possible move from the current position to checkmate is not feasible.
However, in principle, hardware might some day permit us to map out every legal position (most estimates put the count somewhere around 10^44 or more) in every possible game (the game tree is commonly estimated at 10^120 or beyond). At that point we would just need a staggering number of petabytes to store the information, and at best, if you got the first move, it would be possible to win if you were able to also play a perfect game. But we all know that’s not very feasible either.
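For a sense of scale, assuming the commonly cited estimate of roughly 10^44 legal positions and an absurdly generous one byte per position:

[code]
# Why "just store the solved game" is out of reach: even at one byte per
# position, the table comes to around 1e29 petabytes.
positions = 1e44              # rough estimate of the number of legal positions
bytes_per_position = 1        # wildly optimistic
petabyte = 1e15
print(f"{positions * bytes_per_position / petabyte:.1e} petabytes")  # ~1.0e+29
[/code]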