Help. I suck at chess

I don’t remember anyone claiming that computers could never play chess at all, since encoding the basic rules of play isn’t that hard. But indeed, the claim that they could never play a good game persisted for quite a long time, with the definition of “good” constantly being revised upwards, in one of the classic examples of AI skeptics moving the goalposts.

One of the first chess programs, if not the very first, was written for the IBM 704, a vacuum tube machine that was in its heyday in the second half of the 1950s. The 704 was the machine for which the venerable FORTRAN and LISP languages were first developed. But it wasn’t until 1967 that it became clear that computer chess was becoming a force to be reckoned with, when the PDP-6 MacHack program beat the AI skeptic and philosopher Hubert Dreyfus. But again with the moving goalposts: the argument then became that computers could never play at a master level, and then never at a grandmaster level, until, of course, they managed both. Then the argument became that this apparently didn’t demonstrate “real” intelligence – since a mere computer was doing it – despite the fact that the main impetus behind many chess program projects was precisely that the game was regarded as such a challenge, involving some specialized high level of human skill.

I know you know all this, Trinopus, better than I, but I’m just rambling for the edification of those who may not.

A big part of looking ahead is knowing what to look for. You need to understand the opening you are playing so you understand what kind of position you are trying to attain. You need to understand the defense that your opponent is playing and the position he is trying to attain.

Once the opening phase of the game is complete, you must have an understanding of the strategic goals you are trying to reach based on the position you have.

As our chess teacher mentioned, end game play is vital because, when relatively equal players play a match, the end game is very often the difference.

I’m extremely sceptical of that claim. Surely that applies to poker, not chess?

Kasparov is a genius.
He speaks several languages and writes highly-praised chess books.
He gave a four-hour chess master-class to my chess scholars which was completely riveting - and then gave a History lecture at my school which the History Department described as “enthralling and unforgettable.”
He works incredibly hard (he used to take a month off every year just to study openings and his opponents’ games.)
If you held an Internet tournament (where you couldn’t see your opponents), I’d back him!

Understanding your opponent is important because chess players have distinctive styles which lend themselves to certain kinds of openings and positions. For example, some players like quiet, positional situations, and some like wide open, aggressive play.

I always showed best with the latter. I am definitely a 1.e4 player. If you want to stifle and frustrate me, play 1.c4, leading into the very bland, positional English Opening.

I completely agree that applies at beginner and club level.
However things change at national and international level!

I remember being paired (with White :grinning: ) against a Grandmaster in the British Championship, and looking up his 100 most recent games on a chess database. :nerd_face:
Against my preferred 1.e4 he played four different openings equally often. Two were positional and two tactical. :fearful:

Also I had the pleasure of watching GM Morozevich play for my club team in the UK National League. :sunglasses:
In the first round he played an International Master and slaughtered him in a sharp variation of the Sicilian (sacrificing two pieces for a mating attack.)
In the second round he was Black against a fellow GM. It was a quiet variation of the Caro-Kann and soon reached what I thought was a drawn ending (equal pawns, bishop v knight.) Morozevich ground out a win in 60+ moves. :heart_eyes:

In my view, it’s of limited value, but better than not playing at all. The issue is that, assuming you play like most people, you set it to think for 5-10 seconds, and it bashes out a vaguely sensible move. But the move doesn’t have a real point or plan behind it. Then the human eventually makes a dumb mistake, drops a piece, and goes on to lose due to the material deficit. All you really learn is that the computer calculates better than you, which you already knew. You can handicap the program, but then it just occasionally makes what it considers to be the 2nd (or 3rd, or 4th…) best move, and often those inferior moves are just completely nonsensical. For example, I had an engine move a black knight on f6 back to g8 for no useful purpose, just because its programming said it was time for it to make a mistake.

In general, a better use of computer engines is to quickly blunder check your games, to see if you made any obvious errors.

As for how to get better, practice like anything else. Play a lot, collect all your losses (export to PGN), and figure out what specifically caused you to lose that particular game (knight fork… AGAIN…). Look for patterns in your losses, fix those weaknesses, then uncover new weaknesses to fix. If you aren’t sure why you lost, show your game to a player who is much better than you and ask him/her to tell you why you lost. Most chess players are happy to show you how much better they are than you :slight_smile:

I’m unqualified to speak to Kasparov specifically, but this is certainly true in the general case.

And has led to no end of No True Scotsman arguments and fallacies about computer chess, AI chess, and AI in general.

In some tellings popular back in the day, “chess” wasn’t a board, 32 pieces, and some rules. But rather it was some mystical half-magical thing that took place in mind-space. Computers, lacking mind-space, by definition couldn’t “play chess” no matter how effectively they moved the pieces on the board according to the rules to defeat human players.

Here’s a quote from the other current chess thread:

Simplifying a bunch, GM Nunn’s comment that “Nobody rated below 2400 understands chess” says he sees a divide there between uncomprehending piece-movers and understanding chess players.

Well that’s my quote…and I’m afraid I disagree completely with your interpretation.
I don’t expect you meant it, but it came across as insulting. :rage:

‘Uncomprehending piece movers’ only applies to beginners.

There are various stages in chess:

  • beginners
  • weak club players
  • good club players
  • regional players
  • national players
  • FIDE Masters
  • International Masters
  • Grandmasters
  • World class

I reached FIDE Master (and a rating of 2390) and I can assure you I knew a whole lot more about chess than being an ‘uncomprehending piece-mover’.
By that time I’d spent well over 10,000 hours studying and practising chess.
I was an English National Chess Coach and a full-time chess teacher.
I could give simultaneous displays to good club players and score 90%+.

Grandmaster Nunn’s point was that, even at my level, there was more to chess than I knew.

I apologize if you were offended. That wasn’t my intent. Though I can see how you could reasonably take it that way.

The real idea I had was that the nature of the game played at each of the several levels you carefully lay out is different. At 2390 you play an amazing game, in the top tiny percentile. And yet there’s even more room for knowledge and understanding and skill at the top, as Nunn suggests and as you agreed after some consideration.

This was meant to be connected to the idea that how a computer played chess, particularly in the early days, was also seen as a different nature.

And in that way the No True Scotsman could be (and often was) easily applied to those early programs.

i.e. the argument then was that however often a computer wins at chess, it isn’t “playing chess” like a human “plays chess”. And similarly, in my take, however often a skilled club player, say, wins at chess, they aren’t “playing chess” in the same way a GM “plays chess”. Which seemed to me to be one facet of what Nunn was saying.

I may have overegged the custard here a bit. But I assure you I wasn’t trying to be hurtful. It was aimed at talking about claims made against early computer programs. And the vast, perhaps inexhaustible, complexity of chess.

My thinking was that our chess rookie OP is never going to come remotely close to that level of play, so my advice was tailored to that. What you say, though, is very true.

An example is the methodology that served me well at club level: playing the French Defense against 1.e4. Most players at that level are very familiar and booked up with the Sicilian Defense. Many have never even faced the French. It wasn’t uncommon for me to have players in deep thought right off the bat. A Grandmaster would snicker and then annihilate me. LOL

No problem - thanks for the apology.

I think that by the time a player reaches National level, they understand chess pretty well.
After that each level is more a question of ‘refinement’ e.g.:

  • studying and understanding opening to greater depth
  • knowing more about endings
  • seeing a couple of moves further ahead
  • knowing more patterns

I’ve already quoted a Grandmaster who knew four different replies to 1. e4 in depth.

I remember an enjoyable car journey back from the British Championships with two Grandmasters. We discussed Rook and pawn endings with just a few pieces left (say 8 or fewer.) Many of these positions come down to lengthy manoeuvres - and the Grandmasters both knew more and could see further than me.

I also recall another Grandmaster telling me that part of his studies of the Ruy Lopez opening involved whether White played Nbd2 or Na3. He looked at hundreds of games between top players to see how the pawn structure affected the choice. :nerd_face:

I think that strong club players would be prepared for all these replies to 1. e4:

e5 / e6 / c5 / c6 / d6 / d5 / Nf6.

In a Dutch tournament for players of 2200+, I was leading with 7.5 / 8. :heart_eyes:
My last round opponent was an elderly gentleman who was also blind.
I played the French against him, thinking I could have a shortish draw … and start celebrating 1st place.
He crushed me in 25 moves. :fearful:
We analysed the game and then he asked how old I was.
“27.”
“Ah” he replied. “You’re too young to play the French!” :nerd_face:

Grandmasters don’t snicker during games! :wink:

I get the very distinct impression that you’ve hung around and played against players of the strength that I’ve only read about in chess magazines, and I’m sure your “club play” has been at a level I’ve never experienced. I get the feeling that my French Defense technique was successful for me only because I was playing people who knew even less than me. LOL

I’m probably not qualified to judge for myself, but the bit about Kasparov being especially good at reading opponents was something I read in an article somewhere, and what I’ve seen of him is certainly consistent with that. Which is not to say that he wasn’t also good at all of the other skills involved in chess: You don’t get to be World Champion without being very good at all of the skills. His psychological skills were just what put him over the top.

As for the moving AI goalposts, at the time of the first Kasparov-Deep Blue match, the criticism was that the computer “didn’t even know that it was playing chess”. One of the IBM engineers fielded that question, and granted that it was true, but that it would be fairly easy to program the computer to recognize a variety of tasks, and to then identify playing chess as the one that it was currently engaged in, and that the only reason it hadn’t been programmed to do that was because nobody cared.

Yes, I was a very keen amateur player from 13 onwards (when I first joined a club.) I reached 2200 ELO rating level 6 years later.
Having a full-time job in computing meant I never went above 2390 ELO, though I did achieve FIDE Master.
My club team eventually won the London League, the UK National League and played in the European Club Championship.
In my holidays I travelled around Europe playing in international tournaments.
I was always a part-time chess coach, but was lucky enough to go full-time for many years before I retired.
Chess has been very good for me - I’ve met Kasparov, Steve Davis (World Snooker Champion) and appeared on Derren Brown’s TV magic show.

These chess threads have been fascinating. My elementary school had an afterschool chess class, but all they did was show us how the pieces move and that we had to capture the King. Then they set out a bunch of chess boards and let us “play”. We were not well served.

I don’t have the patience or visualization skills to have been anything close to a successful player, but it would have been nice to understand more about the nuances of the game, like to see how some of the more famous games played out and why they were brilliantly played. I love how the game has had the same rules for centuries and people are still finding new strategies.

How do you get to be named to the various master levels?

How good are chess programs at being able to dial in a difficulty level, and how comparable would the actual result be to how real humans at a similar difficulty level would play?

I can imagine it might just be telling the computer to only look X moves ahead, or to spend only X amount of time thinking, or even programming in random chances of mistakes. (The latter is how tic-tac-toe games tend to be programmed. I suspect other solved games would have to do this, too.) I could also see them being restricted on openings, or given “personalities” that would favor certain moves over others out of “instinct.”
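The “random chances of mistakes” idea is simple enough to sketch in a few lines. This is purely illustrative (the function name and `blunder_chance` parameter are made up here, not taken from any real engine), assuming the engine can hand us its candidate moves ranked best-first:

```python
import random

def pick_move(ranked_moves, blunder_chance=0.2):
    """Given moves ranked best-first, occasionally play an inferior one.

    ranked_moves: list of candidate moves, best first.
    blunder_chance: probability of deliberately skipping the best move.
    """
    if len(ranked_moves) > 1 and random.random() < blunder_chance:
        # Deliberate "mistake": pick one of the 2nd, 3rd, ... best moves.
        return random.choice(ranked_moves[1:])
    return ranked_moves[0]
```

Note this reproduces exactly the failure mode described upthread: the “mistake” is chosen with no regard for whether it’s a plausibly human error or a pointless retreat like Ng8.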

I could envision actual AI learning techniques used to try to match human skill at lower levels, but I doubt that has really been attempted. Everyone’s more interested in having their AI do as well as (or better than) the best humans. But it would be an interesting problem to try to train for perfectly lifelike games, doing it more like a chess-only Turing test where human opponents wouldn’t know if they were playing a human or a computer.

The way it’s done differs by program, of course. Older programs like MacHack let you explicitly specify the depth and width of the search tree. Its tournament mode, for instance, was typically set at 15, 15, 9, 9, 7. The five numbers meant it looked at least five plies ahead, and the value of each number was the width of the search tree at that ply. The width limits were tempered by a set of heuristics that would sometimes indicate a need to search more broadly, so except for special conditions they generally represented minima. Of course the skill with which “best moves” were evaluated, and the way the search tree was pruned, were among a great many factors that made one program better than another at the same look-ahead setting.
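A fixed depth with a per-ply width cap can be illustrated with a toy negamax search. This is a sketch of the general idea, not MacHack’s actual algorithm; `evaluate`, `legal_moves`, and `make_move` are hypothetical game-interface callbacks, with `evaluate` scoring a position from the side to move’s point of view:

```python
def search(position, widths, evaluate, legal_moves, make_move):
    """Fixed-depth lookahead with a width cap at each ply, in the spirit of
    MacHack-style settings like (15, 15, 9, 9, 7): len(widths) is the depth,
    and widths[i] is how many candidate moves survive at ply i.
    """
    moves = legal_moves(position)
    if not widths or not moves:
        return evaluate(position)
    # Order moves so those leaving the opponent worst off come first,
    # then keep only the top widths[0] (this ply's width cap).
    moves.sort(key=lambda m: evaluate(make_move(position, m)))
    best = None
    for m in moves[: widths[0]]:
        # Negamax: the opponent's best score, negated, is our score.
        score = -search(make_move(position, m), widths[1:],
                        evaluate, legal_moves, make_move)
        if best is None or score > best:
            best = score
    return best
```

A real program would add the “search wider under special conditions” heuristics mentioned above, and would prune with alpha-beta rather than relying on a hard width cap alone.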

Other programs set the skill level differently. Shredder for Android lets you input a desired ELO rating directly, and figures out its play parameters from that. In that regard I have to say that while Shredder is an excellent program, and changing the ELO rating dramatically changes its playing skill, I think these numbers are a very poor reflection of real-world human playing skill. In particular, I think it greatly overrates its own ELO performance as well as the rating it assigns to its human opponent. It currently has me rated at something just over 1700, for example, which is really quite silly. I know that I currently play at a novice level, and in reality I’m probably below 1000 ELO.

This sounds like just another silly attempt to disparage AI. A capable chess program doesn’t “bash out” a move any more or less than a human player, and claiming that it does so via mere “calculations” is extremely misleading. It’s suggestive of the kinds of claims made in the early days of AI that led to the conclusion that a computer would never be able to play really good chess, because it’s just a “calculator” and the number of permutations of possible chess moves quickly becomes astronomical. But today chess programs routinely win against grandmasters. They don’t do this by “bashing out a vaguely sensible move”; they do it by making extremely good moves, and by executing strategies that outwit those of accomplished world-class players. The minutiae of the computing substrate are just as irrelevant as the minutiae of the neural firings of a human player: if the computer makes a series of moves that are logically coherent and lead the opponent into a losing position, then it was executing a strategy by definition, regardless of how it was achieved – a strategy that can be described, analyzed, and perhaps even learned from.

Beyond all that, chess programs can be excellent training tools, at least for beginners, as I pointed out above, by allowing them to try out variants of different strategies or responses to specific board positions and see how they work out.