If the two players played a textbook game (i.e., a game that’s actually in a book) the supergenius might have a shot. But their lack of experience won’t help when the grandmaster veers off from trying to control the center and attacks up the b and c files.
According to the Oxford Companion to Chess, there are 1,327 named openings and variations, and by the time you get to ten moves or so the number exceeds 100,000.
OK, but are there chess computers that have neither played millions of games nor been given heuristics?
I mean, even knowing that, say, a rook is worth more than a knight, all else being equal, is a very important input for a brute-force algorithm, certainly if said algorithm would otherwise have to figure that out from first principles while its chess clock is ticking.
In a sense the grandmaster is also running a brute-force algorithm of sorts, because she has the chess knowledge built from thousands of human games, plus, in the modern era, the findings of computer chess games.
Of course we could argue that if the supergenius can calculate from start to mate then it’s irrelevant whether they figure out things like piece values along the way. But that isn’t very realistic.
No, because there are many chess principles that have to be learned before one can win at a high level of chess. In fact, I think the Grandmaster would win rather easily. The genius, however, would be extremely trainable and would have an excellent opportunity to achieve success at a high level. There is no guarantee, though, because being extremely gifted in one particular skill doesn’t necessarily go along with overall intelligence.
DeepMind’s AlphaZero has revolutionized the approach to computer chess. Much like the parameters in the OP, AlphaZero was programmed with the rules of chess only and not a single mapped game position. Its job was to teach itself how to play winning chess through trial and error.
For a human to take that approach would be absurd and a failure, but for a supercomputer that can analyze 15,000,000,000 board positions all at once, it meant learning winning chess at a fantastic rate. It was not encumbered by millions of pre-mapped positions and opening routines, and it was not prejudiced into favoring certain positions just because humans had decided in advance that they were superior. It literally trained itself.
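For anyone curious what “teaching itself through trial and error” looks like mechanically, here is a toy sketch using the python-chess library. This is emphatically not AlphaZero’s actual method (no neural network, no Monte Carlo tree search); it just shows the bare self-play idea: start with every piece valued at zero, play against yourself, and nudge a simple material evaluation toward each game’s result. All the numbers (learning rate, game count, exploration rate) are illustrative guesses.

```python
# Toy self-play sketch: a linear material evaluation learns piece values
# from game outcomes. Illustrative only; not AlphaZero's actual algorithm.
import random
import chess

PIECE_TYPES = [chess.PAWN, chess.KNIGHT, chess.BISHOP, chess.ROOK, chess.QUEEN]
weights = {pt: 0.0 for pt in PIECE_TYPES}   # the engine starts knowing no piece values

def material_features(board):
    """White-minus-Black piece counts: the inputs to the linear evaluation."""
    return {pt: len(board.pieces(pt, chess.WHITE)) - len(board.pieces(pt, chess.BLACK))
            for pt in PIECE_TYPES}

def evaluate(board):
    f = material_features(board)
    return sum(weights[pt] * f[pt] for pt in PIECE_TYPES)

def pick_move(board, epsilon=0.2):
    """One-ply greedy policy with a bit of random exploration."""
    moves = list(board.legal_moves)
    if random.random() < epsilon:
        return random.choice(moves)
    mover = board.turn
    best_move, best_score = None, None
    for move in moves:
        board.push(move)
        score = evaluate(board)
        board.pop()
        if mover == chess.BLACK:
            score = -score
        if best_score is None or score > best_score:
            best_move, best_score = move, score
    return best_move

def self_play_game(max_plies=200):
    board = chess.Board()
    positions = []
    while not board.is_game_over() and len(board.move_stack) < max_plies:
        positions.append(material_features(board))
        board.push(pick_move(board))
    result = board.result(claim_draw=True)      # "1-0", "0-1", "1/2-1/2" or "*"
    outcome = 1.0 if result == "1-0" else -1.0 if result == "0-1" else 0.0
    return positions, outcome

def train(games=200, lr=0.01):
    """Nudge the weights toward each game's outcome. Crude, and glacially slow."""
    for _ in range(games):
        positions, outcome = self_play_game()
        for f in positions:
            err = outcome - sum(weights[pt] * f[pt] for pt in PIECE_TYPES)
            for pt in PIECE_TYPES:
                weights[pt] += lr * err * f[pt]

train()
print(weights)   # with enough games the usual ordering (queen > rook > minors > pawn)
                 # tends to emerge; 200 near-random games will only hint at it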
I’m a retired chess teacher and my highest Elo rating was 2390.
If you’re discussing highly intelligent humans playing chess, then they would need a lot of practice to beat a grandmaster. (If they didn’t have the required patience and dedication, they never would.)
Grandmasters analyse with the aid of many patterns based on experience (for example spotting tactics or exchanging into a winning endgame.)
They also understand several openings (not memorising the moves - a common misconception.)
This means they can calculate much faster (and more accurately) than your average player.
Incidentally I’m confident there is a link between intelligence and grandmasters - for example at least half of the English teams in the 1970s and 1980s had been to Cambridge or Oxford (the top two UK Universities.)
Now computers play chess differently from us carbon-based life-forms, using their amazing storage and speed to analyse way more positions than any human ever could.
One way to play chess perfectly is to analyse every possible position with a small number of pieces, sort the positions into a database and then simply announce ‘mate in x moves’ or ‘drawn position’.
Using the above, you can type in any position with a total of 7 or fewer pieces and instantly get the correct result with best play by both sides (even if it takes many moves.)
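If you want to try that yourself, the python-chess library can probe the Syzygy tables directly. A small sketch, assuming the tablebase files have already been downloaded into a local “./syzygy” folder (a placeholder path; the library doesn’t ship with the files):

```python
# A sketch of looking up an endgame in a Syzygy tablebase with python-chess.
import chess
import chess.syzygy

# Any position with 7 or fewer pieces can be looked up; here, K+R vs K.
board = chess.Board("8/8/8/8/8/2k5/8/KR6 w - - 0 1")

# "./syzygy" stands in for wherever the downloaded .rtbw/.rtbz files live.
with chess.syzygy.open_tablebase("./syzygy") as tablebase:
    wdl = tablebase.probe_wdl(board)   # 2 = win, 0 = draw, -2 = loss, for the side to move
    dtz = tablebase.probe_dtz(board)   # distance to the next capture or pawn move with best play
    print("WDL:", wdl, " DTZ:", dtz)
```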
It’s an interesting question whether you think this tablebase is ‘playing’ chess - personally I don’t.
However it is jolly useful for analysing endgames!
It is very unlikely that a complete game (say of 40 moves) would ever be repeated.
Certainly many openings have been analysed in depth (e.g. the Ruy Lopez), but after the opening comes the middle game!
(In a career of 2,000+ competitive games I have only had the same 15 opening moves occur twice. I won both games!)
No Grandmaster would need to be unorthodox to beat an inexperienced player. They would focus on the centre as usual.
It’s not a perfect analogy, but consider a world-class golfer playing an enthusiast. The professional would simply play the standard shots - and be much better at it.
When Gary Player (a retired professional golfer widely considered to be one of the greatest of all time) was challenged about a ‘lucky’ shot, he replied, “It’s a strange thing - the more I practice, the luckier I get.”
IMO, one of the key abilities needed to play chess well is the ability to look ahead a number of moves. Someone with a 300 IQ does not necessarily have this ability, so I would say that, in most cases, the GM would make quick work of them.
It progressed through a combination of self-play and neural network reinforcement learning, so it started winning … and losing … from game one because it was playing against itself.
Echoing the emphatic ‘no’ on the titular question.
Looking ahead is useful, but understanding what you see out there is another thing entirely, and that skill most definitely requires experience and pattern recognition.
Since even our genius can’t get anywhere close to the end of the game through exhaustive walking up and down the move tree, they must evaluate the quality of foreseen positions from experience-based heuristics in order to prune the tree, just like any other player (or chess engine for that matter).
One very simple heuristic for position quality is just a count of material. I was assuming that the rudimentary instruction our super-genius received at least included "pawn = 1, bishop = knight = 3, rook = 5, queen = 9, king = ∞." Though that’s not actually a given in the OP, and as I understand it AlphaZero was not given that information in its training, so it’s not completely ludicrous to leave it out. If that’s the case, then our hypothetical supergenius is going to have a much harder time: They’ll probably be able to figure out on their own that a queen is worth more than a rook is worth more than a bishop is worth more than a pawn, but they won’t know, for instance, if trading two bishops for a rook is good or bad.
Yes, they could be taught that or they could figure it out from raw piece mobility rules. I assumed they were given that level of introduction. But that is not a game-determining heuristic in a grandmaster-level game. Whether a foreseen position is better or worse for our player will be determined by things more subtle than material count. Sure, they can prune away branches where material is given away for nothing, but that’s basic resource management in a board game and not really anything that will get them into a winning position (or kept out of a losing one, more relevantly).
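To make “prune away branches where material is given away for nothing” concrete, here is a minimal sketch of that kind of search, assuming the python-chess library and the textbook 1/3/3/5/9 piece values (i.e., exactly the rudimentary instruction under discussion, nothing derived from first principles):

```python
# Depth-limited alpha-beta search with a material-only evaluation.
import chess

PIECE_VALUES = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
                chess.ROOK: 5, chess.QUEEN: 9, chess.KING: 0}

def material(board):
    """Material balance from White's point of view."""
    score = 0
    for piece_type, value in PIECE_VALUES.items():
        score += value * len(board.pieces(piece_type, chess.WHITE))
        score -= value * len(board.pieces(piece_type, chess.BLACK))
    return score

def alphabeta(board, depth, alpha=-10**9, beta=10**9):
    """Search `depth` plies deep; branches that cannot affect the result are pruned."""
    if board.is_checkmate():
        # The side to move has been mated.
        return -10**6 if board.turn == chess.WHITE else 10**6
    if board.is_game_over():
        return 0                    # stalemate or other draw
    if depth == 0:
        return material(board)      # the crude heuristic applied at the leaves
    if board.turn == chess.WHITE:   # White maximises
        best = -10**9
        for move in board.legal_moves:
            board.push(move)
            best = max(best, alphabeta(board, depth - 1, alpha, beta))
            board.pop()
            alpha = max(alpha, best)
            if alpha >= beta:
                break               # prune: the opponent already has a better option
        return best
    else:                           # Black minimises
        best = 10**9
        for move in board.legal_moves:
            board.push(move)
            best = min(best, alphabeta(board, depth - 1, alpha, beta))
            board.pop()
            beta = min(beta, best)
            if beta <= alpha:
                break
        return best

print(alphabeta(chess.Board(), depth=3))   # 0: no material can be forcibly won in 3 plies
```

That kind of pruning keeps you from hanging pieces; it says nothing about the subtler positional judgements being talked about here.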
IQ mostly measures the ability to find efficient methods to solve problems. Chess, even if you make the expert players forget known plays, is mostly testing your ability to recursively test and prune moves through a widening tree of options.
A high-IQ individual might realize the above, might realize that he needs to come up with some sort of scoring system for each arrangement, maybe come up with some good methods for doing that, maybe come up with some reasonable guesstimates on the point value of each piece, and… That’s it.
It’s kind of like, maybe you could figure out a general strategy for working out pi very accurately. But that doesn’t mean that you could spit out the 87th digit without sitting down and doing the work of crunching through the math. Maybe the strategy would still take five hours of long-hand calculations and double-checking.
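To put a number on that analogy: Machin’s formula, pi = 16*arctan(1/5) - 4*arctan(1/239), is exactly the sort of “general strategy” a clever person might know or rediscover, but the 87th digit still only comes out after grinding through the series term by term. A quick sketch:

```python
# Machin's formula for pi, evaluated the slow, honest way with Python's Decimal type.
from decimal import Decimal, getcontext

def arctan_inverse(x, digits):
    """arctan(1/x) from its Taylor series, to roughly `digits` decimal places."""
    getcontext().prec = digits + 10                  # a few guard digits
    term = Decimal(1) / x
    total, n, sign = term, 1, 1
    while term > Decimal(10) ** -(digits + 5):
        n += 2
        sign = -sign
        term = Decimal(1) / (x ** n)
        total += sign * term / n
    return total

def pi_machin(digits):
    getcontext().prec = digits + 10
    pi = 16 * arctan_inverse(5, digits) - 4 * arctan_inverse(239, digits)
    return str(+pi)[:digits + 2]                     # "3." plus the requested digits

print(pi_machin(90))    # the 87th digit after the decimal point is in there, eventually
```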
IQ problems are problems that are specifically chosen because once you figure out the method, it’s trivial to count things up or otherwise recognize the answer.
So, knowing what sorts of things matter and what steps to take to calculate the benefits doesn’t mean that you can necessarily arrive at a solution without any work. Professional chess players have spent a lot of time performing that sort of calculation and gaining an innate understanding of strong and weak positions, the value of pieces relative to one another, etc. Those numbers are now hardcoded into their neural system. Coming in fresh, you’re simply not going to have as accurate a ranking system.
But even if we assume that the person has figured out some surprisingly easy way of grading board states that still gives surprisingly accurate results, that still doesn’t have anything to do with the ability to try out every option in their mind, quickly, then try out every response to each of those options, and then do that again, and again. That’s not a skill that IQ tests.
IQ tests certain things. It doesn’t test other things. Chess includes a component of intelligence that isn’t tested.
To summarize:
Smart means you know how to solve a problem; it doesn’t mean that you have spent the time solving the problem.
Recursive pruning isn’t tested by IQ and may or may not be related to g-factor.
Assuming the genius has a human brain, my guess is the Grandmaster will not only crush the genius in the first game, but in every one of the first hundred games: 100-0.
My guess is that after that, they’ll trade blows, with the genius learning more and improving each time: first winning 1 in 10, then 1 in 5, then 1 in 2.
After 200 games, I am guessing the genius will start winning 100% of the time; they’ve become a SuperMegaMasterSystem now. Grandmasters stand no chance.
All my estimates are within a handful of orders of magnitude of course.
It could be interesting to take people who have never played chess, give them an IQ test, and have them play chess at various levels to see if there’s a correlation between IQ and spontaneous chess ability. Perhaps, from a statistical standpoint, they could take someone’s IQ and get an estimate of how well they would do in their first ever chess match. Once the formula was built from IQs up to 140, we could extrapolate the chess rating of someone with a 300 IQ.
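Purely to illustrate the mechanics (the numbers below are random placeholders, not measurements of anything real), the “formula” would just be a regression fit over the observed range, and the IQ-300 prediction sits far outside it:

```python
# Hypothetical sketch: fit a line to synthetic (IQ, first-game rating) data
# collected only up to IQ 140, then extrapolate to IQ 300.
import numpy as np

rng = np.random.default_rng(0)
iq = rng.uniform(85, 140, size=200)                   # the observed range only
rating = 400 + 3.0 * iq + rng.normal(0, 80, 200)      # a made-up relationship plus noise

slope, intercept = np.polyfit(iq, rating, 1)          # the "formula"
print("predicted at IQ 140:", round(slope * 140 + intercept))   # near the data
print("predicted at IQ 300:", round(slope * 300 + intercept))   # far beyond it
```

Whether the line still means anything 160 points past the last data point is exactly the extrapolation problem raised below.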
There’s something of a tradeoff between search depth and position assessment. On the one hand, if you could search to full depth, the only position assessment you would need would be “Is this position mate?”. On the other hand, you could also play perfectly with only a single move of look-ahead, if you had a perfect algorithm for evaluating the strength of a position. In practice, of course, neither is possible, so the chess engines whose workings people actually understand are somewhere in between: Come up with the cleverest position-evaluation routine you can, and then throw as much computer power at it as you can to look deeply and apply that evaluation. Probably that’s something like how the ones we don’t understand (humans and neural nets) work, too.
At some level of depth, just counting up material probably would be good enough. But I don’t know what depth that is.
I wouldn’t trust any extrapolation beyond a maximum of 15 points past the outer limit of data that the formula was generated from. Well, actually, one can never really trust any extrapolation, but especially not beyond that.
I think that, once computers got to the point that no human could ever beat one, most layfolks just lost interest. But the modern chess computers can absolutely blow away the computers of years past.