He’s not even really correct about that. Neural nets can be emulated in procedural languages. Even a non-procedural language, if it’s executed on a standard digital von Neumann (or Harvard) architecture, is, at its lowest level, executed as, or by, procedural binary code.
Of course, there are novel hardware AI architectures that are not procedural. These are more efficient, but they can still be emulated, however inefficiently, on a standard computer. And I don’t think Penrose’s argument is about efficiency.
Quantum computers deserve mention, and those can’t be emulated, at least not in any practical sense. I think Penrose is arguing that true intelligence requires quantum effects. I don’t think that this chess problem is proof of that.
And it’s not even true to say that a computer can’t look 50 moves ahead. It can’t exhaustively look that far ahead, but then, why would it? No chess computer tries to do that any more. But trim the tree enough, and it can look as far ahead as you’d like. Like Stanislaus’ cell phone did.
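To make the “trim the tree” point concrete, here is a minimal sketch of alpha-beta pruning, the classic way an engine avoids exhaustive search. The game tree, node numbering, and leaf scores here are all invented for illustration; a real engine layers move ordering, transposition tables, and heuristic pruning on top of this basic cutoff rule.

```python
def alphabeta(node, depth, alpha, beta, maximizing, children, evaluate, stats):
    """Minimax with alpha-beta cutoffs: skips branches that provably
    cannot affect the move chosen at the root."""
    stats["visited"] += 1
    kids = children(node)
    if depth == 0 or not kids:
        return evaluate(node)
    if maximizing:
        best = float("-inf")
        for child in kids:
            best = max(best, alphabeta(child, depth - 1, alpha, beta, False,
                                       children, evaluate, stats))
            alpha = max(alpha, best)
            if alpha >= beta:   # opponent will avoid this line: stop searching it
                break
        return best
    else:
        best = float("inf")
        for child in kids:
            best = min(best, alphabeta(child, depth - 1, alpha, beta, True,
                                       children, evaluate, stats))
            beta = min(beta, best)
            if alpha >= beta:
                break
        return best

# Toy complete binary tree of depth 12 (8191 nodes), with a fixed
# hash-like rule standing in for a position evaluator.
def children(n):
    return [2 * n, 2 * n + 1] if n < 2 ** 12 else []

def evaluate(n):
    return (n * 2654435761) % 100   # arbitrary deterministic leaf score

stats = {"visited": 0}
score = alphabeta(1, 12, float("-inf"), float("inf"), True, children, evaluate, stats)
print(score, stats["visited"])   # visits noticeably fewer than all 8191 nodes
```

The pruned node count is the whole point: the deeper you want to look, the more the cutoffs matter, which is how engines reach depths that exhaustive search never could.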
Put another way, his phone thought at first glance that the position favored black, but then thought about it a little more and realized it was a draw. Just like a human would have. The mechanics of the “thought a little more” might be different, but the result is the same.
It is possible to anticipate play 50+ moves ahead, though for chess puzzles with mate in 50+ the move order will be very forced and the position will often be artificial.
Also, thanks to endgame tablebases built by retrograde analysis, computers can and do play many endgames arising from normal play perfectly, even where the forced mate is more than 50 moves off.
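For readers unfamiliar with retrograde analysis: you start from the terminal positions and work backwards, labelling each position won or lost along with its distance to the end. The sketch below runs the same backward pass on a toy subtraction game (take 1–3 from a pile; whoever cannot move loses) rather than on chess, since a real tablebase generator over chess positions would be far too long to show here. The game and all names are illustrative assumptions.

```python
from collections import deque

def build_tablebase(max_pile, takes=(1, 2, 3)):
    """Retrograde analysis: BFS backwards from the terminal position,
    returning {pile: ("win"/"loss", distance in plies)} for the side to move."""
    result = {0: ("loss", 0)}                  # no moves available: terminal loss
    # Count of moves from each position not yet known to lead to a win:
    unwon_moves = {n: sum(1 for t in takes if t <= n) for n in range(max_pile + 1)}
    queue = deque([0])
    while queue:
        n = queue.popleft()
        verdict, dist = result[n]
        for t in takes:                        # predecessors: one move reaches n
            m = n + t
            if m > max_pile or m in result:
                continue
            if verdict == "loss":
                result[m] = ("win", dist + 1)  # a move into a lost position wins
                queue.append(m)
            else:
                unwon_moves[m] -= 1
                if unwon_moves[m] == 0:        # every move hands the opponent a win
                    result[m] = ("loss", dist + 1)
                    queue.append(m)
    return result

tb = build_tablebase(20)
print(tb[20])   # → ('loss', 10): the classic "multiples of 4 lose" pattern
```

Real chess tablebases do exactly this over every legal position with a given set of pieces, which is how they play out wins of 50+ moves flawlessly without any forward search at all.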
These days even a chess engine running on a bog-standard computer can look 20 moves ahead in a matter of seconds from any position. For example, a web-based version of Stockfish running on my laptop takes about a minute to analyse the starting position exhaustively to a depth of 20 without affecting the performance of any of the other programs I am running. Obviously, though, the time taken increases rapidly with each successive depth.
Penrose is a stubborn ass. Was a great scientist, no argument there. But he claimed in a book published around 1990 that there were machines whose halting problem was recursively undecidable but that we humans could decide. I and others pointed out that this was absurd, but he doubled down and kept insisting on it. Of course, he gave no example, and his argument had a deep flaw. In fact, the argument was old enough to have a name attached (Lucas, IIRC) and had been refuted many times.
This is clearly supposed to be an example. Probably it will turn out that a human chess master will eventually realize that the board is turned sideways, or something stupid like that. Don’t take it seriously.
No, it’s not that kind of puzzle. There’s no trick to it. It’s simply a very, very easy chess problem. White’s next move is obvious. Humans will solve it immediately and instinctively, but machines can only find the solution after a long search.
That’s one way that a computer can work, but the assertion here is that it’s the only way. “No program has been written that solves this” is a long way from “No program can be written that solves this.” Even the P vs. NP problem hasn’t been rigorously settled yet, and that’s a much simpler case.
This is directly analogous to Searle’s Chinese Room argument, which, simply stated, is also: “because I can’t think of a way to do this, no such way exists.” That’s fine for armchair philosophy, but it doesn’t stand up to any actual rigor.
[QUOTE=Peter Morris]
The basic point is that computers have no intuition. Many things that are obvious to a human are simply beyond the ability of machines to calculate. This is one example of such, and not a very good one.
[/QUOTE]
And as always, the fundamental problem with statements like that is that there’s no evidence that the human brain is anything other than a computer. We don’t understand how the mind (the program) works, but we’ve got a pretty good handle on at least the basics of the mechanism on which it runs, and there’s no magic parts there.
Penrose’s argument is essentially the old religious argument for God again: things are happening we can’t explain, therefore no explanations can possibly exist and it must be “supernatural.” In his case you can substitute “mind” for God and “vaguely quantummy thing” for “supernatural,” but it’s still the argument from ignorance, and has basically zero predictive or explanatory value.
This. His “argument” (assuming that anything that does X must necessarily employ the same physical mechanisms as known things that do X), is precisely equivalent to this notorious howler:
[QUOTE=Admiral William D. Leahy]
That is the biggest fool thing we have ever done. The atomic bomb will never go off, and I speak as an expert in explosives.
[/QUOTE]
Not only is this problem not impossible, it isn’t even that hard a problem for a chess AI to solve. I have a toy chess AI I wrote several years ago just to play around with search algorithms; i.e., it isn’t very good. My algorithm uses a Monte Carlo approach with exploration/exploitation factors as modifiers (again, to be clear, this is not a good approach; it is just a toy). I programmed in this position, and what happens is that the exploitation factors for moving anything except the king very quickly become very low. So even though my program doesn’t have a very deep look-ahead, isn’t very sophisticated, and definitely isn’t state of the art, it realizes that moving anything except the king is a disaster.
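The exploration/exploitation mechanism described above can be sketched with the standard UCB1 bandit rule. This is not the poster’s actual program; the three “moves” and their win rates are invented stand-ins, and `simulate` replaces a full random playout with a coin flip. The point is just to show the described behavior: the budget collapses onto the king move once the others look bad.

```python
import math
import random

random.seed(0)

# Hypothetical playout win rates: the king move holds, the others lose.
MOVES = {"king": 0.5, "pawn": 0.05, "capture": 0.02}

def simulate(move):
    """Stand-in for a random playout: 1 if it went well for us, else 0."""
    return 1 if random.random() < MOVES[move] else 0

counts = {m: 0 for m in MOVES}
wins = {m: 0 for m in MOVES}

for t in range(1, 3001):
    def ucb(m):
        # UCB1: exploit the best observed average, plus an exploration
        # bonus that shrinks as a move gets sampled more often.
        if counts[m] == 0:
            return float("inf")       # try every move at least once
        return wins[m] / counts[m] + math.sqrt(2 * math.log(t) / counts[m])
    move = max(MOVES, key=ucb)
    wins[move] += simulate(move)
    counts[move] += 1

print(counts)   # the king move absorbs nearly all of the simulation budget
```

The pawn and capture moves still get occasional probes (the exploration term never reaches zero), which matches the poster’s description: the bad moves aren’t forbidden, their exploitation factors just drop until they are rarely tried.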
Is it possible for white to win without black screwing up? That seems to be implied in the original article. If it is, could someone post how without using chess notation? Maybe something like “move the pawn from C6 to C7”, etc.? I don’t need every move, just the endgame – so, moving the king to D8 might take a lot of moves, but I’m happy to take that as given.
If white can win with black screwing up, I assume black can too, right?
No. If White moves a pawn, he loses. If he advances the c6 pawn to c7 it gets taken by a bishop, and then Black’s king can’t be prevented from escaping via b7, as glee explained earlier. If White captures with the pawns on b3 or c4, that pawn gets taken and Black’s queen can escape.
Once Black’s pieces are out of that cage, it’s game over for White.
The current version doesn’t, but it would be trivial to change it so that it would find it. All I would need to do is turn it into an adaptive search that searches deeper on the nodes that are highly exploitative. I would then remove the depth limit and replace it with a cap on total nodes opened. Since pawn moves are not exploitative, it would look deeper into king moves. Non-diagonal king moves would then get pruned out quickly as well, causing it to search deeper still. I really don’t think it would take that long to get to a 50-move look-ahead for this problem. And that’s using a terrible algorithm. A good chess AI, which certainly has optimizations for covering the search space, is going to have no trouble with this.
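The “node budget instead of depth limit” idea above amounts to best-first search: always expand the most promising frontier node, so a forcing line gets searched deep while quiet moves stall near the root. Here is a hedged sketch on an abstract tree, not chess; the branching factor, scores, and the single “forcing” child per node are all invented to mimic a position with one obvious move.

```python
import heapq
import itertools

def best_first(root, children, score, budget):
    """Expand up to `budget` nodes, always choosing the highest-scoring
    frontier node; return the maximum depth reached."""
    counter = itertools.count()            # tie-breaker so the heap never compares nodes
    frontier = [(-score(root), next(counter), root, 0)]
    max_depth = 0
    opened = 0
    while frontier and opened < budget:
        _, _, node, depth = heapq.heappop(frontier)
        opened += 1
        max_depth = max(max_depth, depth)
        for child in children(node):
            heapq.heappush(frontier, (-score(child), next(counter), child, depth + 1))
    return max_depth

# Toy tree: each node has one "forcing" child scoring high and three
# quiet children scoring low. A node is its path from the root.
def children(path):
    return [path + (i,) for i in range(4)]

def score(path):
    return 1.0 if all(i == 0 for i in path) else 0.1

depth = best_first((), children, score, budget=400)
print(depth)   # → 399
```

For comparison, a uniform search spending the same 400 nodes on this branching-factor-4 tree reaches only about depth 4, which is exactly the contrast the post is drawing: a fixed node budget plus selective deepening buys enormous depth along the line that matters.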
I really do have to wonder if Penrose ever even thought to try the simple expedient of plugging this into a chess computer. Yeah, yeah, I know, he’s a theorist, not an experimentalist, but that’s a really easy experiment.