I just posted my opinion of that op-ed - not very high, to say the least.
I’ve never heard of Polanyi’s Paradox before, but I think the real wonder is that we can speak to our thought processes at all. I had a border collie who could abstract from and extend his training. We taught him to sit at corners before we crossed the street with him - he extended it to sitting in the middle of blocks to indicate he wanted to cross. He of course could not speak to this strategy.
If animals can do it, so can we - and why would we think that every thought, decision, or deduction is visible to our consciousness? It’s neither a paradox nor surprising in the least.
They also seem to think neural networks are a new concept, and appear not to know that Samuel wrote a checkers-playing program in 1959 that learned from being fed positions and from playing against itself, and which beat a checkers champion - all more than 50 years ago.
Nitpick: Samuel’s checkers program beat a champion player? I think not. ("[Arthur Samuel] continued to work on checkers until the mid-1970s, at which point his program achieved sufficient skill to challenge a respectable amateur.")
(BTW, a checkers program fits serendipitously on a 36-bit machine! In addition to the 32 board cells, 4 cells are useful to mask out illegal off-board moves.)
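For the curious, here’s a sketch in Python of one way the padding trick can work - certainly not Samuel’s actual layout, just a plausible one. Three interior ghost bits make every “up” diagonal a uniform shift of +4 or +5, and together with one spare bit they account for the 4 non-playable cells in a 36-bit word:

```python
# Illustrative 36-bit checkers bitboard: 32 playable squares plus ghost
# bits at 8, 17 and 26 (and spare bit 35).  The ghosts make every "up"
# diagonal a uniform shift of +4 or +5; off-board moves land on padding
# and are cleared by the VALID mask.  This layout is my own sketch.

GHOSTS = (8, 17, 26, 35)
VALID = sum(1 << b for b in range(36) if b not in GHOSTS)  # the 32 real squares

def up_moves(men: int, empty: int) -> int:
    """Bitboard of squares reachable by a single non-capturing 'up' move."""
    return ((men << 4) | (men << 5)) & VALID & empty

# A man on square 3 (right edge of the bottom row): the +4 shift stays on
# the board, while the +5 shift wraps onto ghost bit 8 and is masked away.
men = 1 << 3
targets = up_moves(men, VALID & ~men)
assert targets == 1 << 7
```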
At the moment, the test-taking is probably too complex, but a form of neural network known as an LSTM (Long Short-Term Memory - it carries an internal memory state from step to step) can do things such as image captioning (some of the captions are hilariously wrong, but a lot of them are pretty good, if boring). It can also generate handwriting.
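To make the “memory” part concrete, here’s a bare-bones LSTM cell in numpy. The names and weight shapes are my own illustrative choices, not any particular library’s API - the point is just that the cell state c is an explicit memory that learned gates erase, write to, and reveal:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, b):
    """One time step.  h is the output state, c is the long-term cell memory."""
    z = np.concatenate([h, x]) @ W + b   # one big matmul covers all 4 gates
    n = len(h)
    f = sigmoid(z[0*n:1*n])              # forget gate: what to erase from c
    i = sigmoid(z[1*n:2*n])              # input gate: what to write to c
    g = np.tanh(z[2*n:3*n])              # candidate values to write
    o = sigmoid(z[3*n:4*n])              # output gate: how much of c to reveal
    c = f * c + i * g                    # the memory update
    h = o * np.tanh(c)
    return h, c

# Run a random sequence through; c carries information across time steps.
n_in, n_hid = 3, 4
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(n_hid + n_in, 4 * n_hid))
b = np.zeros(4 * n_hid)
h, c = np.zeros(n_hid), np.zeros(n_hid)
for x in rng.normal(size=(5, n_in)):
    h, c = lstm_step(x, h, c, W, b)
```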
One thing I think we have to accept is that computers are always going to make much different mistakes than humans. Computers can be arbitrarily smart, but even in a task where they perform better than us, when they fail they’ll likely fail in ways that seem really obviously wrong to us. However, I honestly think that if computers could judge us, they’d find the mistakes we make on the same tasks just as obviously wrong.
For instance, since convolutional neural networks learn their own gradient filters (that is, filters that respond to the CONTRAST between adjacent pixels), they’re often better than humans at image recognition tasks in very dark, unsaturated pictures. They can detect things that a person rarely would.
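To illustrate what such a contrast filter does, here’s a toy sketch: a hand-written Sobel kernel (the kind of edge detector a CNN’s first layer often ends up learning on its own) convolved over a dim image with a faint edge. The image values are made up for illustration:

```python
import numpy as np

# Sobel kernel: responds to horizontal contrast between adjacent pixels.
SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]])

def conv2d(image, kernel):
    """Plain 'valid' 2-D sliding-window filter, as a CNN layer applies it."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(image[y:y+kh, x:x+kw] * kernel)
    return out

# A dim image with a faint vertical edge: left half 10, right half 12.
img = np.full((5, 6), 10.0)
img[:, 3:] = 12.0
print(conv2d(img, SOBEL_X))  # strong responses along the edge, zero elsewhere
```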
Real world situations where the rules are not easily understood are exactly what neural nets are for. If the rules are understood then you can create an algorithm and solve the problem mechanically. It’s when the rules aren’t understood that neural nets are useful.
On edit: I think I misunderstood what you were asking. By “rules” you meant the rules of the game (Go in this case). I’m talking about the “rules” that define how to go about solving the problem (which move to make, in this case). However, neural nets are also good for games or situations with much fuzzier rules.
Jose’s response was good, but there are a couple of other things I want to touch on. It seems impressive to see AIs that can play Super Mario Bros flawlessly, but that’s actually a much easier problem than it appears, because it’s a complete-information game, the branching factor is small, and mistakes are easy to correct. That is, you have a small number of choices (four directions, two action buttons) and you very rarely need to significantly backtrack. By that I mean: if one can get to a certain point in a given stage, there’s typically not a whole lot of additional optimization beyond maybe taking a slightly faster route or grabbing a few extra coins or whatnot.
The reason games like Chess and particularly Go are more difficult is that there’s a much larger branching factor - estimated at about 35 and 250 respectively. Also, because certain states become impossible based on earlier moves, the backtracking can sometimes be significant as well. One big reason it took so much longer to “solve” Go is that branching factor; it’s a MASSIVE difference when searching in exponential spaces. Another reason is that games of Go typically have more moves: an average Chess game runs around 40 moves per player (80 plies), while a Go game typically runs 150 plies or more. As far as I can tell, the game-tree complexity of Chess is somewhere in the neighborhood of 10^123; for Go on a 19x19 board, the usual estimate is around 10^360. In short, it’s just in a different computational state space.

It’s significant, the generalized approach aside, in that it’s the last perfect-information game of note that humans were better at than computers, and it was so much more complex than any other example that it took that much longer to “solve” - though I don’t think anyone reasonably knowledgeable doubted it would come some day.
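As a sanity check on those figures, a back-of-the-envelope game-tree size is just the branching factor raised to the game length (these are Allis’s classic estimates), which you can tally in a couple of lines:

```python
from math import log10

# Game-tree size ~= branching_factor ^ plies; report the exponent base 10.
def tree_size_log10(branching, plies):
    return plies * log10(branching)

print(f"chess: ~10^{tree_size_log10(35, 80):.0f}")    # ~10^123
print(f"go:    ~10^{tree_size_log10(250, 150):.0f}")  # ~10^360
```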
As for the generalized approach, that’s really the big thing here, but it’s also a double-edged sword. It’s really all based on our guesses about how our brains might work, and we’re probably wrong about a lot of it, but the results are still there. The funny thing is that some things that seem intuitively simple to us are remarkably difficult for computers, and vice versa. The difficult part, without knowing too much about the actual implementation, will be how we can actually take advantage of this generalized approach: how hard it is to transform a problem we want to solve into a form the approach can address, and then to translate the results back into a meaningful solution to the original problem. I’m gonna have to do some more reading up on this…
From here
We used that book, which is where I got the idea.
However, some say this guy was actually no good.
And I learned that checkers research has since solved the game - best play by both players results in a draw.
AI can’t really play Mario. It’d be more accurate to say that, after a long period of training, it can solve individual Super Mario Bros levels. These systems generally train on each level individually, with little real transfer of skill between level A and level B. There is (or at least was) a yearly competition where AIs would play randomly generated (but guaranteed completable, and generally not Kaizo-difficult) Mario levels, and no AI has been particularly good at it.
But you could reduce the process of a human child learning to speak to terms like that - all they really do is keep on making noises until they eventually learn to get it right - it’s easy if you just keep on trying.
Real life isn’t that clean, but I still think that neural net learning is significant - not because it learns, but because it constructs its own way of encoding the things it learns. You could expose two identical neural net machines to identical situations and (as long as the adjustment of weighting is non-deterministic) end up with two machines that either solve the problem in very different ways, or appear to solve it the same way but are utterly different inside.
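This is easy to demonstrate on a toy problem. Below is a sketch (my own, with arbitrary hyperparameters) that trains two identical tiny networks on XOR from different random seeds: they converge to the same answers, but their internal weights end up nothing alike. (Gradient descent on XOR can occasionally stall, so a stuck run may just need another seed.)

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_xor(seed, steps=20000, lr=1.0, hidden=8):
    """Train a 2-hidden-1 sigmoid net on XOR by plain gradient descent."""
    rng = np.random.default_rng(seed)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
    y = np.array([[0], [1], [1], [0]], float)
    W1 = rng.normal(size=(2, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(size=(hidden, 1)); b2 = np.zeros(1)
    for _ in range(steps):
        h = sigmoid(X @ W1 + b1)               # forward pass
        out = sigmoid(h @ W2 + b2)
        d_out = (out - y) * out * (1 - out)    # backprop (squared error)
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(0)
        W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(0)
    return W1, out

W_a, pred_a = train_xor(seed=0)
W_b, pred_b = train_xor(seed=1)
print(pred_a.round(2).ravel())   # both ≈ [0, 1, 1, 0] ...
print(pred_b.round(2).ravel())   # ... same behavior on the outside,
print(np.abs(W_a - W_b).max())   # utterly different weights on the inside
```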