I had assumed that Ludovic meant that a new type of computation was found that was not part of the Church–Turing computation model. I.e., we meet Martians and they say “What do you mean, you haven’t heard of the Qrtplop function? You Earthlings probably don’t even know about infinitely recursive quantum calculus. Psssh, idiots.” And it turns out if you have a Qrtplop chip, you can solve the Halting Problem.
(This is why the Church–Turing thesis is a thesis and not a proof.)
Note that your construction would show there’s now a new Halting Problem for this augmented model.
OTOH, even if something is “solvable” with an entirely new type of logic, that doesn’t mean it’s practical. The run time could still be super-hyper-ultra exponential and therefore essentially useless.
You don’t even need to go that far. Ideal Turing machines have infinite memory, but real implementations of them are always limited. And a real finite Turing machine can, in fact, solve the halting problem for another finite Turing machine that’s only a little bit smaller.
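To make that concrete, here’s a minimal Python sketch of the idea (my own toy, not a formal construction; the function name and the clamp-at-the-tape-ends convention are just illustrative choices). A machine confined to a tape of n cells, with s states and k tape symbols, has only s · n · k^n possible configurations, so if a simulation ever revisits one, the machine is looping forever.

```python
# Toy decider: does this finite-tape machine halt? We simulate it and watch
# for a repeated configuration (state, head position, tape contents). Since
# the number of configurations is finite, any run that repeats one never halts.

def halts_on_finite_tape(transitions, tape, start_state="A", halt_state="H"):
    """transitions maps (state, symbol) -> (new_symbol, move, new_state),
    where move is -1 or +1; the head is clamped at the tape ends."""
    tape = list(tape)
    state, head = start_state, 0
    seen = set()
    while state != halt_state:
        config = (state, head, tuple(tape))
        if config in seen:          # configuration repeated => infinite loop
            return False
        seen.add(config)
        symbol, move, state = transitions[(state, tape[head])]
        tape[head] = symbol
        head = max(0, min(len(tape) - 1, head + move))
    return True                     # reached the halting state

# Example: walk right writing 1s until an existing 1 is found, then halt.
rules = {
    ("A", 0): (1, +1, "A"),
    ("A", 1): (1, +1, "H"),
}
print(halts_on_finite_tape(rules, [0, 0, 0, 1]))   # True
```

The catch, of course, is that the set of visited configurations takes vastly more memory than the machine being simulated has, which is exactly the “only a little bit smaller” restriction.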
This is related to what I was saying about the Chinese room. A mind (which includes the memorized rulebook) can faithfully simulate a smaller mind, but it cannot faithfully simulate a mind of equal or greater size.
I wasn’t talking about Searle-style strong AI (which is more of a philosophical argument) but about Kurzweil-style strong AI / artificial general intelligence, meaning a system that can learn anything, rather than what amounts to a really powerful expert system chugging through a colossal amount of data.
So my point was that a machine simply following a rule book is limited to the domain of that book.
And your retort is, “What if we have a bigger book?” You’re implicitly conceding the point.
And here you’re asserting the thing that’s actually under debate. It hasn’t been shown that the brain is a kind of computer. A machine, yes.
Last year, Google’s Alpha Zero beat Stockfish to essentially become the world’s #1 chess player (although it was not a formal competition).
It didn’t get much fanfare: computers had already beaten humans roughly 20 years earlier, so what difference does it make if one computer beats another? Many others scoffed that AZ had used far superior hardware to Stockfish, so it was not a fair fight.
As an avid chess player, I was blown away. Stockfish and other chess engines basically work by encoding human heuristics and then computing positions far faster than a human can, which is how they beat humans (or other chess computers).
Alpha Zero was different; it played some wild moves that made no sense at first look, and it took analysis to find out why they were so effective. It’s already had some effect on chess strategy at the professional level. Better yet: even with its vast hardware, many of the moves it played could not have been calculated out to a material advantage or mate; it had to understand, at some level, things like positional advantage.
So now it sounds like I’m conceding the point, no? But what it illustrates is that you can make a computer that plays chess well and yet doesn’t really understand chess; it’s just following humans’ best understanding of the game, which happened to be pretty good, but limited. And does AZ understand chess? Well, we’ll see if it can learn, on its own, things like complex draws (e.g., pawns have closed the middle of the board completely, such that neither side can make any progress).
If AZ can form an understanding of abstract concepts like that, without having them fed in, then I might start to come around to the other side, on epistemology at least.
I’d say that’s the critical part. Proponents of strong AI have the burden of proof.
What’s most impressive about Alpha Zero is that it was never programmed, or even taught, to play chess. It was given the rules of chess and then played against itself for a few hours to learn how to do it well. All of this was done by the same program that had already done the same thing for Go.
No, my argument is that a “rule book” can be constructed to handle any arbitrary problem domain, including the very general ones that meet and exceed a human level of “true” understanding. The problem with your argument – and the ultimate fallacy of the Chinese room argument – is failing to recognize the implications of the fact that the “rule book” is a stand-in for a computational model, and can therefore have arbitrarily complex functionality and be arbitrarily extensible. For example, a “rule book” that tells you how to operate a complex machine can – if the book (the computational system) is big enough – also bring generalized logic to bear on diagnosing and correcting problems when things break down. It could also use the results of those efforts to expand the book to facilitate future problem fixes, a process we call “learning”. There is no limit to what the “rule book”, as a computational system, can theoretically do.
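For what it’s worth, the self-expanding part doesn’t need anything exotic. Here’s a deliberately silly Python sketch (my own toy, obviously not a serious model of cognition): a book that, when it hits a case it doesn’t cover, falls back on a general procedure and then writes the answer into itself.

```python
# A cartoon of the "extensible rule book": unknown cases are handled by a
# general fallback procedure, and the result is recorded as a new rule.

class RuleBook:
    def __init__(self, rules, general_procedure):
        self.rules = dict(rules)                  # the book's current contents
        self.general_procedure = general_procedure

    def answer(self, problem):
        if problem not in self.rules:             # the book doesn't cover this...
            result = self.general_procedure(problem)
            self.rules[problem] = result          # ...so expand the book
        return self.rules[problem]

# Toy domain: "problems" are integers, the fallback is slow squaring.
book = RuleBook({2: 4, 3: 9}, general_procedure=lambda n: sum(n for _ in range(n)))
print(book.answer(7))    # 49, computed the slow way, then added to the book
print(book.answer(7))    # 49, now answered straight from the expanded book
```

Scale the fallback up from “slow squaring” to generalized reasoning, and the dictionary up to something arbitrarily rich, and that’s the shape of the argument.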
So what are the constraints on “true understanding”? There aren’t any. Searle’s problem – and I think yours – is thinking there is some black-and-white boundary and absolute distinction between rote syntactical operations on symbols and semantic understanding. There isn’t. Semantic understanding is just an emergent property of the symbol processing system that becomes richer as the complexity of the system grows.
The argument is not that the brain is “a kind of computer”, but that many cognitive processes are inherently computational. And it’s not all that relevant to our discussion even if it weren’t true, because it still doesn’t preclude building computational systems that achieve cognition in ways that differ in implementation from our own mental processes. In fact, they generally are different. The real relevance of the computational theory of mind (CTM) in this context is that it asserts that cognition can arise from a sufficiently rich set of syntactical operations on symbolic representations, which happens to be just what computers do, and which, not so incidentally, is just what’s happening inside Searle’s Chinese room.
You’re getting mired in the details. I’m less interested in how highly you rate a particular machine’s understanding of chess than in the fact that it can solve arbitrary chess problems, including the general problem of playing a game from start to finish with a high level of skill. And if Chess Program V1.0 doesn’t satisfy you with its apparent level of understanding, perhaps Chess Program V2.0 will – and we might very well get from V1.0 to V2.0 simply by exposing it to learning opportunities. Let’s assume something that has often been true of such systems: that V1.0 was probably already far better than any of the people involved in writing it, but if not, it certainly will be after its training. Under these circumstances, how on earth can you possibly claim that it doesn’t truly understand chess? To repeat: if this is your claim, what fundamental thing is missing? And if you concede the point, then you must surely also concede that we can create systems with true understanding in any arbitrarily general problem domain.
Exactly. AZ was not programmed with humans’ best understanding of chess. It was just programmed with the rules of the game, sent off to play however many bajillions of games against itself it could manage, and it figured out what works. This is machine learning at its purest.
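For anyone who hasn’t watched this kind of thing work, here’s a deliberately tiny self-play learner in Python (my own toy sketch; the real AlphaZero uses deep networks plus Monte Carlo tree search, which this doesn’t attempt to reproduce). It’s given nothing but the rules of a one-pile, take-1-2-or-3 game and an empty value table, plays itself, and reinforces whatever the winner did.

```python
import random

# Toy self-play: one pile of stones, each player removes 1-3, whoever takes
# the last stone wins. The learner knows only the legal moves and improves a
# value table purely by playing against itself.

WIN_PILE = 12                      # starting pile size
values = {}                        # (pile, move) -> estimated value for the mover

def legal_moves(pile):
    return [m for m in (1, 2, 3) if m <= pile]

def choose(pile, explore):
    moves = legal_moves(pile)
    if explore and random.random() < 0.2:        # occasional exploration
        return random.choice(moves)
    return max(moves, key=lambda m: values.get((pile, m), 0.0))

def self_play_game():
    """Play one game against itself; return the move history and the winner."""
    pile, player, history = WIN_PILE, 0, []
    while True:
        move = choose(pile, explore=True)
        history.append((player, pile, move))
        pile -= move
        if pile == 0:
            return history, player               # this player took the last stone
        player = 1 - player

def train(games=20000, lr=0.1):
    for _ in range(games):
        history, winner = self_play_game()
        for player, pile, move in history:
            target = 1.0 if player == winner else -1.0
            old = values.get((pile, move), 0.0)
            values[(pile, move)] = old + lr * (target - old)

train()
for pile in range(1, WIN_PILE + 1):
    print(pile, "->", choose(pile, explore=False))
```

Nobody tells it the classic strategy of leaving the opponent a multiple of four; with enough games, the greedy policy tends to rediscover it on its own.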
This infinite/limited terminology is regularly misused. Even textbooks in the field are horrible at it.
Turing machines have unbounded memory. That’s not the same as infinite. E.g., the lengths of integers are unbounded, but no integer is of infinite length.
Hence I got arguments from students that are tantamount to saying “There’s only a finite number of integers therefore …” (goofy stuff follows).
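The whole confusion is just quantifier order. Roughly, with len(k) meaning the number of digits of k (my notation):

```latex
% Unbounded (true): for every bound there is an integer longer than it.
\forall n \in \mathbb{N} \;\; \exists k \in \mathbb{Z} : \operatorname{len}(k) > n
% Infinite (false): a single integer longer than every bound.
\exists k \in \mathbb{Z} \;\; \forall n \in \mathbb{N} : \operatorname{len}(k) > n
```

Swap the quantifiers and you go from a true statement about unboundedness to the false “there is an infinite integer.”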
Asymptotic behavior is really important to understand, even if “we” don’t live in an infinite world. E.g., in the context of the Halting Problem, look at how insanely fast the Busy Beaver function* grows. If the running time of the 6th BB calculation is horribly beyond the lifetime of the Universe, the fact that it is “finite” is of no significance whatsoever.
*Some idiotic putz horribly munged the BB page on Wikipedia. Nice going, bozo.
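To put a number on how fast the growth bites, here’s a throwaway brute force in Python (my own toy, using the usual n-state, 2-symbol convention with an extra HALT state). It only does 2 states, where the answer is tiny, but the machine count of (4n + 4)^(2n) under this enumeration, and the question of how big a step cap is ever enough, already show where the pain comes from.

```python
from itertools import product

STATES = ["A", "B"]
STEP_CAP = 200      # arbitrary; no cap is provably big enough in general

def all_machines():
    """Enumerate every 2-state, 2-symbol machine (20,736 of them)."""
    cells = [(s, sym) for s in STATES for sym in (0, 1)]
    options = [(w, d, ns) for w in (0, 1) for d in (-1, 1) for ns in STATES + ["HALT"]]
    for combo in product(options, repeat=len(cells)):
        yield dict(zip(cells, combo))

def run(machine, cap):
    tape, head, state = {}, 0, "A"
    for step in range(1, cap + 1):
        write, move, state = machine[(state, tape.get(head, 0))]
        tape[head] = write
        head += move
        if state == "HALT":
            return step, sum(tape.values())      # (steps taken, 1s on the tape)
    return None                                   # didn't halt within the cap

best_steps, best_ones, timed_out = 0, 0, 0
for machine in all_machines():
    result = run(machine, STEP_CAP)
    if result is None:
        timed_out += 1
    else:
        best_steps = max(best_steps, result[0])
        best_ones = max(best_ones, result[1])

print("longest halting run:", best_steps)    # should be 6, i.e. S(2)
print("most 1s written:", best_ones)         # should be 4, i.e. Sigma(2)
print("machines that blew past the cap:", timed_out)
```

Telling which of the cap-busters loop forever and which just need a bigger cap is the Halting Problem all over again, and by 6 states, as noted above, the running times are already hopelessly beyond the lifetime of the Universe.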
Huh? Chronos was to the point, which you are not. Antigravity is strictly for the sci-fi crowd according to our present understanding of the laws of physics. If antigravity were another elemental force, like gravity, then the universe would be a very odd place indeed. Personally, I think it was a bad example by the OP. A better way to express it would have been: what could we invent that seems feasible and does not violate the laws of physics as we know them today?
Predictions of the future have never been accurate. The Internet has had an immense influence on our lives – this chat area could not have existed 30 years ago – but nobody predicted anything like it. Most predictions just project current technology a little further and seldom come up with anything groundbreaking.
Things we ought to invent? Non-polluting means of generating energy. New materials with properties that would seem magical in today’s world.
Some possible new things could go awry. One day somebody will figure out how to make other plants grow as fast as grass does by tweaking the speed of photosynthesis. That could solve food shortages, or it could create an unmanageable weed problem. The problem would be even more interesting if applied to trees.
Other predictions involve humans and health. One of these decades we will be able to grow spare body parts. The problem is that there is likely to be one major exception: the brain.
Depends on your definition of “nobody predicted anything like it”, but Marshall McLuhan is credited with predicting the Internet – a “global village” created through electronic media – back in 1962.
McLuhan was describing what he saw happening at the time; he was not making any predictions about the future or the Internet.
Murray Leinster did a far better job of prediction in 1946 in his story “A Logic Named Joe.”
That’s almost perfect. And so what? When the Internet finally arrived a half century later nobody went back to his prediction as a source. It emerged out of a thousand small advances and connections that came about for other reasons.
And at the same time, of course, hundreds of other science fiction writers were making their own predictions of the future of computing, that seemed (at the time) to be just as well-founded, and which were just as well-received by the science-fiction-reading public. The fact that Leinster got it so right could mean that he was a genius visionary, or it could just mean that he got lucky.
I was using Usenet newsgroups 40 years ago. Sufficiently similar to qualify, IMHO. Some groups were even modded.
Asimov predicted a global, networked system for all sorts of purposes: telephony, video, data storage, and library-like usage. He regularly wrote about this going quite far back. He also predicted pocket calculators, flat-panel TVs, etc. On the several “short range” predictions he made, like Moon colonies and fusion power plants, he will probably prove to have been merely overly optimistic about the timing.
In 1968 Alan Kay proposed the Dynabook, a small tablet computer to replace books in education. His motto was “When every child has a computer, computers will be cheap enough for every child to have one.” That’s not bad, IMHO.
As for the 30-year figure, Ender’s Game came out in 1985. A Fire Upon the Deep, with its Space Usenet, is closing in on 30 years old. (And Earth Usenet is closing in on 40.)