He was. He is basically a really nice guy.
Hidden variables is a well-known label for one possible interpretation of QM, so any other physicist would recognise where he was coming from. He had a specific and new idea about the hidden QM: he drew an analogy with Penrose tiles. The talk filled in a lot of stuff that was not really well conveyed in his book, and made where he was coming from much clearer and more coherent. Some of it he deliberately omitted because he felt it was too fringe. (It is also a long time ago, and my memory is not to be fully trusted here.)
This is very true. Well 40 years anyway.
Yes and no. Mainstream AI is expert systems, inference engines, theories of knowledge capture, ontologies. And it is hard work, and the progress has not been fast. However, don’t get me started about how little mainstream computer science as a whole has progressed in the last 40 years. There are so few new ideas that the field is hopelessly moribund. The progress made in the 60s and 70s is so huge in comparison to what has come after as to be utterly embarrassing.
There is just too big a disconnect here. No AI guys are working on consciousness, although there are people working on a hybrid set of ideas that try to understand consciousness with a mix of neurophysiology and bits of computational theory. But there is such a gulf that saying anyone has any idea, let alone that anyone can claim that a computational engine has failed, or cannot work, is just silly. We are at the banging-the-rocks-together level. It isn’t even clear we will ever have the intellect to understand the answer.
I’m not sure how you can say that. 40 years ago real-time natural language processing was just a dream, let alone a machine that could play and beat the two all-time world Jeopardy champions. I’m guessing you don’t have a very high opinion of IBM’s endeavors with Watson.
The question is how much is simply a matter of scaling up, and how much is genuinely new insight. The progress in processing speed is utterly astounding. But ideas on how to use it, short of brute-force use of old paradigms, are few. Not that there has been no progress, but when you consider the effort and time expended in the last 40 years, compared to the tiny number of people working over less than 20 years before that, it isn’t impressive.
I don’t know anything about AI, but I would suspect that most of the problem lies in the architecture. A single neuron can have hundreds to thousands of synaptic connections with other neurons, and its response to each of those connections is exquisitely refined. Beyond that there is an entire support network of glial cells that are far from passive and are now understood to participate in shaping the brain’s architecture and communication.
With digital systems you have transistors that turn on and off. That’s it.
So it’s not too surprising that one would need to use speed and brute-force processing power in an attempt to emulate a biological system. That’s my uninformed take at least.
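For what it’s worth, from what I gather the usual software stand-in for a neuron is not much more than a weighted sum and an on/off threshold. A bare-bones sketch (the weights are arbitrary illustration values), which gives a feel for how impoverished that is next to a real cell:

```python
# A deliberately bare-bones software "neuron": weighted inputs, a sum,
# and an on/off threshold. The weights here are arbitrary illustration values.

def artificial_neuron(inputs, weights, threshold=0.5):
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0   # it fires or it doesn't

print(artificial_neuron([0.2, 0.9, 0.1], [0.5, 0.5, 0.1]))  # -> 1 with these numbers
```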
And a fair enough take it is. What you have to understand about sequential instruction processing is that it is pretty good for specific tasks, but I personally do not believe it is workable for serious self-aware AI.
A program is a list of steps, exactly like, for instance, the packet that tells you how to put together your child’s new Christmas toy, or a recipe for hollandaise sauce. The processor reads and performs each phrase of each step, over and over, very quickly. That works for pretty basic stuff, and looks impressive when basic stuff is built into more complex stuff, but the computer cannot ever “remember” the steps.
Imagine that you had to assemble the same toy for 100 kids. You would read the instructions maybe twice and then each subsequent assembly would become easier. The computer, by comparison, has to read the same instructions over and over, every time, and cannot get any faster because it cannot “remember” anything. This I see as a massive roadblock to real AI.
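In code terms, the picture I have in mind looks something like this (a deliberately silly sketch; the step list and the helper are invented for illustration):

```python
# The processor walks the same instruction list every single time,
# and run 100 is no faster than run 1.

ASSEMBLY_STEPS = [
    "attach wheel A to axle B",
    "snap body onto chassis",
    "insert batteries",
    "tighten screws",
]

def perform(step):
    pass  # stand-in for the actual work; not the point here

def assemble_toy(steps):
    # Every call re-reads and re-executes every step from scratch;
    # nothing learned from earlier runs is retained.
    for step in steps:
        perform(step)

for kid in range(100):
    assemble_toy(ASSEMBLY_STEPS)  # the 100th assembly costs exactly as much as the 1st
```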
We can design machines capable of operating with dynamic logic circuitry, that can capture and “remember” processes. It is hugely more efficient than sequential instruction processing, but it relies heavily on intensive parallelism – I have coded simple, unimpressive parallel programs and the experience was nightmarish, the type of minutely parallel coding and synchronization likely needed for real, practical AI makes my head hurt.
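To give a flavour of what even trivial synchronization looks like, here is a minimal sketch using ordinary Python threads (purely illustrative; it says nothing about how real dynamic-logic hardware would actually be programmed):

```python
import threading

counter = 0
lock = threading.Lock()

def worker(n):
    global counter
    for _ in range(n):
        # Forget this lock and the updates silently corrupt each other;
        # now imagine thousands of these units interacting.
        with lock:
            counter += 1

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 80000 -- only because of the lock
```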
There will be a sort of tipping point in AI, where the computer becomes a partner in the development process rather than just a passive tool. Kind of like the difference between working on a patient who is lights-out vs. one who can tell you where it hurts. Once that point is reached, AI advancement will be explosive, but first we have to have the kind of hardware and designs that will get us to that point. As long as your Watson produces enough waste heat for three or four Wisconsin winter homes (WAG) while still being little more than a fancy toy, we still have quite a ways to go.
Well, we have evidence that light and sound exist independently of eyes and ears. We have no evidence that consciousness exists independently of brains. Occam’s Razor suggests that until we have some evidence of an external force using the brain as a conduit for consciousness, we should proceed on the assumption that the brain is all there is.
I don’t want to sound like I’m pimping for IBM, but if you read the article I can’t see how you can call it a toy. It’s gone far beyond the status of an expert system in its current role and there is little doubt that it will far exceed expectations.
This is a system that can assimilate unlimited quantities of data on any and every subject, however detailed, complex, tedious or mundane, as well as journal articles, scholarly texts, lecture transcripts, patient charts and test results, etc. It can take all of that information, make sense of it and return intelligent recommendations based on what it’s learned - yes, “learned.”
How is that in any sense a toy?
The issue is the “make sense of”. It doesn’t make sense of it. It applies a set of heuristics for pattern analysis that are then useful as targets for search patterns. The intelligence isn’t in Watson; it is in the design of the patterns and the highly optimised inference engines used to apply the search patterns. They are human crafted, and highly task specific. Watson is at the level of an idiot savant: fantastic recall, and an ability to do pattern associations.

New Scientist articles have become rather painful; they are full of gloss and hype, and never talk about the reality. That article is little more than a quote of an IBM flack’s press release. Sure, there is serious work being done, and valuable work. But the idea that Watson is right now delivering fantastic new insights into medicine is simply not the case. As a helpful tool it will be very useful. It is a huge database, and one that can apply vastly better search mechanisms than conventional computer systems. But “make sense of” it does not.
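To make that concrete, here is a deliberately dumb sketch of what heuristic candidate scoring looks like (nothing to do with Watson’s actual pipeline; the corpus and the scoring rule are invented for illustration):

```python
# Candidates are ranked by shallow pattern features; there is no
# "understanding" anywhere in the loop, just a hand-crafted heuristic.

def score_candidate(question_terms, passage):
    words = set(passage.lower().split())
    overlap = len(question_terms & words)      # shallow lexical overlap
    return overlap / (1 + len(words))

def answer(question, corpus):
    terms = set(question.lower().split())
    # Pick the passage the heuristic likes best; the "intelligence" is in
    # whoever designed and tuned the heuristic, not in the machine.
    return max(corpus, key=lambda p: score_candidate(terms, p))

corpus = [
    "Penicillin is an antibiotic used against bacterial infections",
    "Aspirin reduces fever and relieves minor aches",
]
print(answer("which drug treats bacterial infections", corpus))  # picks the penicillin line
```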
Watson isn’t a toy, but on the road to any sort of real AI, it is still banging the rocks together.
Expert systems for medical use are not new; indeed the first really successful expert system dates back to the 70s. This was Mycin, and even then it could outperform expert doctors in treating bacterial infectious diseases. Watson is just more of the same, but bigger. Another role in which expert systems have really shone, and have actually displaced doctors, is the analysis of ECGs.
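For anyone curious what that style of system amounts to under the hood, a Mycin-flavoured rule engine is basically a pile of IF-THEN rules with certainty factors. A tiny sketch (the rules and numbers below are invented, not real medical knowledge):

```python
# Each rule: (set of premises, (attribute, value) it concludes, certainty factor).
RULES = [
    ({"gram_negative", "rod_shaped", "anaerobic"}, ("organism", "bacteroides"), 0.6),
    ({"gram_positive", "coccus", "clusters"}, ("organism", "staphylococcus"), 0.7),
]

def infer(findings):
    conclusions = {}
    for premises, conclusion, cf in RULES:
        if premises <= findings:                  # all premises observed
            # combine evidence in the simplest possible way: keep the best CF
            conclusions[conclusion] = max(conclusions.get(conclusion, 0.0), cf)
    return conclusions

print(infer({"gram_positive", "coccus", "clusters"}))
# {('organism', 'staphylococcus'): 0.7}
```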
Nope I wasn’t, yet my sense of humor does seem to confuse you.
Well, I think any system that can get 50% on a test that trained doctors find difficult is doing pretty damned well. You don’t want to call that learning? OK. But then I guess the doctors didn’t really learn either.
I can think of quite a few Facebookers without a problem.
It’s incredibly impressive. But it’s not a step toward true AI or self-awareness.
Despite the speed, and access to a mind-boggling dataset of information with which it can narrow in on a correct answer, it doesn’t learn as efficiently as humans do, if learning is the right word to use at all. It can’t be embarrassed or feel foolish about giving a wrong answer. It doesn’t have feelings at all, or curiosity. All these things motivate us to learn and improve ourselves, because, in hindsight, before we knew we were wrong it felt as if we were right. Watson and its ilk don’t have this built in, and that’s a huge chunk of intelligence that’s not only missing but awfully abstract.
I just assume all the idiotic posts are posted by robots. Really dumb robots.
That was never my point. I cited the example of Watson as a counter-example to the claim that no progress in the field of AI had been made in the last 40 years. Watson invalidates that argument in spectacular fashion. First it competed against humans in a game that most humans find challenging at best, and triumphed against the two best players in the game’s history. However, this was regarded as a trivial accomplishment despite the fact that it involved the ability to actually **understand** natural language in real time. Of course, to truly understand, one might argue that Watson would have to be self-aware. So consider that my shorthand for being able to successfully **emulate** human understanding.
Now it has gone on to rival the ability of trained physicians but this is still not considered to be anything approaching human learning or understanding. Perhaps this is again something that relates to the lack of self awareness. Regardless, it is still able to successfully emulate what passes for human learning and understanding since the end result is indistinguishable.
Btw, once computers have written a Genesis 11 someone poke me please?
But that’s just it, the end result is distinguishable—at least for now.
But even considering yet more exponential advancements on that front, it’s taking a top-down approach to “learning”. You have to supply it with an enormous database of information, pre-catalogued so it can make sense of it. Then it rifles through all of it (in mere nanoseconds, granted), but it still has to access all of it and run a series of algorithms to narrow down what it has determined to be the most likely answer, based on all the possible probabilities it’s been programmed to assign.
This method can only go so far. It has no way of learning from the bottom-up, which is crucial to becoming indistinguishable from human learning.
For one, this method of AI cannot form a hypothesis, in the truest sense. It can’t observe a phenomenon totally alien or mysterious to it, reason out several hypotheses to explain it, and then choose the one it thinks most likely in order to proceed with formulating a testable theory.
It just can’t guess like we can guess, and never will. So in that sense, it’s doomed to be distinguishable from human intelligence and learning, despite how incredibly useful a tool it is.
No. Please read the article. Watson does this on its own. No humans involved. That’s the whole point of the system. Were it as you describe then yes, it would indeed be trivial.
It’s one thing to deduce the answer from a plethora of data before you, let alone in under 3 seconds. But besides being provided with an astronomical amount of knowledge from the start, in the form of multiple searchable datasets which it can rifle through at inhuman speed, can it learn from and correct for errors that affect the way it forms hypotheses in very subtle and abstract ways across its entire architecture, as humans do?
FYI - the linked-to article costs $35 to read. I’ll pass.
I suppose my point is, it’s not cognitive. It cannot think on its own terms, forming its own ideas. It’s a blind sorter and matcher of symbols, incapable of knowing what the symbols we show it actually *mean*, by themselves. This sort of “cognitive” AI is the holy grail, and Watson is at most only one part of achieving it, and at worst a dead end for artificial cognizance.