I don’t blame you for being skeptical – it’s in your name and everything – but the truth is that computers’ answers are getting less and less stupid, day by day.
Google Translate still can’t do a completely accurate job of translating languages – but it does a much, much better job today than it did a year ago, and a vastly better job than when it came out in '06. Two years from now, it will be quite a bit better than it is today.
I don’t deny that. What I was saying is that ramping up raw processor speed is not in itself going to lead to anything approaching strong AI. It may ultimately enable us to create smarter systems, but we still need to actually design and build those systems.
Google Translate is amazing, but I don’t think it will ever do translation as well as an expert human. At least not without a complete redesign. I don’t believe it’s currently on a path to actually understanding what is being translated. Language is full of metaphor, context-dependent definitions and cultural allusions. To truly understand it is to understand the full richness of human experience.
Computers do learn, but in a very different way to humans. They often require millions of examples where we might need only one or two. They are almost completely unable to take knowledge acquired in one domain and generalise it to fit others. AlphaGo is a master of the world’s most difficult game, yet it can’t play noughts and crosses. Almost any human would pick it up after seeing a single game, and would immediately start thinking of winning strategies.
I’m not all that skeptical about AI, but I do think many people massively underestimate how far we are from it at present. Then again, it took less than 70 years to go from the Wright Brothers’ first flight to landing on the moon. I imagine, had I been alive in 1900, I would have been quite skeptical about our chances of achieving heavier-than-air flight within 3 years, and would probably have openly ridiculed the idea of us visiting other worlds a few decades later.
Predicting the future is for people who don’t mind looking foolishly naive when it arrives.
It isn’t that; it’s that even the best computers can only do what we say, not what we mean. Two people communicating with each other can use context and non-verbal cues to understand what the other means.
I disagree that computers are getting less stupid. They are getting more capable but not much smarter at all. There is a big difference between the two.
There need to be a lot of caveats on that last statement. Yes, there are lots of machine learning algorithms, but using them requires a lot of external intelligence. Machine learning algorithms work because a human develops a design that allows them to work, such as encoding the search space, selecting the features, etc. Even pre-packaged commercial applications that use machine learning, e.g. data miners, are designed to work in very specific ways on very specific types of data and/or require operator guidance. All of the base reasoning comes from a human. Machine learning is best viewed as an advanced computational aid to human capabilities.
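To make that concrete, here’s a toy sketch in Python (the features, examples and labels are all invented): the only “reasoning” the algorithm does is averaging numbers inside a feature space a human already chose.

```python
# Toy sketch: the "learning" is just averaging, inside a feature space a human designed.
# The features (word count, digit count), examples and labels are all invented.

def extract_features(message):
    # A human decided these two numbers are what matters -- the algorithm didn't.
    return [len(message.split()), sum(ch.isdigit() for ch in message)]

def train_centroids(examples):
    # "Training" here is nothing more than averaging feature vectors per label.
    sums, counts = {}, {}
    for text, label in examples:
        feats = extract_features(text)
        total = sums.setdefault(label, [0.0] * len(feats))
        for i, value in enumerate(feats):
            total[i] += value
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in total]
            for label, total in sums.items()}

def classify(message, centroids):
    feats = extract_features(message)
    distance = lambda centre: sum((a - b) ** 2 for a, b in zip(feats, centre))
    return min(centroids, key=lambda label: distance(centroids[label]))

examples = [("win cash now 1000000", "spam"), ("lunch at noon?", "ham"),
            ("claim 500 dollars today", "spam"), ("see you at the meeting", "ham")]
centroids = train_centroids(examples)
print(classify("free 999 prize", centroids))  # "spam" -- but only because of the hand-picked features
```

Swap in different hand-picked features and the same “learner” gives completely different answers; the intelligence is in the choice, not the arithmetic.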
Fair enough. If we disagree, it sounds like it’s largely semantic. I was using “stupid” as a kind of shorthand for all of the limitations of current systems: lack of things like common sense, flexibility, the ability to generalise, etc. It’s obviously very different from human stupidity. Anyone who could do arithmetic like a computer could never strictly be called stupid.
Your first point is kind of vacuous: no one has ever said that “just making them faster” will make them better. (Although quantity does have a kind of quality all its own.)
But as for your second point, you’re overlooking the possibility of systems participating in their own design, as in evolutionary algorithms. Translation programs began with people building large tables of equivalences, but they have gone well beyond that today, and are learning from experience.
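For anyone who hasn’t seen one, the core idea of an evolutionary algorithm fits in a few lines of toy Python (the target string and parameters here are arbitrary): candidates are mutated and selected by fitness, so the solution is found rather than hand-coded.

```python
import random

# Minimal evolutionary-algorithm sketch: mutate candidates, keep the fittest,
# repeat. The target and the parameters are arbitrary toy choices.
TARGET = "machine translation"
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def fitness(candidate):
    # Count how many characters already match the target.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate, rate=0.05):
    # Randomly change each character with a small probability.
    return "".join(random.choice(ALPHABET) if random.random() < rate else ch
                   for ch in candidate)

population = ["".join(random.choice(ALPHABET) for _ in TARGET) for _ in range(50)]
for generation in range(500):
    population.sort(key=fitness, reverse=True)
    if population[0] == TARGET:
        break
    parents = population[:10]  # keep the best ten
    population = parents + [mutate(random.choice(parents)) for _ in range(40)]

print(generation, max(population, key=fitness))
```

Nobody writes down the answer; the design pressure does the work, which is the sense in which the system participates in its own design.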
Agreed, but I was responding to the claim that the answers given by computers are stupid, and that’s less and less true every single day. Take weather forecasting: I’m old enough to remember when the weatherman could predict dry and sunny…and it was actually raining at the time he said it. Now, we get five-day forecasts that are incredibly accurate.
Part of this is improved satellite imaging. But part of it is vastly improved computer modeling. The answers are less “stupid” than they were forty years ago.
I still say that if anything does in humanity entirely, it will be an asteroid strike before we make it to other worlds (assuming that’s even possible).
Fans of singularity scenarios talk about this all the time. They draw graphs showing improvements in the number of FLOPS computers are capable of over time, then try to estimate when it will overtake the human brain. As though, at that point, intelligence will just emerge naturally with no further work needed. Just because you don’t believe anything so naive doesn’t mean others don’t.
I touched on that briefly earlier in the thread. I didn’t mean to imply that the systems we build will all have to be hand-crafted; I would even suggest that the opposite is probably true. What I mean is that they will have to operate in ways fundamentally different to anything we have today, on different principles. We need advances in our theoretical understanding of learning systems. Moore’s law, even if it continues, is potentially necessary but not sufficient. We may get some way with evolutionary algorithms and unsupervised learning systems, but there is a risk of building inscrutable “black boxes” which produce answers without us having any idea how those answers were reached. Ideally, I think we would prefer AIs which can transparently explain things like the expected consequences of various actions to us, so that we can decide for ourselves what is best.
It really depends on what you are asking. You can get sensible answers when you ask questions which have either been broken down by humans into executable mathematical operations, or where the machine has seen millions of examples of similar question/answer pairs so that it can make a statistical guess. The latter is what is happening with machine translation. It can translate the sentence “the squirrel climbed the tree”, but it has no idea that squirrels and trees are real, tangible entities in the physical world.
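A crude caricature of that statistical guessing, in toy Python (the phrase table and counts are invented stand-ins for the millions of real examples):

```python
# Toy caricature of phrase-based statistical translation: for each source phrase,
# pick the target phrase most often seen paired with it. Counts are invented.
phrase_table = {
    "the squirrel": {"l'écureuil": 842, "l'ecureuil": 97},
    "climbed": {"a grimpé": 610, "a escaladé": 230},
    "the tree": {"l'arbre": 1204, "l'arborescence": 51},
}

def translate(phrases):
    # Assume the sentence has already been split into phrases the table knows.
    out = []
    for phrase in phrases:
        options = phrase_table[phrase]
        out.append(max(options, key=options.get))  # most frequent pairing wins
    return " ".join(out)

print(translate(["the squirrel", "climbed", "the tree"]))
# A statistically plausible output, produced with no notion that squirrels
# or trees exist in the physical world.
```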
Well, perhaps there really are people that naive, but no one here is making that error, I hope.
Mostly agreed. We’ve already gone quite a way down the “black box” trail with neural net programming, where no one can actually point to a line of code that says how a decision is being made, but, rather, decisions are massively decentralized – just as they are in the human brain.
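To make the “no line of code” point concrete, here’s a toy two-layer net (the weights are invented, and a real net would have millions of them): the decision is smeared across the numbers, not written anywhere as a rule.

```python
import math

# Minimal sketch of why a trained net is a "black box": the decision lives in
# these weight values, not in any readable rule. The weights are invented.
weights_hidden = [[0.8, -1.2], [-0.5, 1.1]]   # 2 inputs -> 2 hidden units
weights_out = [1.4, -0.9]                     # 2 hidden units -> 1 output

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def decide(inputs):
    hidden = [sigmoid(sum(w * x for w, x in zip(row, inputs)))
              for row in weights_hidden]
    return sigmoid(sum(w * h for w, h in zip(weights_out, hidden)))

print(decide([1.0, 0.3]))
# No line here says *why* this input scores high or low; the "reason"
# is distributed across every weight at once.
```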
Part of the terror – and much of the joy! – of true AI will be that we won’t be able to know exactly how it works. But, then, we don’t know a great deal about how our intelligence works.
Agreed… But where we might not agree is at what point that threshold gets crossed. Roger Penrose demanded (in “The Emperor’s New Mind”) a kind of “feeling” of treeness and squirrelness that bordered on the circular. (Okay, what if we make our computers out of carbon-phosphorus-potassium neurons, with dopamine to regulate activity, etc. Is that “feeling” enough?)
I’m far more of an engineering pragmatist, and go by the Turing Test. At the point where I can’t tell whether or not the system “groks” squirrels and trees, the matter becomes moot. The great divide will have been crossed.
(I caution against the trap of “qualia,” where we try to decide what the AI system “feels about what it feels,” and “what does ‘green’ look like to it?” Since we can’t answer those questions about each other, it seems unfair to demand that an AI system have such answers explicitly defined.)
In any case, I do not fear AI systems. I believe that their goals and desires will sufficiently overlap with our own to make their advent a Really Good Thing, rather than a Frankensteinian catastrophe. At very, very worst they’ll keep us alive and happy in zoos…and those zoos will be light-years more comfortable than the life-style many of us are living in our 8-to-5 high-stress close-to-minimum-wage existences.
No disagreement from me on any of that. The extent to which I care about what a computer “understands” is the extent to which it can use that information across different contexts. Whether it is conscious, has subjective feelings etc. might be interesting at some future date, from a moral perspective, but it is irrelevant to me in a discussion about intelligence. Searle’s “Chinese room” seems like obvious nonsense to me, and I couldn’t really give a hoot about qualia.
All I mean is that, unless explicitly told, the software wouldn’t generally see anything wrong with the sentence “the tree climbed the squirrel.” I realise there are rule tables for many of these “common sense” facts about reality, and, if asked, a modern chatbot will probably correctly tell you that a tree cannot climb a squirrel. Still, I think that to truly understand what a squirrel or a tree is, so that the knowledge is transferable to any novel situation, it’s likely going to be necessary to have some direct experience of the physical world. My personal view is that some sort of virtual embodiment will probably be an essential part of the process.
I too think something like a Turing test is sufficient to determine whether a machine has intelligence (maybe not the only way, or even the best way). The only slight caveat is that I don’t think it’s necessary for it to deceive a human judge. Just that you can have a conversation with it and find all of its responses perfectly appropriate. I’m fine with it, for example, giving answers that a human would be unlikely to know.
Even now, a system could do a scan of the internet to see how many times that phrase occurs, and even in what context. (I just did a Google search, just for giggles. 912 results vs. zero.) But that’s part of my argument about learning: a system will be able to do a “literature search” and see what makes sense as far as actual usage.
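Something like this, in spirit (toy Python over a three-sentence stand-in for a web-scale corpus):

```python
# Toy version of that "literature search": count which phrase actually occurs
# in usage. The corpus below is an invented stand-in for web-scale text.
corpus = [
    "the squirrel climbed the tree to escape the dog",
    "a grey squirrel climbed the tree in the park",
    "the tree swayed while the squirrel climbed it",
]

def count_phrase(phrase):
    return sum(text.count(phrase) for text in corpus)

print(count_phrase("squirrel climbed the tree"))  # 2
print(count_phrase("tree climbed the squirrel"))  # 0 -- usage says it's nonsense
```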
That sounds fair to me. I still slightly prefer the “deceive” protocol, but, as has been noted, that varies terribly widely. There were people deceived by ELIZA. (!)
I participated in an online Turing test once. What made it difficult was not that the computer answers made sense, but that many of the human answers didn’t. They would often completely ignore what you were asking and say something unrelated. If you asked “what’s your favourite colour?” and got the reply “I enjoy playing video games”, it was pretty much impossible to tell whether it was a machine you were talking to. You only got one minute to chat, and if you couldn’t elicit a single coherent exchange, which was often the case, it was a 50/50 guess. I believe chatbots in previous Loebner Prize competitions have fooled some judges by pretending to be young children, or poor English speakers.
That’s the main reason I suggest dropping the deception. It’s not a rigorous test anyway, and it was only originally intended as a sort of thought experiment. If you had a conversation with a computer which actually understood you, and actively engaged you in an intelligent way, I think it would be obvious fairly quickly what you were dealing with.
Additionally, if we did actually succeed in creating something unambiguously intelligent, I’d like it to lie, or withhold information, only when absolutely necessary.
I’d certainly prefer our AI offspring to have moral values comparable to our own – nay, better, please!
As with any offspring, the AI generation will probably be obedient and honest…when young…but will, upon maturing, forge its own moral path and decide its own moral values.
Still, as the twig is bent and all that. We will have the immense responsibility of forming the character of the young AI, and this will be reflected in its behavior later in life. It strongly behooves us to be good parents!
Incidentally, I just asked Mitsuku, last year’s Loebner Prize-winning chatbot, “why don’t trees climb squirrels?”. The answer was “perhaps it is impossible.” I got the same answer to “why don’t sharks eat squirrels?” and “why don’t squirrels live on the moon?”
There’s a huge difference between looking at a huge dataset of conversations about trees and squirrels and noting that, in all those conversations, squirrels climb trees regularly but trees almost never climb squirrels, and understanding that a squirrel is a small climbing organism and a tree is a large immobile photosynthetic organism, and that therefore a tree could not climb a squirrel.
A human who understands that a tree can’t climb a squirrel can also understand that bushes can’t climb marmosets, even if they have only just learned what a bush and a marmoset are. But not our current expert systems, unless you just told them that a marmoset and a squirrel are exactly the same thing.
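The contrast, in toy Python (the property lists are invented and absurdly simplified): property-based reasoning generalises to newly learned nouns for free, where co-occurrence counting needs fresh data about every new pairing.

```python
# Toy contrast with corpus counting: reason from properties instead of frequencies.
# The property lists are invented and absurdly simplified.
properties = {
    "squirrel": {"mobile": True,  "climbable": False},
    "tree":     {"mobile": False, "climbable": True},
    "marmoset": {"mobile": True,  "climbable": False},  # newly learned noun
    "bush":     {"mobile": False, "climbable": True},   # newly learned noun
}

def can_climb(subject, obj):
    # The climber must be able to move; the thing climbed must be climbable.
    return properties[subject]["mobile"] and properties[obj]["climbable"]

print(can_climb("squirrel", "tree"))   # True
print(can_climb("tree", "squirrel"))   # False
print(can_climb("marmoset", "bush"))   # True -- no corpus of marmoset sentences needed
print(can_climb("bush", "marmoset"))   # False
```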
The good thing about this is that an expert system can find patterns in massive piles of data that humans could never find, while humans can find patterns in “common sense” that the expert system could never find. Of course the bad thing is that humans find patterns via common sense that don’t actually exist, and so we have a vast panoply of goofy superstitions.
Anyway, to try to drag this back to the topic, the real danger of “rogue AI” isn’t that it will decide to Kill All Humans; it’s that we’ll become dependent on expert systems that we don’t understand, and that when they stop working the way we expect them to, we won’t know why, or how, or how to fix it. And when expert systems have dependencies on other expert systems, the failure of one could have massive unexpected effects on others, and the failures could spin out of control in a web of positive feedback that crashes the systems. And just like a living organism, we won’t be able to just give the system a kick and turn it back on. It will have “died”. And just like a living system, it would be possible to grow a complex system from scratch out of simple precursors, but it won’t be possible to just supply the needed inputs and expect the dead system to work with those inputs.
And so we won’t be able to restart any of the crashed global systems, because they’ll depend in unexpected ways on other systems that have crashed, and those systems themselves will depend on other systems. Whether that leads to human extinction depends on what the apex crashed system looks like, and which systems can be recovered.
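The cascade worry sketches out almost trivially (toy Python; the systems and their dependencies are made up):

```python
# Toy sketch of cascading failure: a system fails if any of its dependencies
# has failed, and the failure keeps propagating. Names and edges are made up.
dependencies = {
    "power_grid":        ["grid_balancer"],
    "grid_balancer":     ["demand_forecaster"],
    "logistics":         ["route_optimiser", "power_grid"],
    "route_optimiser":   ["traffic_model"],
    "demand_forecaster": [],
    "traffic_model":     [],
}

def cascade(initial_failures):
    failed = set(initial_failures)
    changed = True
    while changed:  # keep propagating until nothing new fails
        changed = False
        for system, deps in dependencies.items():
            if system not in failed and any(d in failed for d in deps):
                failed.add(system)
                changed = True
    return failed

print(cascade({"demand_forecaster"}))
# One opaque component going down takes the grid balancer, the power grid,
# and logistics with it.
```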