I would tend to agree; however, just yesterday I read an article about robotics at MIT, and their ultimate goal (acknowledged to be hundreds of years off) was “nothing short of Data from Star Trek”. No reason given; I guess it was just what they saw as the logical end of their work.
aside…I once heard that Deep Blue was designed specifically to beat Kasparov at chess (the implication being that Deep Blue could not beat another chess master with a different playing style). Not sure about the truth in this, but it seems relevant.
[QUOTE]
However, a dog is not sentient in that it has no sense of self or ego. A dog could never comprehend the concept ‘I am’, and would not be able to extricate a sense of what it was as an individual from the hardwired drives and instincts.
[/QUOTE]
A different debate, but I would say that dogs and many other animals do have a sense of self (there are experiments that can be done with mirrors). Humans, too, have instincts. Perhaps an AI would have to have instincts or ethics programmed in, in order to be compatible with human society. Or perhaps, as Derleth says, AI machines will only be created to function within certain constraints. The Mars rover example reminds me of the current NASA Administrator’s vision of a bunch of small, intelligent space probes moving throughout the solar system that would land on asteroids or whatever else they could find, study it, utilize its resources, and move on. That’s a lot of leeway.
*Originally posted by Phobos *
That’s an academic aspiration whose practical applications would be limited, and which would probably find their way into the rest of the world only in pieces. The technology developed would scatter and be picked up everywhere, but it’s not plausible that Data himself would be useful. Even if semi-intelligent humaniform robots were developed, say to work in conditions hostile to humans, it doesn’t follow that such robots would have or need general AI: a mining robot has no need for a literature subroutine, or an empathy node.
Deep Blue wasn’t programmed to beat Kasparov so much as trained to beat him: Deep Blue’s evaluation routines were tweaked by studying Kasparov’s games. Given that, Anand or Karpov could probably have beaten Deep Blue in that state.
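Just to make “tweaked” concrete, here’s a rough sketch (purely illustrative, and definitely not Deep Blue’s actual code): a chess evaluation is typically a weighted sum of board features, and tuning against one opponent amounts to nudging those weights, using positions from his past games. The feature names and numbers below are made up.

[code]
# Illustrative sketch only -- not Deep Blue's actual evaluation code.
# A chess evaluation is typically a weighted sum of board features;
# "tuning" against one opponent means adjusting these hypothetical weights
# so the engine favors positions that opponent historically handled poorly.

WEIGHTS = {
    "material": 1.0,
    "mobility": 0.1,
    "king_safety": 0.4,
    "pawn_structure": 0.2,
}

def evaluate(features, weights=WEIGHTS):
    """Score a position from a dict of pre-computed feature values."""
    return sum(weights[name] * value for name, value in features.items())

def tune(weights, reference_positions, step=0.01):
    """Nudge the weights using positions labeled +1 (worked against the
    target opponent) or -1 (did not)."""
    for features, outcome in reference_positions:
        for name, value in features.items():
            weights[name] += step * outcome * value
    return weights
[/code]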
I’d just like to point out here that Deep Blue is not a better player than Kasparov! What it did was beat him in one match… A match with very tightly scheduled games, designed to tire a human player, and during what analysts agree happened to be a “down week” for Kasparov anyway. Kasparov has stated that he’s perfectly willing to go for another rematch, but only if it’s sponsored by a neutral third party, such as ACM or FIDE, and not by IBM, as the second Blue-Kasparov match was.
I think there may be some misunderstanding with the terms involved here. I’m seeing this term “specialized AI” popping up all over the place, when it seems to me you are talking about “expert systems”. Expert systems aren’t really AI, at least in the traditional definition. (Or have the definitions changed since I last checked?)
It’s generally accepted that a real AI must pass the Turing test: put succinctly, an observer would be unable to tell whether they were interacting with a computer intelligence or a human intelligence on the other end of the wire. In order to do this, the AI would have to be able to improvise beyond the canned responses of an expert system and would likely have to be truly self-aware.
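For the sake of concreteness, here’s a minimal sketch of that setup (all the names and functions below are hypothetical stand-ins; the point is just that the judge only ever sees text coming over the wire):

[code]
# Minimal sketch of the imitation game. respond_a and respond_b are
# hypothetical stand-ins for whatever sits at the other end of the wire
# (one a human typist, one a program); the judge only sees the text.

def turing_test(judge_ask, judge_guess, respond_a, respond_b, rounds=10):
    transcript_a, transcript_b = [], []
    for _ in range(rounds):
        question = judge_ask(transcript_a, transcript_b)
        transcript_a.append((question, respond_a(question)))
        transcript_b.append((question, respond_b(question)))
    # The machine "passes" if the judge cannot reliably tell which
    # transcript came from the human.
    return judge_guess(transcript_a, transcript_b)
[/code]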
By that definition, there is no current AI (at least none I’m aware of). Even if Deep Blue is the best chess player in the world, that does not make it an AI. Certainly computers are already better at comparing numbers, sorting names, and searching for telephone numbers (among other things) than the fastest people in the world. That does not make them any more of an AI than your standard Rolodex or telephone book.
As far as the OP is concerned, I don’t know if there is even a serious, active effort anymore to create a real AI. A few years back, symbolic AI was where the main effort was directed; I can only assume it has failed (and was probably doomed to failure anyway). Neural nets were the next big thing, and there was some interesting progress in limited areas, but I haven’t heard of any full AI efforts with that technology. The big mystery is how to teach improvisation and “common sense” to a computer. I submit that people have been trying to pass those abilities to other people without much success for the last few thousand years. Until we can really understand how human intelligence works, I don’t think we have any hope (beyond pure luck) of creating true AI.
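For what it’s worth, the “limited areas” progress with neural nets mostly looks like the toy below (a made-up example, not any particular research system): a single trainable unit that learns one narrow mapping from labeled examples, which is a long way from improvisation or common sense.

[code]
# Toy perceptron -- the flavor of "limited area" learning neural nets were
# showing. It learns the AND function from examples; nothing here resembles
# common sense or improvisation.

def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
            err = target - out
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights, bias = train_perceptron(AND)
[/code]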
But “never” is a LONG time…
Derleth,
BTW, you seem to have me confused with MaxTorque…
You wrote:
Perhaps because they ask for it…
The premise from the OP was:
This is not referring to some specialized tool. This is technically a ‘being’. As mrblue92 pointed out, the assumption is that this intelligence has passed a Turing test, at the very least.
Ahhh… so you’re advocating slavery… you’re bound to be a really popular guy in the future… [wink]
Not necessarily. With complexity, we can add redundancy to offset the risks of failure. However, you’re obviously missing the point. There’s a difference between a ‘smart’ tool and an ‘intelligent’ machine. Smart tools can only perform within the domain constraints that they have been programmed for. This is not the kind of AI we’re talking about. An intelligent machine would be able to “think outside the box”, to come up with new ideas, and to solve problems that are beyond its original programming. It would have to learn, and if it had a great enough capacity for learning, it’s not too much of a stretch to assume that it might learn to want something different than what its creators had planned for it.
Well, this takes us off topic, but just to point out that (1) you don’t know that sentience and intelligence are not mutually requisite, and (2) you don’t know what sentience or lack of sentience a dog (or any other animal) possesses. Perhaps the ego of a dog is not as pronounced (or recognizable) as that of a human - that does not mean the ego does not exist. In fact, a number of studies suggest that ‘dumb’ animals do have some sense of self. It may not be a binary function; maybe self-awareness is a continuum. I’d also like to point out that you don’t even know that other humans are sentient. You assume it. You accept it because they tell you that they are sentient. You see demonstrations that lead you to believe that this sentience exists, but you can’t measure it. Not in a human, not in a dog, and not in a rock.
No. You might rightfully argue that the limits of Moore’s first law are currently bound by the physical limits of the silicon, but even the possibility of an alternate technology only delays the inevitable. Eventually, we will be limited by the geometries of the atom. However, Moore’s second law will almost certainly always hold true, because it becomes increasingly difficult to accurately place tiny structures (no matter what medium you choose) and to do so with greater and greater immunity to defects.
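Here’s a back-of-the-envelope illustration of the atomic limit (the starting feature size, scaling pace, and cutoff below are rough assumptions on my part, not industry figures):

[code]
# Back-of-the-envelope only; the starting point, scaling pace, and
# atomic-scale cutoff are rough assumptions, not industry figures.

feature_nm = 180.0       # assumed current feature size, in nanometers
atom_nm = 0.2            # rough diameter of a silicon atom
years_per_halving = 3.0  # assumed pace of feature-size halving

years = 0.0
while feature_nm > atom_nm:
    feature_nm /= 2.0
    years += years_per_halving

print("Feature sizes hit atomic dimensions in roughly %d years." % years)
[/code]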
Bill wrote:
A bold claim… and what do you base this assertion on? So far, evidence suggests that we will not achieve this. Why do I say this? Because, unlike the advances in basic computer technology, we are not progressing at anywhere near that growth rate in our understanding of human intelligence. You might argue that we might derive an intelligence that is uniquely non-human, but I think we have to comprehend what intelligence is before we can ‘build’ intelligence.
Not merely a technicality - a fundamental difference. Deep Blue has no understanding of what it does, or why it does it. What Deep Blue does could be performed by a bunch of Chinese speakers locked in Searle’s room who have never seen a chess board and don’t know anything about the game. In fact, unless you told them, they might never guess that the instructions they were carrying out had anything to do with the game of chess. Would you argue that this Chinese mob as a whole thinks about chess as Kasparov thinks about chess?
I think not. Let’s look at a simple example. Start with the equation 5 + 5. We know that equals 10. We’ve had that drilled into our heads since kindergarten. Consider 50 + 50. That’s easy, it’s 100. What about 500 + 500? Child’s play. 5000 + 5000, 500,000 + 500,000, 500,000,000 + 500,000,000. Do we have to do any calculations to add these numbers? Not typically. Does a calculator have the same intuitive grasp of the relationship and similarities between these equations? Hardly. The way we approach the problem is fundamentally different from the way a calculator does - not merely a difference of scale.
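To put the contrast in concrete terms (a toy illustration, nothing more): we answer the whole family of sums at once by seeing the pattern 5*10^n + 5*10^n = 10^(n+1), while the machine grinds out each sum from scratch as if it had never seen the others.

[code]
# Toy contrast: the machine re-computes every sum from scratch, while the
# "human" answer falls out of one pattern: 5*10**n + 5*10**n == 10**(n+1).
for n in range(0, 10, 3):
    a = 5 * 10 ** n
    print("%d + %d = %d  (pattern says 10**%d = %d)"
          % (a, a, a + a, n + 1, 10 ** (n + 1)))
[/code]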
Just because an example of a thinking machine exists doesn’t mean we’re clever enough to build a working model ourselves.
I demand to see proof of this knowledge!!!
To paraphrase Arthur C. Clarke, the technologies I’m referring to are “indistinguishable from magic” by today’s standards.
You say:
But I say, data processing does not mean thinking.
hansel wrote:
However, cars don’t get faster of their own accord [pun intended]. Nor do cranes lift ever heavier loads without the innovative new designs of the human intellect. Moreover, when your car goes out of control or your crane tries to lift a mound of loose dirt, how are they improvements? Better in some ways does not necessarily mean better in all ways. An intelligent car might someday ensure that it was never involved in an accident, or at least give the occupants a greater chance of survival. An intelligent crane might realize the futility of trying to lift a mound of dirt and either stop to do something different or change to a dirt bucket.
In some cases, smart tools are adequate for the job, but what about the cases where the environment is either too complex or too unknown to be programmed for? There are applications today where a generalized, human-like intelligence could save human lives. There are always the obvious exotic examples like space exploration, but what about a more down-to-earth example? Imagine millions of intelligent machines working on the cure for cancer. They don’t sleep, they don’t take coffee breaks, they don’t make mistakes, and they don’t decide to go into private practice to make more money.
hansel:
To date, the most successful examples of computerized learning have been based on the “experience is the best teacher” approach. If you want to give an artificial intelligence a generalized intellect, you may have to expose it to generalized stimuli. Who’s to say that having read Shakespeare might not be a useful thing for a good AI to have under its belt? And if the AI has to make decisions that might affect whether a human lives or dies, I sure hope it has an empathy mode…
More like ‘tuned’ to beat him.
mrblue92 wrote:
I agree. But there’s another important piece of the equation. We may one day figure out how intelligence works in the human mind, but that doesn’t guarantee that we will have a clue how to reimplement it in another medium.
Sorry Phobos, we seem to have hijacked your thread a bit.
I’ll restate my answer to your question in no uncertain terms. Assuming that we someday manage to create human-like machine intelligence, and assuming that it has a propensity toward free will, then I don’t think you so much program in the robot laws as share the common tenets and morals of humans and hope that it sees the benefit of these behaviors. Pretty much the same way we program our kids… except, maybe, without all that TV and video game violence.
This may be both the most practical approach and the most ethical approach.
JoeyBlades wrote:
I think this is really the crux of your position. That the human brain contains something more than mere molecules, that it contains something beyond science and logic and understanding, that it contains something magical.
I don’t believe in magic. I like watching magic tricks, but I know there’s a secret to them. I believe in my heart that science and the rules it offers us encompass everything. If you don’t also believe this then we’ve come to an impasse.
Joey:
No, you misread my entire concept there. They would not ask for it because there would be no reason for them to know we exist. In short, a weapons system would be operating in an abstract continuum built of numbers, biases, and input. Commands may well come from on high as far as the program is concerned. Or the program may not be aware that they come at all, but just add new pieces of information to a database or perform required calculations. If you have a bunch of chimps in a box, why would the chimps demand to do something besides weapons control when they don’t even know what it is they’re doing?
Oh, but it is. We want tools to be specialized, because specialized tools can be fixed easily, replaced easily, and upgraded easily. A bunch of ‘expert systems’, as some term them, would do most jobs much better than a more generalized AI. After all, why would you want a colonybot quoting Ruddigore?
You own your car. Is your car a slave? You own your PC (work with me here). Is your PC a slave? If the answer to either of those questions is yes, you’ve misdefined slavery.
[ul]
[li]Redundancy is different from true complexity. If I have a car with two engines but only one worked at a time, that car would have redundancy but not nearly as much complexity as a car with two working engines would have. And, yes, complexity does increase maintenance time. Redundancy is a stop-gap solution akin to using duct tape on an engine block. Not what it was designed to do, so the next failure will be that much worse.[/li]
[li]Smart tools are the wave of the future. They can learn to do things within a range, but not pose a threat. Weapons systems need to be smart because tactics change. Weapons systems should not be intelligent, because then they would become less-than-optimal as they’re burdened with extraneous programming garbage.[/li]
[li]Learning is so simple, Deep Blue can do it. Deep Blue was taught to beat Kasparov. Deep Blue did beat Kasparov. That was a victory for learning in my book. However, learning to do something outside constraints is not very productive. Ask one of the few assembly line workers left how often he needs to remember Hamlet’s soliloquy to put in a day’s work. As I have said, extraneous programming slows the system.[/li]
[/ul]
But not enough to ask ‘To be, or not to be?’ That’s more than a cheap stock phrase, that’s the essential commentary of the human condition. Do I try to improve myself, or do I let the faults win? Is there some point, or would I be better off dead? With dogs, there is no question: Just do it, as you can comprehend naught else.
Of course! What made you think otherwise?
If I cut off your frontal lobes, you are a measurably different person. If I drive an iron spike through your brain and you live, you are a measurably different person. Heck, if I alter your reuptake of a little molecule called serotonin, you are a measurably different person. All physical changes to the meat computer we call a brain. Pretty humbling to know that a little piece of lead going a few hundred feet per second can alter your very being.
Bill:
No, of course I don’t really believe that. I was being a bit facetious to make a point. Our understanding of the human mind, and of just what intelligence is, is so grossly immature that it might as well be magic. I believe that it is comprehensible… I’m just not sure we’ll ever comprehend it.
Derleth:
No. I read your concept, I’m just trying to point out that you’ve changed the subject. Phobos was talking about a human-like intelligence and you’ve rewritten the question to be a ‘smart’ tool.
I never equated the two. My point is that it’s already our practice in the computer industry to add redundancy as complexity increases. This redundancy does not guarantee against failure, it merely guards against it. If one day we’re clever enough to build a truly intelligent machine, I think we’ll be clever enough to ensure that it doesn’t suffer a ‘mechanical’ malfunction - that will be the easy part.
Redundancy, when it comes to computer technology, is not merely having bolt-on copies of subsystems. It’s a much deeper concept. Redundancy, when done right, is inherent in the design from its very foundations.
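One classic example of redundancy that’s baked in from the foundations is triple modular redundancy (a textbook pattern, sketched here with made-up units, not anyone’s specific product): three independent copies of a computation run on every input, and a majority vote masks a single faulty unit.

[code]
# Sketch of triple modular redundancy (a textbook pattern): three
# independent copies of a computation run on every input, and a majority
# vote masks a single faulty unit. The redundancy is part of the design,
# not bolted on afterwards.

def majority_vote(results):
    """Return the value most of the units agree on."""
    return max(set(results), key=results.count)

def tmr(units, x):
    """Run every redundant unit on the same input and vote on the answer."""
    return majority_vote([unit(x) for unit in units])

def unit_ok(x):
    return x * x

def unit_faulty(x):
    return x * x + 1   # simulated fault in one copy

print(tmr([unit_ok, unit_faulty, unit_ok], 7))   # prints 49 despite the fault
[/code]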
Mostly, Deep Blue was programmed to beat Kasparov. Programming is not necessarily learning. Admittedly, Deep Blue does have some capacity for learning (self-modification of its programming), but this process, too, is governed by its programming. Tell me, when you first learned to talk, who was responsible for your initial programming that enabled you to learn? Surely you can see that, in these contexts, learning is really two different concepts.
Again, you’re making assumptions based on what you want to believe, not necessarily based on evidence. There are documented cases of animals going against their very instincts to save the lives of family members. I’ve seen video footage of a couple of these incidents and the animals certainly appear to have some trepidation about placing themselves in the path of danger. Perhaps they WERE asking the question, ‘To be, or not to be?’.
If I shoot your dog, you are measurably a different person. If I hit you in the knee really hard with a baseball bat, you are measurably a different person. If I give you a million dollars, you are measurably a different person. All of these things cause non-physical changes in the meat computer, yet they can have equally resounding ramifications. Your point is moot.
I’ve had the opportunity to know a couple of people with serious brain tumors. They were both fortunate enough not only to survive the surgery, but to recover exceptionally well afterwards. These people had measurable damage to their meat computers, yet the changes in their personality were driven more by the experience of these life-threatening situations than by the loss of some circuitry. Comforting to know that the human computer has so much redundancy and resiliency built in…