Is it possible to make a sentient robot (a la sci-fi movies)?
Not currently; I suspect this thread may end up in GD sooner or later, as AI is something on which not everybody agrees…
If we do manage to make a robot that appears sentient, we will never be able to know whether it really is sentient, or if it’s just mimicking all of the outward signs (or indeed whether there is any difference).
If someday we get a very clear idea of how we are sentient, and exactly what that means, then we can start to discuss whether a machine can be made with the same type of sentience.
My gut feel is that yes, someday we will be able to make a machine that can simulate every function of the brain perfectly. Not in my lifetime, though.
Read Kay Kurzweil’s “The Age Of Spiritual Machines”.
Ray. Ray Kurzweil.
When I still received Wired magazine, I read an article that addressed this issue. It said no, at least not in the foreseeable future. The gist of the article was that the human brain is not just a very complicated computer, and no matter how much you increase a computer’s memory and processing speed, you aren’t going to change the fact that it doesn’t think like we do. In other words, binary logic isn’t sophisticated enough to model human thought patterns. We need the next major advancement over modern microprocessor technology before true AI becomes possible.
I don’t see why not…someday. Although, as already mentioned, ‘sentience’ and exactly what that means is a slippery subject. Are amoebas sentient? Fleas? Earthworms? Hell, when you get down to it, defining what constitutes ‘life’ is actually rather tough (by many definitions fire would qualify as alive…then you have things like viruses muddying the water). If you are sentient, are you therefore alive? (I’d think so, but a robot misses on many definitions of life…e.g. it can’t reproduce, eat, etc.)
Couldn’t technology like that of the Asimo robot http://world.honda.com/robot/ and the best leading A.I. be combined to create a crude sentient robot?
Sounds like the article oversimplified a little; computers work in binary ‘under the bonnet’, sure, but that’s not the level at which we would expect our AI model to run.
If something can be exhaustively described, it can be modelled.
Asimo is as sentient as a toaster. While cool, it is mostly an attempt to create a humanoid robot. For sentience you don’t need the humanoid part; it could be a sentience in a box with no mobility whatsoever. Although I do wonder whether, for true sentience, the machine would need a way to experience the world around it (cameras and such).
I’ll just take the opportunity to blurt out my usual bit about ‘programming’ - I don’t believe AI will be programmed in the conventional sense of “If condition A then do X, if condition Y then do Z” - it seems far more likely that we will create a machine that programs itself based on a relatively small set of fundamental ‘brain unit behaviour rules’ (or similar) - the AI, when first switched on, may be no more coherent or sentient than a newborn baby.
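Just to make that concrete, here is a rough Python sketch (purely my own toy illustration with made-up parameters, not anyone’s actual research code) of what a small set of ‘brain unit behaviour rules’ might look like: a Hebbian rule, where the connection between two units gets stronger whenever both happen to be active together. Nothing here is programmed with “if condition A then do X” logic about any particular task; whatever structure emerges comes from the rule plus the input.

    # Toy sketch of "simple local rules" learning: a Hebbian update,
    # where a link between two units is strengthened whenever both
    # units are active at the same time. All numbers are arbitrary.
    import random

    NUM_UNITS = 8
    LEARNING_RATE = 0.1

    # Start with small random connection strengths between every pair of units.
    weights = [[random.uniform(0.0, 0.1) for _ in range(NUM_UNITS)]
               for _ in range(NUM_UNITS)]

    def hebbian_step(activations):
        """Strengthen the link between any two units that fired together."""
        for i in range(NUM_UNITS):
            for j in range(NUM_UNITS):
                if i != j:
                    weights[i][j] += LEARNING_RATE * activations[i] * activations[j]

    # Feed in random activity; any structure in the weights emerges from
    # the rule itself, not from hand-written task logic.
    for _ in range(100):
        hebbian_step([random.choice([0.0, 1.0]) for _ in range(NUM_UNITS)])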
For most applications the intelligence of insects or reptiles would be more than enough.
I think it’s possible AI models will reach a state that we will be unable to distinguish from sentience within the next thirty to fifty years. As others have pointed out, we don’t know what sentience is, so we may never really be able to say whether an AI is sentient.
The idea that a computer can’t work like our brain because it is binary is completely silly, as mangetout pointed out. AI structures are not binary. There are lots of good introductory articles on the internet about neural networks, but basically neural networks are “learning” algorithms, and they are currently used in all sorts of scientific and business applications. They are a crude model of our brain’s synaptic network…basically it’s a set of nodes linked together, and the links can become “stronger” (that is, they are followed more frequently) through positive feedback (when the path I’ve randomly chosen meets a success criterion, I record that and I’m more likely to take that path next time, but it’s still random and I can try others).
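For the curious, here is a very rough Python sketch of that “strengthen the links that led to success” idea (a toy example of my own invention, not code from any real neural network library; the path names and reward value are made up).

    # Toy sketch of reinforcing successful paths via positive feedback.
    import random

    # Weights on three candidate paths; a higher weight means the path is
    # chosen more often, but the choice stays random, so others still get tried.
    path_weights = {"path_a": 1.0, "path_b": 1.0, "path_c": 1.0}

    def choose_path():
        paths = list(path_weights)
        return random.choices(paths, weights=[path_weights[p] for p in paths])[0]

    def reinforce(path, succeeded):
        """Positive feedback: a path that met the success criterion
        is followed more often next time."""
        if succeeded:
            path_weights[path] += 0.5

    # Pretend path_b is the one that tends to succeed.
    for _ in range(200):
        chosen = choose_path()
        reinforce(chosen, succeeded=(chosen == "path_b"))

    print(path_weights)  # path_b ends up with by far the largest weight

After a couple of hundred trials the “successful” path dominates, yet the choice never stops being random, so the other paths still get explored occasionally.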
Cooper is right, the hardware or even low level software shouldn’t be important.
The only problem with that is… if you can write a program for something, what are the limits on what it runs on? Any program you run will eventually be broken down into simple JMP, ADD, MOV commands on a computer at some level. And if you can break a mind down to that, couldn’t you just run it on some graph paper? Could you give some graph paper and a very, very large team its own consciousness?
I mean, if you can run the “program” on meat, and you can run it on metal, could you run it on paper?
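To illustrate the point, here is a tiny Python sketch of a made-up “machine” that only understands MOV, ADD and JMP (my own toy, not a real instruction set). Every step is simple enough that a person with graph paper and a pencil could carry it out by hand; whether doing so would produce consciousness is exactly the question.

    # A minimal toy interpreter: a program is a list of simple instructions.
    def run(program):
        registers = {"A": 0, "B": 0}
        pc = 0  # program counter
        while pc < len(program):
            op, *args = program[pc]
            if op == "MOV":            # MOV reg, value
                registers[args[0]] = args[1]
            elif op == "ADD":          # ADD reg, value
                registers[args[0]] += args[1]
            elif op == "JMP":          # JMP target (unconditional jump)
                pc = args[0]
                continue
            pc += 1
        return registers

    # Tiny program: A = 2 + 3
    print(run([("MOV", "A", 2), ("ADD", "A", 3)]))  # {'A': 5, 'B': 0}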
I’m not sure that an artificial sentient being is going to be made anytime soon. Progress in the AI field has slowed down considerably compared to the high expectations many of us had when we saw “2001: A Space Odyssey” (1968).
OTOH I’m optimistic that we may build in the near future, not sentient, but powerful problem-solving computers that may permit us a technological quantum leap. By this I mean computers that can process enormous amounts of problem-specific data and find patterns and associations in that data geared to solving specific problems. Imagine telling a computer: “Here’s the data on the materials and budget available; now, by tomorrow at 8, I’d like to see the most efficient supersonic transport vehicle you can come up with.”
Well, if we can build machines that are better than humans at chess and other things, why can’t technology advance to the point where we could call a machine our equal? I’m even sure that this will happen in the next 25 years; I mean, look at all that has been accomplished in the past 25 years.
I’ve always gotten the impression that while Japan leads in developing robotics after American companies passed on it, research at MIT and other US universities and think tanks is usually a step ahead on real AI and interface problems, basically ignoring the “robot” part and working on computer systems. The best research I’ve seen done is on analyzing and duplicating the minds of animals, performing tasks and eventually emulating what is shown to them, and recognizing human mannerisms and speech. The results are generally unimpressive when compared to a human, but they are better than they were a few years ago.
I’ve also seen some interesting projects here at Berkeley… there was a famous one a couple of years ago making a self-balancing leg. Not related to AI as we think of it, but even an AI needs to self-regulate its body if it has one. Emulating our bipedal walking is one of the larger challenges in robotics. Asimo is a good step in that direction, but it still shuffles more than anything else.
I’d doubt it. The advancements in the past 25 years have basically been in making new hardware. We are still writing basically the same computer code as we were then… the biggest developments being relational databases and 32/64-bit systems, none of which directly leads to new AI developments.
The developments we are looking at in the next two decades in computing are organic and/or quantum computing, new and smaller storage media, wireless connectivity, etc., rather than artificial intelligence. If that does happen, it will be a breakthrough by scientists in behavior research and applied maths, not computer companies. You don’t need a 100 GHz computer to do AI; you need to be able to interpret human thought. A 100 GHz processor will do it FASTER, and maybe with a better user interface, but not any BETTER.
First off, everything in the world is somewhat programmed. To make a sentient robot, it will also need to be programmed. Make the robot know it needs something, but don’t let it know why it needs it. This will have to be the first step in making real live robots.
Agreed. It’s a great read.
In it he states that by 2009 our supercomputers will come very close to the computing power of a human brain.
By 2019, normal PCs will be the equivalent of a human brain.
By 2029, a normal computer will be the equivalent of 1,000 human brains.
And finally by 2099 there will not be a clear distinction between computers and humans.
Sounds far fetched, until you read the book.