This post about granting sentient AI programs rights led me to think.
What is the feasibility of creating a computer capable of imitating the functions of human thought? How difficult would it be? What processor speed would it take to match our brains? How much RAM? How much storage? Assume that the entire world got together on this, and there was an unlimited budget. Could we create a computer like HAL from 2001: A Space Odyssey?
In another thread about this, somebody very cleverly pointed out that nature (or God, if you prefer) has already solved this problem at least once, so it can’t be actually impossible.
But to try to answer your question(s) a bit: the brain isn’t like a computer, and all the processing power in the world isn’t going to help unless you have a way of using it properly. As for simulating intelligence (in the strictly programmed sense), we’ll probably see some working releases in the next ten years or so, although it seems likely that they would (appear to) be intelligent only within a limited scope (for example, a vending machine with which you can discuss the ingredients and flavours of the food items on sale, but with which a discussion about gardening would not be on the menu).
Whether or not we will ever create ‘truly’ intelligent machines is (literally) unknowable; we won’t even know for sure when we have done it, but I believe it’s possible.
Some of my comp. geek buddies suggested that in order to develop true intelligence, you have to have some form of sensory input (i.e. sight, smell, touch, hearing). I assume that for a computer to develop AI, we’d have to figure out how to simulate one or more of these. Opinions?
There is a thing called the Turing test, where basically you sit down at a keyboard and monitor and try to guess whether the party on the other end is really a person or a computer pretending to be one. So far no computer has ever passed itself off as a human, but the results are pretty interesting.
You might also want to look up “chatterbots.” There used to be one called Julia that you could telnet to and “talk” (type) to. You could fool your friends for a couple of minutes into thinking they were talking to a real person, but they would eventually figure it out. Great for laughs.
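For the curious, the trick behind bots like Julia is mostly pattern matching on the user’s input. Here’s a minimal ELIZA-style sketch in Python — the rules and wording are invented for illustration, not Julia’s actual rule set:

```python
import re

# Invented, illustrative rules: (regex to match, reply template).
# The final catch-all rule guarantees the bot always says *something*.
RULES = [
    (r"\bi am (.*)", "Why do you say you are {0}?"),
    (r"\bi feel (.*)", "What makes you feel {0}?"),
    (r"\bbecause (.*)", "Is that really the reason?"),
    (r".*", "Tell me more."),
]

def respond(message):
    """Return a canned reply based on the first matching pattern."""
    text = message.lower()
    for pattern, template in RULES:
        match = re.search(pattern, text)
        if match:
            return template.format(*match.groups())

print(respond("I am tired of typing"))
# → Why do you say you are tired of typing?
```

A handful of rules like these is enough to keep a conversation superficially plausible for a minute or two, which is exactly why people are fooled briefly and then catch on.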
True intelligence is a different matter entirely. We need to get to fake intelligence first, then we’ll work on the real intelligence. Can’t walk before you crawl.
There is a ton of info on AI on the internet. Have some fun with Google.
A good place to start reading about this is Hans Moravec’s Mind Children. He’s a robotics researcher from CMU who has addressed a lot of questions in the OP.
Sight is already handled by cameras and sound by microphones. There are machines that can “smell”, such as those that can detect if a deadly chemical is in the air. Touch is already available in touchscreen monitors (as a consumer example). Teaching/programming an AI on how to respond to the nearly infinite range of stimuli it will encounter through each of those senses will be a daunting task indeed.
I work for a company that is actually involved in commercializing AI applications. We have software that provides “human-like interaction via the 'Net.” This is what’s called “weak AI” – which is a kind way of saying software that simulates intelligence (a weak AI computer could still pass the Turing test, which is why it’s discounted by a significant percentage of AI researchers). “Strong AI” is software that is truly aware, and actually meets some objective definition of intelligence.
The problem with any discussion of the feasibility of AI is that you have to have a meaningful definition, first. Some of the people who work for me (AI and cognitive science folk, mostly) have to be kept separated when these philosophical discussions arise, because of the strength of their [differing] convictions.
Much as with economists, finding two AI researchers who agree entirely with each other is extraordinarily rare.
It depends on how wide a scope you mean. Deep Blue is a very specialised AI computer that does nothing other than play chess.
One problem with simulating human intelligence on normal computers is that the human brain is massively parallel, with zillions of neurons and tens of zillions of synapses.
So one either builds a machine that is architecturally similar to the human brain, or finds a way to do the same thing on conventional von Neumann machines.
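To illustrate the second option: a serial machine can fake one synchronous “parallel” update of all neurons by computing every next state from the current states before swapping them in. Here’s a toy Python sketch — the weights and threshold are made up, and a real brain is vastly messier than this McCulloch–Pitts caricature:

```python
# Emulating one parallel step of a neuron network on a serial machine:
# every neuron's next state is computed from the *current* states, then
# all states are replaced at once, as if the neurons fired simultaneously.

def step(states, weights, threshold=1.0):
    """One synchronous update of all neurons (McCulloch-Pitts style)."""
    next_states = []
    for incoming in weights:            # incoming[j]: synapse neuron j -> this neuron
        total = sum(w * s for w, s in zip(incoming, states))
        next_states.append(1 if total >= threshold else 0)
    return next_states                  # swap in the new state all at once

# Invented 3-neuron example.
states = [1, 0, 1]
weights = [[0.0, 0.6, 0.6],   # neuron 0 listens to neurons 1 and 2
           [1.2, 0.0, 0.0],   # neuron 1 listens to neuron 0
           [0.0, 1.2, 0.0]]   # neuron 2 listens to neuron 1
print(step(states, weights))
# → [0, 1, 0]
```

The catch, of course, is that the serial loop pays for the parallelism in time: a brain does its zillions of synapse updates in one “tick,” while the von Neumann machine grinds through them one by one.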
No-one in the AI field that I know of considers Deep Blue an application of AI. It certainly uses a lot of algorithms to cut off “dead end” routes as quickly as possible (reducing the depth to which it analyzes strategies that aren’t going to be useful), which helps its otherwise brute-force computational approach be competitive with human opponents.
One of the managers in the Intelligent Systems Division at NASA Ames (who was also on the board of a couple AI companies) used to say: “If it’s already been done, it’s not AI.” Of course, he had a strong bias towards research.
That’s a long way from the original use of the term. In the beginning, AI courses taught chess, checkers, and other games as outstanding examples of AI, as I dimly recall. I note that the emphasis had switched away from them by 1990, though, in the books I still have available.
You might want to read Douglas Hofstadter’s Gödel, Escher, Bach. It’s long, it’s not light reading, and it’s from 1979, but you’ll come away with a truly excellent understanding of the basics of AI, as well as the history of the subject, among a gajillion other subjects discussed in the book.
You could also read Douglas Adams’s The Hitchhiker’s Guide to the Galaxy, in which the character of Marvin shows what happens when AI becomes depressed and cranky, and Eddie shows what happens when AI becomes too cheerful for its own good.