I am a CS major and have encountered various texts on the idea of Turing’s “child machines” and artificial intelligence. My question is: if someone were to state that they had created such a device, one capable of (or at least convincingly appearing capable of) independent thought, would you believe that it could exist?
I ask because I am fascinated by the subject and am working toward a greater understanding of its practical side. Within a few years of graduation I hope to join one of the projects in this field or start one of my own.
No need to believe people at their word. We’ve got tests for this. I’d be convinced if the machine repeatedly succeeded at a general Turing test, and even if it couldn’t pass one (for whatever reason), it shouldn’t be THAT hard to make a believable argument or test for its intelligence.
Sure, it’s absolutely possible. Our brains themselves are nothing but natural electrochemical Turing machines. That said, no, I wouldn’t believe without strong evidence that someone today had created such a machine; we’re still orders of magnitude of complexity away from even the stupidest vertebrate brain, let alone one equivalent to ours.
Of course, you also have to set a standard for what counts as “artificial intelligence”. We don’t yet have a machine that can consistently pass a Turing test, but we do have machines, for instance, that can deliver detailed information on a broad selection of topics, on demand. Is Google intelligent? In some ways, it’s a heck of a lot smarter than you or me.
Personally, I suspect that creating a humanistic AI will come less from programming some particularly novel data structure or set of algorithms than from nurturing something simple but huge, like a network of neural nodes, slowly over a few decades. So while it will be interesting to create programmatic toys for the proto-AI to use, essentially you’re just brute-forcing the solution through an evolution simulation, maybe even one including rival AIs.
And again, while still cool, most “AI” research is going to be pretty irrelevant to this accomplishment. They’re working on adaptive algorithms that don’t take up all the available space, but those most likely have a ceiling on how far they can get regardless of how much memory they’re given. They’ll only ever be able to adapt to a single task.
I wouldn’t believe someone if they said they made a Turing-test-passing AI today.
First, I’d need to see the AI that can pass the insect test, then the cat test, then the dog test, then the chimpanzee test, then the Turing test. I don’t think a human-like AI is going to take us by surprise; we’ll all get to watch it develop.
Right now, the truly difficult questions in the field are in the province of biology rather than computer science. We know a great deal on the silicon side, and merely infinitesimal amounts on what we need to know on the carbon side.
We don’t even have beginning working definitions of intelligence, let alone of how the brain functions.
If you want to get into the field, I’d suggest expanding your scope of study in graduate school to begin examining the biological side of the problem. That will give you a big leg up in the future.
I don’t think you’re using the term “Turing machine” correctly.
A Turing Machine is an abstract, theoretical computer design that can be shown to be equivalent to any other computer design (in terms of what calculations it can perform). This does not mean that all computers are Turing machines, and certainly not that the human brain is a Turing machine. For one thing, the hypothetical Turing machine has unlimited memory.
Specifically, a Turing machine has a linear memory consisting of symbols, a state register, a table of state transitions, etc. Unless you’re arguing that there is a biochemical equivalent of these mechanisms in the human brain?
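The components listed above (linear tape of symbols, state register, transition table) can be sketched in a few lines of code. This is a minimal illustrative sketch, not anyone’s library; the function name `run_tm` and the little “binary increment” program are my own inventions for the example.

```python
# Minimal Turing machine sketch: a tape of symbols, a head position,
# a state register, and a table of state transitions.
def run_tm(table, tape, state, head=0, blank="_"):
    """Step the machine until it enters the 'halt' state; return the tape."""
    tape = dict(enumerate(tape))  # linear tape as position -> symbol
    while state != "halt":
        symbol = tape.get(head, blank)           # read the cell under the head
        write, move, state = table[(state, symbol)]  # transition table lookup
        tape[head] = write                       # write the new symbol
        head += {"L": -1, "R": 1, "N": 0}[move]  # move the head
    lo, hi = min(tape), max(tape)
    return "".join(tape.get(i, blank) for i in range(lo, hi + 1))

# Example program: increment a binary number, with the head starting
# on the rightmost digit. Each entry maps (state, symbol-read) to
# (symbol-to-write, head-move, next-state).
inc = {
    ("inc", "1"): ("0", "L", "inc"),   # carry: turn 1 into 0, keep moving left
    ("inc", "0"): ("1", "N", "halt"),  # absorb the carry and stop
    ("inc", "_"): ("1", "N", "halt"),  # ran off the left edge: new leading 1
}

print(run_tm(inc, "1011", "inc", head=3))  # 1011 + 1 = 1100
```

The tape here is a Python dict rather than a truly unbounded strip, which is the usual practical compromise: a real Turing machine’s tape is unlimited, and that unlimited memory is exactly why no physical computer (or brain) is literally one.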
I think you are trying to say that the brain is merely a biochemical computer, and that it does not perform any operations beyond those that an (electro)mechanical computer can perform. While it is possible that the brain makes use of quantum effects, I would be inclined to agree with this opinion.
On the topic of Turing, the Turing test is actually a pretty poor way to measure real intelligence. One can imagine a computer that is programmed to B.S. its way out of any questions regarding learning, abstraction, and that sort of thing.
I don’t think so. The Turing test assumes that you’re specifically trying to prove the machine can’t perform human tasks, not just having a casual conversation with it. A B.S. machine wouldn’t be able to learn a programming language and code something up to specification, for instance.
That’s a little too strong. Penrose, for example, has certainly claimed (The Emperor’s New Mind, etc.) that the brain is stronger than a Turing machine, and even has a suggestion about how that might be possible. I don’t agree with him (and his quantum-gravity idea is :dubious:), but there are some pretty well-known people who don’t accept the strong-AI or weak-AI positions.
The problem with trying to draw analogies between organic brains and AI systems is that they are only superficially similar: both take input, process it, and provide appropriate responses. Organic brains are massively parallel, performing many disparate functions simultaneously. A human does not consider walking to be an intensive operation, and yet it took a long time for processing and feedback systems to be able to replicate that process. But no one would ever suggest that a human could perform mathematical operations at even the tiniest fraction of the speed that a home computer is capable of.
Saying that a modern AI system does not compare to the stupidest vertebrate brain is certainly true. From a certain perspective. Nearly ten thousand people a day in Japan talk over a phone to a system that my company wrote to ask questions about their bank. It understands a broad range of speech, and responds appropriately even when it doesn’t have the answer. We’ve read bloggers theorizing that the system cheats by having human operators provide the responses interactively (which would sort of undermine the whole reason for having the software-based system!).

Does that pass the Turing test*? No, not even remotely. It’s not too difficult to trip the system up if you’re trying, and you notice very quickly if you stray too far outside of the knowledge domain that has been created for the agent. Even our weak AI system is a long way from the standards of being true AI by any definition that most people would use.
Not to be confused with a Turing machine, which is described by Absolute above.
Well, they did get pretty close this year: the requirement is that 30% of the investigators interrogating the machine have to be fooled into thinking it was in fact a human being, and 25% were. That’s not insignificant.
Regarding the OP, I essentially share QED’s views on the brain, and so I don’t see any insurmountable obstacles on the path to true AI.