In this thread regarding the current state of artificial intelligence, a comparison was made between computers, which are fast and sequential, and brains, which are slow and parallel. I put forth the argument that, although sequential processing may not be sufficient to attain AI, I don’t think that parallelism is necessary. My argument is that some amount of time elapses during the parallel processes that comprise brain function. So long as a sequential machine performs those same processes within that time frame, its results will be indistinguishable from those produced in parallel.
Is there an established argument for why parallelism must be the case in principle? (Note that I’m not asking about quantum effects, a la Penrose or even the viability of AI a la Searle or Fodor. Just the parallel vs. sequential argument.)
If AI is possible on a digital computer of any type, then all algorithms and heuristics used to perform the functions of a mind must be Turing computable. Anything computable using many tapes in parallel is also computable using a single tape. (I believe this was proved in one of my CS courses; I can look it up.) Thus, if an AI program can run in a parallel computing environment, it can also run in a sequential computing environment, albeit more slowly.
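To make that concrete, here’s a minimal Python sketch. It isn’t the textbook multi-tape-to-single-tape proof, just the flavor of it: two “parallel” processes, written as generators, are advanced one step at a time by a single sequential loop. The toy counter processes are invented for illustration.

```python
# Round-robin interleaving: one sequential loop simulates two
# "parallel" processes. The process bodies are toy examples; the
# point is that a single thread of control can step each one in
# turn and reach the same final state a truly parallel run would.

def counter(name, limit):
    """A toy process that counts; yields control after each step."""
    total = 0
    for i in range(limit):
        total += i
        yield  # hand control back to the scheduler
    print(f"{name} finished with total {total}")

def run_sequentially(*processes):
    """Step each process in turn until all are exhausted."""
    live = list(processes)
    while live:
        for proc in list(live):
            try:
                next(proc)  # advance one "parallel" step
            except StopIteration:
                live.remove(proc)

run_sequentially(counter("A", 5), counter("B", 3))
```

It runs slower than a genuinely concurrent version would, but the final states are identical, which is the whole point of the single-tape argument.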
If AI is not Turing computable, then everyone is barking up the wrong tree.
You do have a point, though: AI may be possible in principle on a sequential machine. That does not mean that it’s practical to pursue that course.
What I should’ve said in the other thread is that we don’t know that parallelism is not necessary for usable AI. But all of the intelligence we’ve seen so far has been massively parallel, so there’s reason to believe that it’s the way to go.
What aspect of AI are we discussing here? The ability to have an “experience like us”, or merely the ability to “have a conversation like us”?
The first is arguably impossible, given the vast and utterly mysterious influence of chemical emotion on responding to sensory input and on “encoding” memories. The computer might be able to sort its sensory inputs into different levels of memory and find “useful” cross-references associated with them in order to develop some kind of “meaning”, thus achieving some kind of “consciousness”, but whether this would be anything like what an adult human (or an animal, or even an insect) experiences is highly doubtful.
As for passing the imitation test, well, each sentence will have to undergo massive processing and analysis before a convincing response can be composed. If your test partner takes more than a few seconds to compose each reply, the game will be up.
In this way, I’d suggest that serial comparison and analysis of each word would require a speed far in advance of what we have now. Parallelism is, in my view, the only way to achieve the vast processing a linguistic sentence requires in order for a convincing response to be composed in a time short enough that it could have come from another human.
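To put rough numbers on that timing argument, here’s a back-of-envelope sketch. Every constant in it (words per sentence, comparisons per word, operations per comparison, machine speed) is a made-up placeholder, not a measured figure:

```python
# Back-of-envelope timing for a purely serial responder.
# All constants below are hypothetical placeholders for illustration.

words_per_sentence = 20
comparisons_per_word = 1_000_000   # lexical/semantic lookups, guessed
ops_per_comparison = 1_000         # guessed cost of one comparison
serial_ops_per_second = 1e9        # a notional 1 GHz serial machine

total_ops = words_per_sentence * comparisons_per_word * ops_per_comparison
seconds = total_ops / serial_ops_per_second
print(f"Serial reply time: {seconds:.1f} s")  # 20.0 s: the game is up

# Spread the same work over N independent units and it fits the budget.
n_parallel_units = 16
print(f"With {n_parallel_units} units: {seconds / n_parallel_units:.2f} s")
```

Change the guesses and the conclusion moves, of course; the sketch only shows the shape of the budget argument, not its verdict.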
Well, my question was simply directed at the claim that sequential processing isn’t adequate to achieve AI. No particular aspect at all, really.
I assume you’re talking about qualia. That, or the “what is it like to be a bat?” question (Nagel). As to the former being impossible, the debate rages on. As to the latter, the more I observe humans, the less I’m convinced the phrase “human experience” has any real (that is, universally applicable) meaning. Obviously, there is a difference between human and insect. It seems to me that, as poor as the explanatory power of functionalism is, it may be the only way to actually make a judgment.
First, I believe there’s research showing that humans parse sentences item by item as words are “exposed”. (This comes from eye-tracking, where a subject’s focus of attention is used to gauge what they’re processing.) That’s much different from the way standard language processing on a computer is done, which creates every conceivable parse tree, assigns “meaning” to each, then chooses appropriately. However, I know there’s research being done to do the parsing in a different fashion; it’s just not very far along yet. And I’d make the point that there’s no reason in principle that sequential processing couldn’t do the job. Perhaps the fact of the matter is that parallelism will prove to be practically better, but I don’t think it’s necessary (in the philosophic sense).
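Here’s a toy contrast of the two strategies, with the caveat that the “analyses” are bare bracketings invented for illustration and nothing like a real parser:

```python
# Toy contrast: exhaustive parsing vs. incremental word-by-word parsing.
# The "analyses" here are just binary bracketings; real parsers are far richer.

def all_bracketings(words):
    """Exhaustive strategy: enumerate every binary bracketing first,
    then choose. Grows explosively with length (Catalan numbers)."""
    if len(words) == 1:
        return [words[0]]
    trees = []
    for split in range(1, len(words)):
        for left in all_bracketings(words[:split]):
            for right in all_bracketings(words[split:]):
                trees.append((left, right))
    return trees

def incremental_parse(words):
    """Incremental strategy: keep one partial analysis and extend it
    as each word is "exposed", roughly as the eye-tracking work
    suggests people do. Here we simply left-attach each new word."""
    tree = words[0]
    for word in words[1:]:
        tree = (tree, word)  # commit immediately, no global search
    return tree

sentence = "the dog saw the man".split()
print(len(all_bracketings(sentence)), "exhaustive analyses")  # 14
print(incremental_parse(sentence))
```

Both strategies run happily on a sequential machine; the incremental one just commits as it goes instead of searching a space that blows up combinatorially, which is the “no reason in principle” point.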
I suppose I was speaking of qualia in a roundabout way, and indeed of paradoxes similar to the one you mention. My point was (in agreement with your overall view) that all such paradoxes arbitrarily remove some element vital to explaining the experience/illusion of consciousness and say “Ha! Explain it now, clever clogs!”. The chemical moderation I referred to which evolved over billions of years might always be missing from AI, but that should not be used as ammunition to claim that we don’t understand consciousness. (After all, we understand and can explain the weather, but I can’t engineer a tornado in the lab!)
Qualia is such an odd point…I’m torn whenever I consider it. I mean, it’s obvious to me that I experience something that I think can be referred to as qualia. On the other hand, it almost seems like a fabrication that allows opponents of AI to say “Ha - how do you explain this?!”
Personally, if I had to choose, I think qualia is a red herring; just a reified description of something that comes along with sentience. But, I have to admit, there is that niggling feeling that there’s something to it…
Sure. Most CPUs today (well, all except perhaps cheap microcontrollers) implement some sort of parallelism, and the wave of the future is multi-core CPUs. One with 16 processors has already taped out. It will be far easier to implement an AI making use of this: one processor to do vision, one to do speech processing, one to do speech understanding, one for memory control, one for reasoning, one to do meta-reasoning, etc., etc. But being able to write the code to do this is the hard part.
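A minimal sketch of that one-faculty-per-processor layout, using Python’s multiprocessing. The faculty names and their stub bodies are placeholders; real vision or reasoning code would replace the print calls:

```python
# Minimal sketch of "one processor per faculty": each AI module runs as
# its own OS process, which the scheduler can place on its own core.
# The modules here are stubs; the faculty names are illustrative only.

from multiprocessing import Process, Queue

def module(name, inbox):
    """Placeholder faculty: consume messages until told to stop."""
    while True:
        msg = inbox.get()
        if msg is None:
            break
        print(f"[{name}] handling: {msg}")

if __name__ == "__main__":
    faculties = ["vision", "speech", "reasoning", "memory"]
    queues = {name: Queue() for name in faculties}
    procs = [Process(target=module, args=(name, queues[name]))
             for name in faculties]
    for p in procs:
        p.start()
    queues["vision"].put("frame 42")
    queues["reasoning"].put("plan next utterance")
    for q in queues.values():
        q.put(None)  # shutdown signal
    for p in procs:
        p.join()
```

The OS maps each process onto its own core, so the layout fits a multi-core chip naturally; the hard part, as you say, is what goes inside each stub.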
This is the approach used for a project I was involved with: two laptops and a mobile robot base that had its own internal computer. Vision processing and speech recognition ran on one laptop, speech production and some higher-level “cognitive” stuff on the other, and sensory/actuator functions plus some other “cognitive” stuff on the robot itself. All the pieces/parts worked fine alone (simulating input as if from the other parts), but when putting it all together, there were…issues.
Oh, yeah. The project I just mentioned doesn’t work at the level of parallelism that Voyager was talking about, much less what I was getting at with my original question. I felt the need to acknowledge the distinction. (Gotta love abstraction barriers. Whether or not they’ll ultimately serve to retard progress is another matter…)
Not that I’m in any way affiliated with it, but the GRACE project might be of interest to those reading this thread.
Thank you. *blush* After lurking for about a year after the subscriptions were instituted, I decided a new username was in order to go along with my paid subscription.
Furthermore, I’ve decided that, if asked about its origin, I’m going to say (possibly TMI, but it fits right in with your Tom Green reference):
Oh, it’s just something I tossed off while thinking of masturbation.