I think the answer is yes, but it will involve a lot of pain and will require much more interdisciplinary cooperation than I see now. Right now the field of AI (AI as we're thinking of it here, not Googlebot-style AI) is pretty split within itself between two goals: using a computer to imitate a human brain and provide a model for why we do certain things (generally the branch people more interested in human psychology focus on), and another branch that just tries to replicate the results, regardless of whether a human brain could actually operate that way. And there's even less crossover with outside fields that could improve our results significantly. There are certainly neuroscientists and the like participating, but the fact that even a few minutes of talking with one can provide a lot of insight tells you how much is still untapped.
Of course, if you saw the work involved in making these entities, you'd see why we're somewhat hindered: it's effectively programming every single atomic unit of rules you can think of (including what to ignore and what not to), and the side effect is that the end result is severely limited in its applicability. This hangs up everything, from single-application AIs ("play soccer" or "drive to a point in the desert") to more complex machine learning ones. The trick, I think, is finding the more fundamental rules. One of the fascinating things about human intelligence is the ability to intuitively understand rules even when they have little relation to anything you've seen before. The classic example is the old Wile E. Coyote/Road Runner cartoons. After seeing one or two you can almost always predict what's going to happen once the initial "something goes wrong" phase occurs (nothing is ever ignored: if you see it, even if Wile E. gets rid of an obstacle, you WILL see it again and things WILL get worse). Even our best efforts so far only work for a limited set of scenarios and don't come close to that adaptability. If you could figure out a way to tell a machine not how to figure out how Looney Tunes works, but how to figure out how to figure out (that wasn't a typo) how Looney Tunes works, you'd probably be only a couple of stone's throws away from the end result.
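To make the "every atomic unit of rules" point concrete, here's a minimal Python sketch. It's entirely made up, not taken from any real system; the condition and action names are placeholders for illustration only.

```python
# Toy sketch (not from any real system) of what "programming every atomic
# unit of rules" looks like: every condition and response is spelled out by
# hand, so anything the programmer didn't anticipate is effectively invisible.

HAND_CODED_RULES = [
    # (condition, action) pairs written by a human for ONE scenario
    (lambda w: w.get("ball_visible") and w.get("ball_distance", 99) < 1.0, "kick"),
    (lambda w: w.get("ball_visible"), "walk_to_ball"),
    (lambda w: not w.get("ball_visible"), "scan_field"),
]

def act(world_state):
    """Return the action of the first hand-written condition that matches."""
    for condition, action in HAND_CODED_RULES:
        if condition(world_state):
            return action
    return "do_nothing"  # everything the programmer didn't anticipate lands here

# Works for the soccer world it was written for...
print(act({"ball_visible": True, "ball_distance": 0.5}))  # -> "kick"
# ...but in a desert-driving world it still insists on scanning for a ball.
print(act({"obstacle_ahead": True}))                      # -> "scan_field"
```

The "figure out how to figure out" goal would amount to the machine writing and revising a table like this for itself, for worlds it has never seen, instead of a human writing it per scenario.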
To fix this I think we need more crossover with neuroscience and biology than we already have, and perhaps an extra focus on analog computing could also help, though I wouldn't defend that last one in a debate. I'd say that with AI research right now we're essentially fumbling around in a dark room looking for the light switch: we know what we have to do, turn on the light, we just can't find the right switch to flip. Finding it will require either a huge stroke of luck or a big breakthrough after observing an existing model (the human brain) closely enough, and I'm not sure which will hit first. It's entirely possible we could get true sapience from a typo or two (giving "it's not a bug, it's a feature!" a whole new meaning) or a wild guess that's then isolated and studied, though if pressed I'd place my bets on the breakthrough coming from the neuroscience side of the equation (though applying it and figuring out the correct language to make it so would still be a royal pain).
Also, most of the current cognitive architectures quite frankly suck, and since most research is done within an existing architecture, that can be a huge limitation. I'm working with Soar right now and it has some definite flaws. For example, if a set of instructions is always executed, Soar can "chunk" it into a rule tied to certain stimuli to simulate "learning", but the chunk only works for those specific stimuli. If you add one to a number on the input, even though the exact same process would apply, it can't tell the cases are similar and has to execute those instructions on the new data a few times before it chunks that too. So if you taught it addition with 2 & 3 it wouldn't use the chunked ruleset for 2 & 4, or even for 3 & 2 (different order); there's a toy sketch of this below. You can get around some of this by going into the source and hacking some probability theory in there, but like I said above, that's basically the duct tape and bubble-gum solution. As soon as you get into another scenario, or expose the machine to a scenario that doesn't operate on real-world rules (and as such probably doesn't have the same probability of unfolding a certain way), it breaks down. We're basically waiting on something that lets us tell the computer how to calculate its own probabilities and how to best network all of its experiences with minimal hardcoding.
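For what it's worth, here's the toy Python sketch of the over-specific chunking behaviour I mean. It is not Soar and doesn't use Soar's production syntax; the class, names, and the repetition threshold are all invented just to illustrate the symptom.

```python
# Toy model of over-specific "chunking": a learned chunk is keyed on the exact
# stimuli it was built from, so it never fires for inputs that differ
# trivially (2,4) or are merely reordered (3,2).

class ToyChunker:
    def __init__(self, repetitions_to_chunk=3):
        self.repetitions_to_chunk = repetitions_to_chunk
        self.seen = {}    # exact stimulus tuple -> times deliberately solved
        self.chunks = {}  # exact stimulus tuple -> cached result ("chunk")

    def add(self, a, b):
        key = (a, b)                   # order matters, exact values matter
        if key in self.chunks:
            return self.chunks[key], "chunk fired (no deliberation)"
        result = a + b                 # the slow, step-by-step "deliberation"
        self.seen[key] = self.seen.get(key, 0) + 1
        if self.seen[key] >= self.repetitions_to_chunk:
            self.chunks[key] = result  # generalises to... exactly this input
        return result, "deliberated step by step"

agent = ToyChunker()
for _ in range(3):
    agent.add(2, 3)         # drill 2 + 3 until it chunks
print(agent.add(2, 3))      # (5, 'chunk fired (no deliberation)')
print(agent.add(2, 4))      # (6, 'deliberated step by step')  -- no transfer
print(agent.add(3, 2))      # (5, 'deliberated step by step')  -- order breaks it too
```

The point is that the cache key is the exact stimulus, so the "learning" never transfers, which is exactly the limitation I keep bumping into.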
Of course, I say that now; as soon as said breakthrough comes to pass (if ever), we'll probably just find something else to bang our heads against the wall over. Emotion may be it, though I think if you get the rules fundamental enough, emotion will evolve naturally out of the reasoning. But that's a purely personal, pseudo-philosophical viewpoint, not necessarily an entirely reasoned one.
Oh, and as a slight postscript I should note that I do know a little more about the cognitive development side. There are certainly approaches that don't even try to derive basic rules. I've seen papers that basically record every possible unit of data about voice pitch, timing, etc. (the one I'm thinking of was from Japan, Osaka maybe, and dealt with aizuchi, the conversational filler words) and effectively code a very, very strict input/response tree that appears quite intelligent and quite able to converse, but is about as realistically smart as a premium toaster, depending on how you define "smart" at least. Granted, you may get a Turing-test-passing entity out of this method eventually, but I don't think it's a very elegant solution. I'm prepared to eat those words some day, if necessary.
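Just to show how unglamorous the input/response-tree approach is, here's a deliberately dumb Python sketch. The features, thresholds, and canned responses are all made up and bear no relation to the actual paper I mentioned.

```python
# A strict input/response lookup for backchannel (aizuchi) responses:
# measure a couple of surface features of the other speaker's utterance and
# return a canned reply. Convincing in conversation, zero understanding.

AIZUCHI_TABLE = {
    # (pause_after_utterance, rising_pitch) -> canned response
    (True,  True):  "Ee?",           # sounds like a question -> act surprised
    (True,  False): "Sou desu ne.",  # statement plus a pause -> agree politely
    (False, True):  "Hai.",          # still talking, pitch up -> brief acknowledgement
    (False, False): "Un.",           # still talking, flat -> minimal filler
}

def respond(pause_after_utterance: bool, rising_pitch: bool) -> str:
    """Pure table lookup; the 'intelligence' is entirely in the hand-built table."""
    return AIZUCHI_TABLE[(pause_after_utterance, rising_pitch)]

print(respond(True, False))  # "Sou desu ne." -- looks attentive, knows nothing
```

A real system of this kind has a vastly bigger table built from recorded data, but the principle is the same: it's the premium toaster I was talking about.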
Disclaimer: 1 AM, barely proofread post.