What's closer, BCI or AI?

There have been many recent advances in the fields of Artificial Intelligence (AI) and Brain-Computer Interfaces (BCI). Assuming that many of you are acquainted with some of the complexities that challenge these technologies, which do you think we’re closer to accomplishing, BCI or AI?

I think it’s BCI. The battle is nearly half won with BCI. Systems that use invasive and non-invasive EEG to control GUIs on a standard computer (uploading) are available… sort of. They’re still in a kind of “beta” phase and definitely need some refinement, but they work. The hard part is getting information from the computer to the brain (downloading), but to the advantage of BCI advancement, we can take baby steps to get there. Currently, the only way to “download” is via sight or sound, both of which are pretty good options to use while a more direct method is refined. Meanwhile, the technology as a whole can enjoy financing through consumerism, while providing users with some cool GUI options.

What do you all think?

I don’t know much about the technical details, but it appears that scientists at present know far more about how the brain sends/receives data than about how it is possible for a collection of neurons to form an intelligent, self-aware mind. So I would say BCI is a lot closer than AI. How can we make an artificial intelligence until we really understand natural intelligence? Also, as you point out, very limited BCI already has been achieved.

BTW, you might want to review this thread: http://boards.straightdope.com/sdmb/showthread.php?t=325445

BTW, are you talking about “strong AI” (aka “general artificial intelligence”) or “weak AI”? See http://en.wikipedia.org/wiki/Strong_AI, http://en.wikipedia.org/wiki/Artificial_intelligence.

I’d go with BCI also. There don’t seem to have been many fundamental advances in AI since I took it over 30 years ago that aren’t explainable by slightly better algorithms and way better hardware.

My personal bet: the first machine intelligence won’t be programmed, but will run on a very good simulation of the brain and associated sensory and nervous systems. That should be feasible in not that many more generations (and doesn’t have to run in real time) and doesn’t require the sort of new understanding that real AI would.
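
For a sense of the primitive such a simulation would be built from, here’s a toy leaky integrate-and-fire neuron in Python (a standard textbook model; all constants are illustrative, not biologically calibrated):

```python
# A toy leaky integrate-and-fire neuron -- the kind of primitive a
# large-scale brain simulation would run billions of copies of.
# All constants here are illustrative, not biologically calibrated.

def simulate_lif(input_current, dt=0.001, tau=0.02,
                 v_rest=-0.070, v_thresh=-0.050, v_reset=-0.070):
    """Return spike times (seconds) for a sequence of input drive values."""
    v = v_rest
    spikes = []
    for step, i_in in enumerate(input_current):
        # Membrane potential decays toward rest and integrates input.
        v += (-(v - v_rest) + i_in) * (dt / tau)
        if v >= v_thresh:          # threshold crossed: fire and reset
            spikes.append(step * dt)
            v = v_reset
    return spikes

# 100 ms of constant drive produces a regular spike train.
print(simulate_lif([0.025] * 100))
```

The point of the example is scale, not sophistication: one neuron is a few lines; a brain is on the order of a hundred billion of these, plus the wiring, which is why I say it’s a hardware-generations problem rather than a new-understanding problem.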

Generations of computer technology, or generations of humans?

BCI seems to be much more limited in what is required for it to “exist,” so I would go with that.

Strong AI will not happen anytime soon.

I’m probably uniquely qualified here, since I’m currently sitting three cubicles away from a group of people working on some state-of-the-art AI, while the guy sitting behind me is working on state-of-the-art BCI.

Anyway, it depends vastly on where you define the end point; both BCI and AI are huge fields, and progress in both of them is incremental.

BCI:

We’re making great progress right now in the basic sensor technology, and a lot of the work is focused on signal processing. Right now, it looks as if we need a lot more than the hundred or so sensors that current systems use. If we can figure out a way to build 10,000–100,000-sensor interfaces, then we might be able to do more; nobody knows until we do it.

On the software side, we’re doing a lot more to interpret the sensors as well, trying to use complex signal processing to get better accuracy.

But virtually no work has been done on the interpretation side; right now, we’re really picking the low-hanging fruit. It’s fairly easy to figure out how to interpret, say, arm movements, since there is usually a fairly decent correlation between brain signals and action (a toy illustration of that kind of decoding follows below). But if you want to do more sophisticated stuff, like figure out if a person is cold or build a brain-to-speech interface, then we face significant difficulties at the interpretation level.
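
To make the “low-hanging fruit” concrete, here’s a minimal sketch of that kind of decoding: fitting a linear map from per-channel signal features to movement velocity. The data here is synthetic, standing in for real recordings; this is not anyone’s actual pipeline, just the simplest thing that exploits the correlation:

```python
# A minimal sketch of correlation-based decoding: fit a linear map from
# per-channel signal features (e.g. band power) to arm velocity.
# Synthetic data stands in for real recordings.
import numpy as np

rng = np.random.default_rng(0)

n_samples, n_channels = 1000, 100          # ~100 sensors, as noted above
true_weights = rng.normal(size=(n_channels, 2))

features = rng.normal(size=(n_samples, n_channels))   # stand-in band powers
velocity = features @ true_weights + 0.5 * rng.normal(size=(n_samples, 2))

# Ordinary least squares: the simplest decoder that exploits the
# correlation between signal features and movement.
weights, *_ = np.linalg.lstsq(features, velocity, rcond=None)

predicted = features @ weights
corr = np.corrcoef(predicted[:, 0], velocity[:, 0])[0, 1]
print(f"correlation on x-velocity: {corr:.2f}")
```

On synthetic data like this the correlation comes out high by construction; the hard part in practice is that real signals are noisy, nonstationary, and (as I said) heavily pre-processed before we ever see them.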

I predict that if you’re after a device that can move a mouse cursor when you think about it, and that can write text if you manually spell out each letter, then it’s going to be 7 years for a prototype and 15 years for widespread adoption. If you want something that lets you effortlessly interact with a computer and mentally dictate entire essays, then you’re looking at 50 years to never.

AI:

Right now, apart from possibly the Cyc project, I don’t even know if anyone is seriously contemplating “strong” AI. Much of the field revolves around small, specific and practical problems which have direct applications. My research at the moment, for example, involves figuring out how to find hand shapes in a video. In a sense, we’ve picked over all the easy problems in AI and they’re so common now that most people wouldn’t think they were AI. Progress is incremental (although amazingly fast) and not likely to lead to thinking machines in the near future. There are a couple of research topics worth keeping an eye on as they probably have the most direct relevance to “strong AI” but we’re still working at a very primitive level there.

My predictions:

Expect to see a robot that can fully navigate a semi-controlled office space in 10 years’ time.
Expect to see “good enough to use” speech recognition in 5–15 years’ time.
Expect to see a robot soccer team beat the World Cup champions in 2049.
Expect strong AI in 80 years to never.

Oh, and my personal thoughts about BCI is that it’s both practical and amazing. Unfortunately, the amazing parts aren’t practical and the practical parts aren’t really amazing.

It is my opinion that if strong AI ever exists, it is probably centuries away (but I’m not comfortable writing that! It sounds too much like too many ridiculous anti-predictions of the past). But I tend to doubt it will ever exist, only because I can’t really see its usefulness, and thus I don’t think anyone will bother to build one. Why not? Because I tend to believe that any manufactured human-level intelligence will unavoidably be nearly as error-prone as humans are. And a fast but error-prone device simply is not very useful, in my opinion.

Don’t get me wrong: I strongly believe that devices along that path will be built someday, but merely as special (and specific, thus incomplete) simulators to help us understand aspects of our own brains and consciousness.

As for BCI, did you see this month’s issue of Scientific American? It featured an article titled The Forgotten Era of Brain-Control Chips, focusing on the ideas and career of Jose Delgado, the pioneering but controversial neurologist who inserted so-called (duh) brain-control chips into both animal and human subjects in the ’70s. So BCI already exists (albeit in a crude fashion).

Machine. Are there any others? :slight_smile: Specifically, I mean process technology.
I did work on distributed simulation years ago, when we had some very early Ethernet (or AT&T’s version of Ethernet) networks. As we get to hundreds of processors on a die, which is inevitable, we will be able to build very big and cheap simulation engines. I think that is a much simpler problem than AI.

Shalmanese, what do you consider “good enough to use” speech recognition? We’ve got it in limited domains already. Do you mean total semantic understanding? That’s a big order, as you’ll know if you’ve ever tried to dictate technical information to someone without a technical background.

I agree that we won’t ever be able to dictate a paper mentally. Think of the editing we do between our brains and our fingers when we type. Something that magically mapped my thoughts to words would produce an awful jumble, even if the fidelity were perfect. Still, is the gap in BCI technical (not having enough signal processing power and enough sensors) or theoretical (not being able to interpret the signals if you could capture them)? I think the gap in AI is theoretical, which is why I doubt we’ll see it any time soon.

I expect someone will do it because they can. Others will do it because they want slaves; most likely, an A.I. won’t be considered human or even animal, and ( at least at first ) will have no rights whatsoever.

A few uses for A.I., both moral and immoral:
Inhabiting/controlling android bodies, as sex slaves.
Long-range/interstellar probes.
Faster-than-human researchers.
War robots.
Torture victims with no legal protections, for sadists.
Top subordinates for a dictator, with loyalty built in.
Settlers/developers for hostile environments.
Successors for humans, if we die off.
Overseers for artificial-womb-based interstellar colonization. Don’t haul heavy, short-lived humans along; send an A.I. probe with genetic data and the equipment to produce and raise people upon landing.
Superhumanly intelligent A.I., to think what we can’t, or to replace us.

In the field, “good enough” is around 99.5% accuracy with a vocabulary of over 10,000 words in an environment which is not excessively noisy (i.e., an average office environment).
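
To put that figure in perspective, a quick back-of-the-envelope calculation (the page and sentence lengths here are just illustrative):

```python
# What 99.5% per-word accuracy means in practice (illustrative arithmetic).
accuracy = 0.995
words_per_page = 500                  # a rough single-spaced page

expected_errors = (1 - accuracy) * words_per_page
error_free_sentence = accuracy ** 20  # chance a 20-word sentence is clean

print(f"~{expected_errors:.1f} errors per {words_per_page}-word page")
print(f"{error_free_sentence:.0%} of 20-word sentences come out error-free")
```

So even at “good enough,” you’re still proofreading a couple of errors on every page.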

No, I disagree. Right now, all the problems with BCI are technical because it’s still a very young field. However, we know enough about psychology already to know that it’s not going to be easy at all. Human brain signals are already so pre-processed that what we get out at the end is going to require significant ingenuity to process. Right now, the stuff that’s promised “within the next 10 years” isn’t really all that exciting in terms of new interfaces. We could essentially do all of that stuff now with the appropriate sensor technology.

Dammit, where’d my detailed response to Der Trihs go?? I know I saw it in the thread after posting!

In 2001, a British paraplegic named Cathal O’Philbin, through a skullcap connected to a personal computer, was able, through thought alone, to make the words “Arsenal Football Club” appear on the screen. http://www.newamerica.net/index.cfm?pg=article&DocID=432 Is the distance between three words and a whole paper really that great?

Guh, typical media glurge.

Digging in a bit more, the experiment was done by the JRCEC, and the actual “Arsenal football club” demo was done by a person manually spelling out the words, letter by letter, on a virtual keyboard that used partitioning. The average time spent was 22 seconds per letter and, furthermore, the technology behind it is not that impressive. Basically, all the BCI had to distinguish between was two signals. The computer essentially asks, “Is this letter on the left side of the keyboard or the right?” Once a choice is made, the process repeats on the half selected.
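
To show how little the system actually has to decode, here’s a sketch of the halving protocol. The simulated “brain” below just answers left/right; a real system would read that one-bit choice from the user’s signal instead:

```python
# A sketch of the binary-partition speller described above: the BCI only
# has to distinguish two signals ("left half" vs "right half"), and each
# letter is reached by repeated halving of the alphabet.
import math
import string

def select_letter(target, alphabet=string.ascii_uppercase):
    """Simulate the halving protocol; a real system would read each
    left/right choice from the user's brain signal instead."""
    letters = list(alphabet)
    choices = 0
    while len(letters) > 1:
        half = len(letters) // 2
        left, right = letters[:half], letters[half:]
        letters = left if target in left else right   # the "brain" answers
        choices += 1
    return letters[0], choices

letter, n = select_letter("R")
print(f"selected {letter} in {n} binary choices")       # 5 for a 26-letter set
print(f"log2(26) = {math.log2(26):.1f} choices minimum per letter")
```

Since log2(26) ≈ 4.7, about five binary choices per letter is the floor; at 22 seconds per letter, the subject was spending roughly 4–5 seconds conveying each single bit.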

So really, there is no semantic understanding of any brain signals, rather, all the application is doing is assigning meaning to previously existing signals and asking the user to train to the system. This is pretty much akin to researchers in the 50’s seeing that they had a computer that could read in text from punchcards and print out text, therefore, a system that could read in the Library of Congress and understand it must be very close.

I am not an expert, but it seems to me that the truly amazing or useful part of BCI would be being able to store and retrieve information without, say, typing on a keyboard or reading it off a screen. To do this, we would have to *really* understand how the brain stores data, on the level of its hardware. Once we understand this, it seems that we would be pretty much at the level where we could generate a strong AI, because the hardware running our strong AI in our brains seems to be the same hardware doing the storing and fetching.

Again, I am not an expert, and I may be oversimplifying both problems too much, but it seems to me that we will not have a useful one of either without the ability to make the other.

Good BCI would likely produce strong AI, but not necessarily the other way around. For example, one suggested method for creating an AI is to evolve one, using a digital version of evolution (a toy sketch of the idea follows below). If that is done successfully, it won’t help us with BCI. For that matter, we would likely end up with an AI we don’t understand any better than we understand ourselves.
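
For the flavor of what “a digital version of evolution” means, here’s a toy genetic algorithm that evolves bitstrings toward a fixed target. Real proposals would evolve something far richer, like neural wiring, but the point stands: nothing in the loop requires understanding the evolved result.

```python
# A toy version of "digital evolution": evolve bitstrings toward a fitness
# target. The target is a stand-in for "behaves intelligently"; note that
# the loop never needs to understand *how* the winning genome works.
import random

random.seed(1)
TARGET = [1] * 32

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.02):
    return [1 - g if random.random() < rate else g for g in genome]

def crossover(a, b):
    cut = random.randrange(len(a))
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in range(32)] for _ in range(50)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break
    parents = population[:10]           # selection: keep the fittest
    population = parents + [
        mutate(crossover(random.choice(parents), random.choice(parents)))
        for _ in range(40)
    ]

best = max(population, key=fitness)
print(f"generation {generation}: best fitness {fitness(best)}/32")
```

Swap the bitstring for a network and the fitness check for a behavioral test, and you have the proposal in miniature: selection does the designing, and you inherit a working artifact with no blueprint.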

So what? That “training to the system” sounds like a simple, even trivial challenge, compared with, say, learning to write or to touch-type. And a trained typist can type much faster than he/she can speak (which is one reason why I doubt speech-recognition software will ever replace the keyboard). With a bit more development, it should be possible for a person to learn to mentally input words or other data into a computer even faster than a skilled secretary can type.

As we can’t even properly define or describe intelligence, especially intelligence on the order of human capability, I’d say it’s BCI by default.

The real result is that BCI will create artificially-enhanced human intelligence, and “hard AI” might ultimately be beside the point. Why reinvent the wheel, when you can just make what you’ve got so much better? Eventually, human intelligence may just evolve out of the brain via this synergy, but that’s a long way off. Machine intelligence can just be grafted on as needed until then.