BCI could enhance teleoperation, facilitate control of powered exoskeletons, open the way to virtual reality subjectively indistinguishable from reality, perhaps even mechanically assisted telepathy – but how could it enhance human intelligence?
Bigger memory bank; better recall from that memory bank; more rapid computational power; ability to assimilate information more rapidly; stuff like that. Feed our superior organizational and pattern-recognition capabilities, for instance, with information that we often use computers to access or generate now. Graft the digital computer into the brain such that they have seamless interoperability.
Heh, the first rule of AI is when a layman states confidently that something is “trivial”, it’s going to take at least 30 years.
The problem with the technique used above is that the accuracy rate of the system falls rapidly as the number of unique things it must recognise increases. You could probably recognise 2 things with 99% accuracy and maybe 26 things with 50% accuracy but anything more is probably completely unusable. The only way to get around the accuracy/response problem is to start using external cues such as word frequency and word history. In short, you need semantic understanding. These are the exact same problems that speech recognition has been grappling with for the last 30 years without much success so I doubt that BCI will magically be able to solve them.
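The "external cues" idea can be made concrete with a toy sketch. Everything below is made up for illustration (the function names, the 50% accuracy figure, and the three-word vocabulary are all assumptions, not a real BCI decoder): a noisy per-letter classifier is combined with a word-frequency prior, so a common word can win even when one letter is misread.

```python
# Toy sketch: a noisy per-letter classifier rescued by a word-frequency
# prior. All names and numbers here are hypothetical.

def letter_likelihoods(observed, alphabet, accuracy=0.5):
    """P(observed reading | true letter): the classifier is right with
    probability `accuracy` and spreads the rest uniformly."""
    spread = (1 - accuracy) / (len(alphabet) - 1)
    return {c: (accuracy if c == observed else spread) for c in alphabet}

def decode_word(readings, vocabulary, word_freq, alphabet, accuracy=0.5):
    """Pick the word maximising P(word) * prod P(reading | letter)."""
    best_word, best_score = None, 0.0
    for word in vocabulary:
        if len(word) != len(readings):
            continue
        score = word_freq.get(word, 1e-9)
        for reading, letter in zip(readings, word):
            score *= letter_likelihoods(reading, alphabet, accuracy)[letter]
        if score > best_score:
            best_word, best_score = word, score
    return best_word

alphabet = "abcdefghijklmnopqrstuvwxyz"
freqs = {"the": 0.05, "thy": 0.001, "tho": 0.0005}
# The classifier misread the last letter as 'y', but the frequency
# prior still picks the far more common word.
print(decode_word("thy", freqs, freqs, alphabet))  # → the
```

This is exactly the word-frequency trick mentioned above, and also exactly where it stops: the prior knows which words are common, but nothing about which word makes sense in context.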
But I was talking about BCI, not AI. Why, in principle, would “semantic understanding” be essential to BCI? Your post includes what is intended to be an explanation of that but I don’t understand it at all. I’m thinking of BCI as being equivalent to operating the controls of a car, without the physical intermediary of your hands and feet; you don’t need “semantic understanding” for that.
All BCI is AI.
Cars were designed to be easy to operate: one control affects one parameter of the car. Brains weren't designed that way. The signal that comes out of the brain needs to be decoded and interpreted in some way. The problem is, there is no 1-to-1 mapping between a brain signal and an action: multiple brain signals can correspond to a single action and, more importantly, the same signal can sometimes correspond to multiple actions depending on context. Thus, in order to differentiate, you need to actually understand what is going on, not just treat it as an arbitrary string of symbols.
I guess one way of demonstrating it would be like this: take a look at this optical illusion. To a human, the colours of squares A and B are clearly different. However, the RGB values of the squares are exactly the same. There is no way for a computer, just looking at raw RGB values, to tell that these squares would look different to a human. The only way for it to do that is to understand that it is looking at a picture of a checkerboard with a green cylinder casting a shadow. Now, that is a Hard Problem™ and one that's not likely to be solved within the next 50 years. Similarly, the BCI you are talking about would require similar semantic understanding and could be expected within about the same timeframe.
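The point is almost embarrassingly simple to state in code: a program working from raw pixel values finds nothing to distinguish. (The RGB triple below is illustrative, not measured from the actual image.)

```python
# Raw pixel values from the two squares of the checkershadow illusion.
# The specific value (120, 120, 120) is illustrative only.
square_a = (120, 120, 120)  # the square that looks dark to us
square_b = (120, 120, 120)  # the square that looks light, in the shadow

def pixels_identical(a, b):
    """All a raw-RGB comparison can ever tell you."""
    return a == b

print(pixels_identical(square_a, square_b))  # → True: no difference to find
```

The perceived difference lives entirely in the scene interpretation (checkerboard, cylinder, shadow), which is precisely what the raw data doesn't contain.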
I have a couple of naive questions about developing AI. How would it really be possible to develop a strong AI if it cannot manipulate its environment? All life, from single cells to mammals, moves about, has some sensory ability, has a need to consume and procreate, and is subject to dealing with the world around it in numerous ways. How would it be possible for software in a box to ever learn without following the same path? Especially without experiencing "pain" or "growth"?
Also, I’ve heard of groups of simple robots that operate individually using basic algorithms whereby the group can “co-operate” together doing simple tasks with identical goals. Would a “hive mind” which records various successful strategies be able to list them, and then somehow re-program the group by creating a tree of past useful strategies that can be drawn upon? Or better yet, if the individuals could do this. Difficult concepts, I’m sure. What are the main barriers to doing something like that, or is the matter of defining those barriers the $64,000 question?
I’d think the problem would be the noise. Think about when you type a paper. Do you do it all in one piece, or does your mind wander, considering possibilities until you hit on the words that should go onto the screen? Or sometimes the words pour out without any mental intervention, or so it seems.
Unless this gets solved, every document prepared with BCI will read like Ulysses.
Well, robotics is the application of AI principles in a real-world environment, and robots can manipulate their environment and learn from “pain”. We haven’t managed to get very far with growth or reproduction yet, which could be the next big thing. However, another approach is to place AI in a simulated world. Such simulated AI can experience virtual pain and grow and reproduce many millions of times faster than in the real world. If we make the simulation accurate enough, then it should be possible to transfer simulated AI into the real world and vice versa.
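"Virtual pain" in a simulated world can be sketched in a few lines. This is a minimal, entirely made-up example (the hazard position, rewards, and learning rate are all arbitrary): an agent wandering a 1-D world learns to value moves toward a hazard cell more negatively, purely from a pain signal, and the whole simulation runs as fast as the CPU allows rather than at real-world speed.

```python
# Minimal sketch of learning from "virtual pain" in a simulated world.
# All parameters are arbitrary; this just illustrates the idea.
import random

HAZARD = 2          # stepping onto this cell "hurts"
ACTIONS = [-1, +1]  # move left or right on a 1-D line

def train(episodes=2000, seed=0):
    rng = random.Random(seed)
    value = {}  # learned value of each (position, action) pair
    for _ in range(episodes):
        pos = 0
        for _ in range(5):
            action = rng.choice(ACTIONS)
            new_pos = pos + action
            pain = -1.0 if new_pos == HAZARD else 0.0
            key = (pos, action)
            # Nudge the stored value toward the pain signal just received.
            value[key] = value.get(key, 0.0) + 0.1 * (pain - value.get(key, 0.0))
            pos = new_pos
    return value

v = train()
# After training, stepping from cell 1 toward the hazard is valued
# worse than stepping away from it.
print(v[(1, +1)] < v[(1, -1)])  # → True
```

Running millions of such episodes takes seconds; the real-world equivalent (a robot bumping into things) would take months, which is the speed argument made above.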
Yes, there are a number of interesting things that occur when you allow robots to interact with each other. However, programming a hive mind is very different from programming a single robot. You can’t really do anything that requires complex co-ordination or knowledge about the complete state of something.
While I realize that you included animals as a precursor argument, the fact that several groups like PETA, save-the-forests, save-the-whales, etc. exist makes me think that there will be a segment of the population that will demand some sort of rights for the animatronic Asimovs.
I think that there would be precedent here, with the synthetic actor debate. I believe that the feds specifically included synthetic actors in regards to the manufacture of kiddie porn, so there probably would be restrictions on some of the more upscale sexual aids.
This already exists to a certain point: the Aegis cruisers have a really good computer setup that allows for tracking and interception of a lot of missiles. Including the AI in aircraft and tanks would make for a logical progression, as long as some backup is left for human intervention. Right now, Skynet is a really cool fiction that I would hate to become reality.
Combat droids, on the other hand, are probably better off in the realm of fiction. After reading mucho science fiction, the bipedal design is considered the worst, with the spider design being the most optimal. If you can think of some sort of military application that would make these robots desirable, one that can't be handled right now by normal infantry, then I could see changing my mind.
Fine for limited applications like minefield clearance / hazmat applications, which are most likely going to be driven by the civilian market, not the military one. But these are not the droids you're looking for.
Probably not workplace safe
This is your basic synthetic sexual aid, but it could probably be used for torture purposes if the AI has some sort of pain feedback that would mimic a human's. You will note the price is seven thousand dollars American; adding an endoskeleton and neural network that would mimic humanity's would probably increase the price to a level that's cost-prohibitive. You may as well get a human for a fraction of the price.
As well, the kind of person that would do that sort of thing probably would not be sated by a synthetic session, so it's kinda hard to see this as a big seller.
Note to mods: not sure if this contravenes any commercial product advertisement rules; please advise if against regs.
Declan
I’m not saying that some people won’t support rights for AIs; I’m claiming that the vast majority probably won’t.
That just means they won’t be made child shaped, at best. Besides, things like this will be the toys of the upper class for quite a while I expect; as a rule, the law doesn’t apply to them.
Me neither. Robot tanks and planes, yes; android soldiers, no. One exception would be Terminator-style infiltration units.
That’s the advantage of an AI. A mere mimic program doesn’t actually feel pain or suffering; an AI could be designed to. Humans might be cheaper, but they’re riskier. Besides, with enough sentient robot slaves, manufacturing costs for just about everything should go way down.
The president is an android sex bot? Well, that would explain some things ;).
I am not gonna comment on the choice of the man's cigars, lol.
Declan