I guess I’m trying to figure out how your cyborgs would be a better model for scientific experiments than, say, monkeys. If you’re going to leave out lots of human tissue, how do you know you’re not leaving out the relevant part?
If all you want is a body that’s roughly similar to a human being, then a monkey will do just as well as a half-machine cyborg.
And I seem to have misunderstood…are the artificial humans you postulate supposed to have brains? Are they supposed to walk around and watch TV and carry on conversations at the grocery store? Or what? If it’s just a human-sized bipedal cell culture, why make it bipedal in the first place? Why not just a cell culture?
The point of the Turing test isn’t to declare that entities that don’t pass the Turing test aren’t conscious. The point is to show that it would be perverse to insist that an entity that CAN pass the test ISN’T conscious. If we declare that an entity that can pass the test isn’t conscious then how can we be sure that Mexicans are conscious? Or women? Or anyone outside of ourselves? If we declare that an AI isn’t “really” conscious and is just simulating consciousness, how the heck can we declare that other humans are “really” conscious? What’s the difference between real consciousness and simulated consciousness?
Really, I cannot tell you guys all my ideas; buy my sci-fi book in the future.
Still, there are time considerations, and AFAIK monkeys are also hard to get and raise ethical concerns of their own.
You really want me to give away the store? (Seriously, I’m currently working on a sci-fi tale related to this.) Oh, well, at least there will be evidence of who came up with this first.
The most likely medical application of a life-size android would include an electronic brain interfaced with human brain tissue. This is very speculative, but IMHO we will need the artificial brain to simulate the rest by doing real-time examination and monitoring of the real thing. Again, it does not need to be a full-size brain; surprisingly, real brains are very regular in their structure (white matter, grey matter).
It would basically be the same for the other organs: a small bit of liver tissue surrounded by an artificial one, with instant reports comparing biological results with bionic ones (a toy sketch of what such a comparison could look like is below).
Then, since this android would have access to whole medical databases and experimental results, you have not only a guinea pig; it would also work as a first medical responder.
Of course, this only applies to the medical field. In other fields you only need to figure out how the brain works, and then no biological material is needed in the androids.
Some guy named Nielsen would be interested.
As I foresee it, once mass production enters the picture, a self-propelled medical and testing unit that can tell doctors what is wrong and help in an emergency beats those lab cultures (not for every need a culture serves, but where speed matters or the labs are remote, yes).
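To make the “instant reports” bit concrete, here is a toy sketch. The metric names, numbers, and tolerance are all made up for illustration and don’t correspond to any real instrument or API; the only point is that the android would compare a reading from the biological tissue sample against its artificial counterpart and flag divergence.

```python
# Toy illustration only: compare readings from a biological tissue sample
# with the readings produced by its artificial (bionic) counterpart.

def compare_readings(biological, bionic, tolerance=0.05):
    """Return a simple report of where the two sets of measurements disagree."""
    report = {}
    for metric, bio_value in biological.items():
        art_value = bionic.get(metric)
        if art_value is None:
            continue  # no bionic counterpart for this metric
        divergence = abs(bio_value - art_value) / max(abs(bio_value), 1e-9)
        report[metric] = {
            "biological": bio_value,
            "bionic": art_value,
            "flagged": divergence > tolerance,
        }
    return report

# Hypothetical liver-sample metrics, invented purely for the example.
print(compare_readings({"enzyme_alt": 32.0, "glucose_output": 5.1},
                       {"enzyme_alt": 33.0, "glucose_output": 6.4}))
```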
Likewise, it is ridiculous to declare that the ones that pass it are conscious. That is not the purpose of the test.
Cite? We are now getting into territory that should have evidence. The Turing test does not measure consciousness.
As pointed out many times now, we must go further than a simple Turing test.
We need to look for specific functions and benefits of consciousness, and there are many of them, and philosophers do not agree on all of them. However, one benefit that I consider important is our memories, including the emotional ones.
Those real memories you cannot copy; artificial memories can be. Of course this will be the situation for the next few centuries IMHO; farther into the future we can only guess.
Another use for imitation human bodies is teleoperation. With a headset & control gloves, or a controlling exoskeleton, or a brain-to-machine link (depending on how good your available technology is), you remotely take control of the hands & head or the full body of the simulated human and act through it. All sorts of practical and social uses right there; it would almost be like teleportation if your technology is good enough. Hook in, and you are instantly halfway around the world without having to take a long plane trip. Even if it’s just a headset and gloves setup it would be useful; you’d want a human appearance on the thing so it could mirror your facial expressions as well as gestures.
Not really. Do you want a robot that can’t decide to NOT kill you if its programmed function tells it to? Do you want a robot that will kill and destroy if told to do so because it doesn’t care?
A robot is either programmed with safeguards not to kill you or it isn’t. If it’s programmed to kill you, it will try to kill you. If it is not, then it will stop.
Safeguards like that are unlikely to be reliable. A smart machine will most likely be able to work around, subvert or remove any such safeguards. A machine that doesn’t want to won’t try.
And whether or not it is deliberately programmed to kill doesn’t make that much difference; it just needs to be programmed to do something that at some point would be expedited by killing, even if the programmer never foresaw it.
It’s one thing to say “This robot is programmed to never kill humans”, it’s another to actually code that requirement. Asimov’s laws can be stated easily, but how do you implement them?
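To illustrate why that gap is so wide, here’s a minimal sketch (all names are hypothetical; this is not any real robotics API). The “safeguard” wrapper itself is trivial to write; the entire difficulty hides inside a predicate that nobody knows how to implement, because it would have to predict every downstream consequence of an action.

```python
# Purely illustrative: Asimov's First Law is one sentence in English,
# but as code the whole burden lands on an unimplementable predicate.

def would_harm_a_human(action, world_state):
    # This function would need to foresee every consequence of `action`
    # in `world_state` -- that is the unsolved part of the problem.
    raise NotImplementedError("this is the hard part")

def safeguarded_execute(action, world_state, execute):
    # The safeguard itself is easy; it is only as good as the predicate above.
    if would_harm_a_human(action, world_state):
        return None  # refuse the action
    return execute(action)
```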
Whenever we talk about an AI being “programmed” to do this or that, we’re mostly talking nonsense.
How does a robot “want” to do anything? Machines do exactly what they are told and nothing more. They don’t have a sense of free will or self-determination. They have instructions and parameters. We don’t even know how we would begin to build AIs that could do relatively simple human tasks, like deciding on their own what they want to do in the morning.
The sci-fi notion of a robot getting struck by lightning and, by magic or whatever, spontaneously becoming self-aware is purely Hollywood fiction.
Only very limited machines. We already have machines that can go beyond “do what you are told and nothing else”. A machine capable of matching the performance of a human would have to be at least as flexible and creative.
Which is why this is a hypothetical about the future, and not a debate over Obama’s AI Rights Bill. Your question in the OP pretty much presumes that those problems have been solved, since the answer “we can’t build them in the first place” is rather obvious. As for free will: an AI might well never develop a sense of free will, because that’s just a defect. The sense of “free will” we have is nothing more than our blindness to most of our own mind. A form of ignorance.
It doesn’t need to be self aware to potentially be dangerous. As for it developing in its own direction; a sufficiently advanced machine can be expected to do that without any need for magic.