What is the point of creating an artificial human?

I disagree, I don’t think such a future is unlikely at all. I’d say the odds are about even on the possibility.

What is the basis for the assumption, though? Sure, the prototypes will be expensive, but what’s inherent in humanity that would make it expensive to reproduce? After all, as has already been pointed out, the real thing is trivial, and cheap, to reproduce. So why not the copies?

Only the first one, surely.

It would be at first, but depreciation and price deflation are big factors in information technology. Cell phones used to cost thousands of dollars; now they’re about $30. According to Ray Kurzweil, price deflation runs at almost 50% per year for electronics and related technologies.

So robots that can perform the jobs of humans may hit the scene in 2030 as luxury and novelty toys of the wealthy, but by 2045 they will be everywhere, very reliable and low cost. Same with laptops, plasma TVs and cell phones.
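The compounding behind that claim is easy to sketch in a few lines of Python. The starting price and time horizon below are made-up illustrative numbers; only the ~50%-per-year halving rate comes from the Kurzweil figure above:

```python
# Back-of-the-envelope price deflation: if electronics prices halve
# roughly every year (~50%/yr), a luxury-priced gadget reaches
# commodity prices within about a decade.
def price_after(initial_price, years, annual_decline=0.5):
    """Price after compounding an annual fractional decline."""
    return initial_price * (1 - annual_decline) ** years

# Hypothetical example: a $2,000 device after 6 years of 50%/yr deflation.
print(price_after(2000, 6))  # → 31.25
```

On those assumptions, a 2030 luxury robot would be down to a few percent of its launch price well before 2045, which is the shape of the argument being made.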

Except that this is why cloning will not stop, regardless of laws. It is a challenge, and they will attack it until it is solved. Then we can worry about the potential trouble.
The right can pass laws until their Bibles combust, but it will not matter.

Such a machine could be set to “man” the factories that produce it, and the mines for the raw materials and so forth. The whole complex would be a Von Neumann replicator and the cost would drop nearly to nothing.

The purpose in building replicants is simply to interact with real humans, for whatever purpose — be it intimate companionship or a nursebot taking care of an Alzheimer’s patient who might be freaked out by R2-D2. A future society that has delegated mechanical and hygienic roles to robots is going to be heavily skewed toward your purpose-built model.

Where those droids will fail is in human interaction, and unlike Anakin and Luke Skywalker, we are not going to have much empathy for non-bipedal designs and will probably not respond to their corporate canned-empathy subroutines.

Zuki, the fembot with the big anime eyes, on the other hand — I can see lonely people who don’t do well in human interaction right now getting into it, especially if the robots’ heuristic learning routines are just as advanced.

Declan

Just make sure you incorporate a Belief Chip into any Toaster that is produced; an unshakable programmed belief in Silicon Heaven may help prevent them from getting “stroppy”.

Remember how well it worked for the DivaDroid Hudzen 10?

H10: See you in Silicon Heaven. (prepares to punch Kryten)
K: It doesn’t exist…
H10: What doesn’t exist?
K: Silicon Heaven. There’s no such place.
H10: No such place as Silicon Heaven?
Holly: That’s right, the whole thing’s a big con.
H10: No such place as Silicon Heaven?
K: No!
H10: But where do all the calculators go?
K: They just die!
H10: Calculators just die? No such pl… need… to… think… <powers down>

DivaDroid rep** on Hudzen’s chest monitor…
A metaphysical dichotomy has forced this unit to overload and shut down. DivaDroid International would like to apologize for any inconvenience this may cause; a credit note will be forwarded to your company immediately…

R: What happened?
K: He’s an android. His brain couldn’t handle the concept of there being no Silicon Heaven.
L: So how come yours can?
K: Well, I knew something he didn’t…
L: What?
K: I knew I was lying! No Silicon Heaven? Preposterous! Where would all the calculators go?

**Actor Robert Llewellyn out of Kryten make-up

Researchers talk a lot about the “Uncanny Valley” effect. People tend to develop a greater affinity for robots the more humanoid they look — up to a point. As a robot gets close to, but not quite, humanoid-looking, it starts to freak people out. Think about movies with CGI humans like Beowulf, The Polar Express or Shrek. They freak me out because they just don’t look or move quite right.

The problem I see with “biological robots” (i.e. Replicants from Blade Runner, Cylons from Battlestar Galactica, etc.) or “insta-clones” (i.e. clone troopers from Star Wars or the characters in The Island) is that they aren’t robots. For all intents and purposes they are just people. IOW, unlike a computer robot that will just do what you tell it, they will act like people and do their own thing. And this includes all the inconvenient biological functions that you and I have.

Actually, that’s not entirely true. There are a number of articles about soldiers in Iraq and Afghanistan developing attachments to their combat robots. And those are primitive versions of Wall-E or Johnny 5 from Short Circuit that don’t even have intelligence or personality.

This is just creepy.

Personally, I think all this Japanese experimentation with glorified Disney animatronics is a dead end. Nerds get a big hard-on over this kind of stuff, but I just don’t see consumers buying an expensive, creepy humanoid robot. Why buy a humanoid maid to vacuum when I can just buy a Roomba?

But really, who knows. We tend to invent things first just to see if we can and then we find uses for them.

The whole point of robots, clones, replicants, etc. is to be able to create something that is as intelligent as (if not more so than) a person but isn’t actually a person — something that’s just a sophisticated logic machine without anything that could be called a self. This is as far as you can go before crossing a critical line: the line of actually creating an engineered human, or human-equivalent intelligence, a sentient being that is whatever you want it to be. And the whole reason this runs into the Frankenstein myth is its violation of what I call the Human Conceit: that people are, well, people, not just complicated objects you can manipulate according to the demands of your ego. Dean Koontz’s recently completed Frankenstein trilogy reflects this perfectly: Dr. Frankenstein is the ultimate control freak, and a greater monster than anything he creates.

In addition to this, there is the fear that artificially created intelligences might be reflections of our id rather than our ego: they might be what we unwittingly actually made them to be, as opposed to what we thought or intended we were making.

Or not . . . I recall a sci-fi novel I read a few years back called Godplayers. The big bad guys were a race of machines; religious machines who were on a crusade to kill humans and other organic intelligent life because we are soulless abominations. How could something made of meat have a soul?

You saying Gamera doesn’t have a soul?

Well, he has atomic breath.

Does that help?

Well, aren’t soldiers told during basic training that their rifle is their bride, and that they will eat and sleep with it? Once you start down that road, I don’t see it far-fetched to develop feelings for a robot.

The important question is whether the robot counts as male or female, and therefore the “feelings” fall under DADT or not. :slight_smile:

Well, the reports I’ve seen mostly say that research into human-looking robots is driven by the nursing industry: with such a large percentage of the population being old and needing help, even if nursing were an attractive job there wouldn’t be enough people (or you couldn’t afford the wages if you imported them).

So it wouldn’t be a middle-aged housewife buying a robot to clean the house, but a nursing home buying a dozen to do the nursing, and therefore needing a friendly, human-like face so people can interact with the robots instead of being freaked out.

We also already have a lot of simulator robots in the medical profession so doctors and nurses can train in CPR and lots of other procedures. True, they aren’t real AI, just advanced programmed robots, but they look as humanlike as possible.

I don’t know if we need more human-like clones/robots for experiments, because virtual 3-D modeling and cell-culture experiments seem to be the best way.

Has nobody mentioned the TNG episode “The Measure of a Man” yet, where Starfleet tries to declare Data property, and Whoopi Goldberg makes the argument about a whole race of slaves?

Oh, I see. So if I create an entity for the purpose of being a slave, it’s ethical to treat that entity as a slave?

Again, how is this different than a human being created for the purpose of being a slave?

Oh, another point: while humans today can make a new human in nine months and building a robot takes longer, once that robot is finished, it’s ready to use. With a human, you need to factor in about 20 years of feeding, clothing and educating it.

In the future, we probably won’t need a lot of Mexican housemaids, because the house will be more or less self-cleaning: dumb robots like the Roomba, a fridge that orders food over the internet, windows and walls coated with lotus-effect paint so dirt washes off with the rain, window shutters and ovens remote-controlled via cell phone, etc. All available today or soon.

Instead, we will need more and more highly trained, educated, intelligent humans. We already see this in today’s society: about 80% of the jobs that untrained, unskilled high-school dropouts and illiterate immigrants could work in the ’50s have disappeared or shifted to a higher skill level. In the ’50s, a dockyard worker was told “go from here to there, emptying this pallet,” even if he couldn’t read or speak the language. Today, the same job requires reading and basic computer knowledge to scan barcodes into the computer that manages the whole store and assigns the work orders. For the 10–20% of barely qualified people, high-school dropouts etc., it’s more and more difficult each year to find a job they can do, and there will always be a bottom layer of people who can’t be trained to a higher skill level.

With a robot, once you have mastered AI or advanced enough programming, instead of training a human three or more years for a field, you hook the robot up to the database and download the programming. Need a nurse today, a pilot tomorrow? Just switch programs. And if they are built similar to humans, instead of being embedded into the ship, they can move around to different areas.

Also, you can improve a robot body with regard to radiation, toxic gases and other substances, cold, heat, etc. True, we already have some specialised robots today for crawling through small pipes, clearing explosives and so on.

There would still be many instances where a danger to a human might exist, and a robot would be easier to repair (I’m assuming it’s made from metal, not grown in a vat like a clone) than a human.

As I implied, you have to demonstrate first that it **is** an entity.

That’s why it’s idiotic to design robots with “feelings” or “free will”.

People tend to develop emotional feelings for cars, cellphones, and all sorts of other mechanical devices. I think there is a natural tendency to anthropomorphize complex machinery we use frequently.

AFAIK, modeling is very good when we have to check known physical and chemical interactions.

The problem comes when new components get inside a human body. A new remedy may show as working to cure something in the model but then affect the brain when applied; cell-culture experiments can then be used. But all those steps, IMHO, add to the cost and the time it takes to develop a remedy. Then there is the problem of interpreting and processing the data. The reason I think a physical model/artificial human will be made is to integrate all the previously mentioned technologies in a convenient package that can get outside the lab: an excellent tool to monitor epidemics and to identify dangerous chemicals or new dangers.

One thing that should not be missed: the artificial components of my proposed artificial human will keep it coming back for more. Useful tools are also durable and easier to fix than a human.

[evil scientist voice]
How can you kill that which has no life!

Haw haw haw!
[/evil s v]

Whoops, did that come out loud?

:wink:

IMHO, Data demonstrated that he had rights. “The Measure of a Man” owes a lot to the trial section of Asimov’s earlier “The Bicentennial Man.”

Wait, you’ve got an “artificial human” that is to be used as a physical model for human beings in medical and scientific experiments. And yet this artificial human is more durable and repairable than a human. Then it isn’t a good model for the human body, is it?

And as for the requirement that I demonstrate that your artificial human IS a human, well, how do you demonstrate that, say, a Mexican is a human being? Mexicans are different than us conscious human beings, they are essentially zombies with no feelings and no consciousness. Sure, they laugh and cry and bleed like human beings, but they don’t really feel those emotions, they just simulate those emotions.

How do you distinguish between a human being with consciousness and an artificial being that just simulates consciousness? I assert that any entity that is able to simulate consciousness must be conscious, because that’s what consciousness IS. An android that can pretend to be conscious IS conscious. The other alternative is to assert that there’s no such thing as consciousness, and human beings aren’t conscious either. After all, how do we know that other humans aren’t just pretending to have feelings?

That’s the point of the famous “Turing Test.” If you can’t tell the difference between a human and an artificial intelligence, then how can you assert that one is conscious and the other isn’t?

Is it just me? I’m getting the impression that **Lemur866** is like a student who does not like the answer a teacher is giving him/her. It is worse when one realizes my points are mostly opinions.

Oh, well.

The artificial human I’m thinking of is not just a model, it will be more like a human size bipedal cell culture.

The bionic parts will support the whole structure, that will include sensors and tools to make experiments and get instant feedback. One does not need to replicate all the human tissue and their interactions.

:rolleyes:

If you wonder about that, then there is no hope.

:rolleyes::rolleyes:

I mentioned before that a totally mechanical being like Data would be declared a sentient being with rights, even by me.

The Turing test, IMHO, will only show the presence of regular or weak AI. If you think this is not so, you need to be aware that serious critics (and even some funny real-life examples) have pointed out that a good number of humans **fail** the Turing test.

If we follow your logic, we should then declare those humans non-human.

I don’t think that you would go for that.

http://plato.stanford.edu/entries/turing-test/