The exception would be in countries where labor costs are extremely low, where you could simply employ a hundred ditchdiggers more cheaply than a backhoe.
Which leads me to my next point: just because something CAN be done by a machine doesn’t mean it SHOULD be done, from a strictly economic point of view.
A lot of science fiction seems to suffer from the axiom of “never use a regular hammer to drive a nail when you can use a force-field-equipped, laser-guided, nano-polymer-alloy space hammer”.
I’ll vouch for this. Visit any third-world country, or even a modernized one with a high migrant population, and you’ll find plenty of humans fighting asses over elbows for work, including digging ditches for $1.00 a day.
That’s not to say all of those workers survive when the aforementioned ditch collapses in on them, though . . . :smack:
Tripler
Labor is cheap. Unfortunately, life is too . . .
I imagine that if there were actual artificial intelligences, dealing with them would be like dealing with a person who has autism or Asperger’s syndrome. In this case the humans are around to mind the robots, in line with breaking behavioral loops as other people have said. They would also be training the robots, giving them orders, etc. It might also be that, keeping in line with the Asperger’s idea, robots have never been successfully programmed with the Three Laws of Robotics. In that case, they probably need human minders in the area whenever they’re activated, even if all the minders do is stand around and drink coffee.
I’d have to quibble with the “break laws” postulate: I think it would be easier to get an evil, no-good human to break laws than a regulated Asimovian robot, which has already been agreed on. However, since laws are human, and we have had millennia to deal with the human aspects of the law, it would also be easier to obtain a human who would obey all of the laws, since humans have the judgment to know whether a certain action would be legal in the “real world”.
And all law is “real world” anyway, since human judgment happens in all cases so far (unless we gave robots control over the legal system).
So I’d say that the humans would be the lawyers, in both the good and bad senses of the word, with robot assistants to research and vet their ideas, of course.
Slight digression: you should also give further thought to the matter replicators. I’m thinking of a terrifying science fiction novel, “A for Anything” (I think), in which matter replicators had led to a hideously stratified society. “Value” was found only in objects or persons that had not been duplicated; duplicated/recorded humans were slaves, and disposable.
In the scenario you suggest, quite possibly there is no “need” for humans. The interesting question might be: given that there is no economic need for human workers, what sorts of status are available to humans? Masters? Rats in the walls? Toys?
Concerning your androids: if they’re not going to be Asimovian, you might want to consider why not, and what the unintended consequences of this choice might be. If, for example, an android can kill a human, then (as Asimov began by noting) you’ve got the whole Frankenstein motif. Or (as in Eluki bes Shahar’s “Butterfly”/Library novels) what’s to prevent one or more AIs from deciding one fine day that they’ve got more interesting things to do than maintain human society, and that “organic life clashed with their decor”?
I think you’ve got it backwards. When androids are cheaper than humans, the only reason for human labor will be that humans want to work. People will pay to work, just so they can feel like they’re contributing to society, even though the truth is that an android could replace them more efficiently in an instant.