If we ever perfect AI and robotics we will have an ethical dilemma...

that is because they will, in essence, be more efficient than most humans at most jobs (depending on how good the AI and robotics are, perhaps almost all jobs). Thus, we will have a technology that could exponentially improve productivity, which typically correlates with increased wealth and standards of living (just ask Alan Greenspan). On the other hand, this wealth would be concentrated in the hands of those who own the robotics patents, software, and means of production. The significant portion of the population (skilled, unskilled, and professional alike) that relies upon salaries or the sale of their services would be largely “excluded” from the party.

Thus, we would be in the position of having to either accept this scenario or instead prohibit its use, knowing that we were doing things less than optimally. Consider all the “lectures” you’ve heard about 99.9% NOT being good enough. Imagine if we had AI/robots capable of 99.9999% accuracy (and perhaps improved actual performance) versus humans capable only of 99.999% (and that’s optimistic). Ultimately, we may have a system of affirmative action for humans! Makes me wonder what the conservatives and Libertarians (including myself) might say about the issue under those circumstances. Assuming Jesus doesn’t return, and that we are not hit by an asteroid or some other equally devastating event, such technology will occur. The only question is whether it will take fifty, two hundred, or maybe a thousand years (even a thousand years is an incredibly short period of time from a historical perspective). Does anyone doubt that within a thousand years (again assuming the worst doesn’t occur) we will have this sort of technology?

This isn’t a new problem. We don’t have perfect AI or fully human-replacing robots, but we’re getting there incrementally and have been ever since workplace automation came into being. People have been losing jobs to increased automation for a very long time. Look up the term Luddite for a good example.

Padeye’s right. Asimov seemed to think that our end would come when we engineer AI with ethics, but I’m not certain of that bit myself. :wink:

as they do even today, that more advanced technology actually creates new, higher-paying jobs. However, with advanced AI/robotics this will no longer be the case. If we ever get to a model which approaches Data (even a small fraction of Data, or the Doctor on Voyager), we will have an automaton capable of producing progeny (if we so enable it with the necessary resources). Furthermore, the AI entity would be capable of doing virtually any other “high end” job, from being a doctor to working as a mathematician. Thus, whereas the Luddites fought the Industrial Revolution, which arguably led to a higher standard of living for most, this technology would increase wealth exponentially but reduce most humans to a state of welfare-like dependency. The only analogue I can think of in history might be the wealthy Middle Eastern countries (think Kuwait, Qatar, the UAE, etc.) where many things are provided free to their citizens.

Roland, I did my best to give a factual answer, but I’ve noticed that you have started a lot of GQ threads that don’t seem to want a factual answer and are better suited to Great Debates or IMHO.

At any rate, I tried to decipher an actual question from your OP and answered it as best I could. We have already faced the dilemma in smaller degrees. That doesn’t mean we have all the correct answers, if any exist.

You pose a hypothetical situation that has virtually no chance of being answered. Not many cultures last a thousand years, and even if ours did, I don’t think we can predict what Western culture will be like then. It may not take a thousand years to make these machines, if in fact they can be made, but we won’t have perfect AI androids very soon.

You do make some assumptions that I won’t agree with. Who says these things can be given away for free? Machines cost money, fuel costs money, and programmers and designers have to be paid. Some oil-rich countries can provide a lot of “free” services for their citizens, but that lasts only as long as the supply of wealth, and that is already coming to an end.

At any rate, the best we can say is that we’ll probably need a different economic model than we have now. If you fire all the human employees because robots are cheaper, then there’s no one left who can afford the products.

I’m not sure whether you’re asking what will happen if it becomes possible to produce all goods and services with no human input whatsoever, or asking whether that scenario is possible.

Clearly capitalism would not work very well when human labour is not needed as a factor of production, as many people would be left with no way of supporting themselves. It would have to be replaced by another system of economic organisation. What you think of this depends, I guess, on whether you consider the argument for capitalism to be a moral and pragmatic one, or a purely pragmatic one.

There is, however, no way of answering the second question at the moment. Not only are we nowhere near a genuine AI yet, we don’t even know if it’s possible.

probably within the next one hundred years (and frankly I believe more like fifty). I am essentially talking about a Star Trek “Data” with emotions (either simulated or real). The scenario put forth by Spielberg in AI is also what I am describing. In other words, automated entities that could do every task, profession, or job better than humans, for nearly twenty-four hours a day, at a fraction of the cost. If such beings are created we will have two choices:

  1. Create and utilize these beings and adapt our society accordingly. Thus, we will have to find an alternative method for distributing resources, since people obviously won’t be able to compete for work.

  2. Suppress the creation and utilization of what is clearly the logical, superior worker. Keep in mind this would probably have the effect of “hurting” or killing people. That is because our “superworkers” would be equivalent to or better than the very best humans. Thus, every doctor could be equal to the world’s best ER doc. Every policeman could be a “crack” shot who only used his weapon when appropriate and didn’t engage in “The Shield”-like behavior. Almost every car or manufactured product would be perfect. Airplane mechanics would do perfect inspections nearly every time (thus preventing crashes that are now missed). My point is that “suppressing” this technology would also create vast ethical issues. Our “AI” CEOs wouldn’t even consider using Enron or Arthur Andersen type tactics. Put one of these creations on The Apprentice and Donald would fire everyone else after the first show. They would be attractive, friendly (except when the situation called for being otherwise), strong as ten people, have IQs beyond what can be measured, etc. I think that this “ultimate” model will take some time to develop (maybe 150 years), but the “basic” model that can replace lower-end jobs (not currently automated) will be here easily inside of fifty years.