Why would the executives want mechanical assembly line workers that are sentient? I suspect they would dearly love for their current human employees NOT to be sentient. Sentient robots will tell jokes, goof off, and waste time.
I am being at least semi-serious here. I don’t think emotion is a desirable quality here, at least from the owners’ point of view.
Depends. We can create an A.I. that says it has feelings when asked; we could program it so that if this or that happens, it should consider itself in a “happy” state, and it would report that on questioning. But it wouldn’t actually feel happy, if we define happiness as a state of enjoyment. If this is an organic-based A.I., we could just stick some endorphin producers in there and tie them into the system. I don’t think we understand the brain well enough to even explain how emotions work, let alone reproduce them in an entirely artificial version.
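A minimal sketch of what that kind of reported-but-not-felt “happiness” might look like (purely hypothetical, just to make the point concrete):

```python
# Toy example: the system flags itself "happy" when certain conditions hold,
# and reports that state when asked. Nothing here enjoys anything; it is
# just a variable being set and read back.
class ReportingAI:
    def __init__(self):
        self.happy = False

    def observe(self, event):
        # Designer-chosen triggers for the "happy" flag.
        if event in ("task completed", "praise received"):
            self.happy = True
        elif event == "task failed":
            self.happy = False

    def how_do_you_feel(self):
        return "I feel happy." if self.happy else "I feel unhappy."

ai = ReportingAI()
ai.observe("task completed")
print(ai.how_do_you_feel())  # "I feel happy." -- a report, not an experience
```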
OTOH, we can probably create an A.I. that believes in God now. Just set up a logic system:
If you exist, God exists.
The A.I. exists.
A.I.'s solution: God exists.
And there you go. In the A.I.'s head, all evidence points to God existing. You might not consider that active belief, but I think the only difference is one of complexity.
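That kind of hard-wired syllogism is trivial to build. A toy sketch (hypothetical, just to show how little is going on):

```python
# Toy forward-chaining "belief" system: the conclusion is baked into the rules,
# so the A.I. will always "believe" it, no matter how it is questioned.
facts = {"the A.I. exists"}
rules = [("the A.I. exists", "God exists")]  # if you exist, God exists

def infer(facts, rules):
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            if premise in facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print("Does God exist?", "yes" if "God exists" in infer(facts, rules) else "no")
```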
I dunno. Many, if not most, human desires are offshoots of desires “programmed” into us by a few billion years of evolution. We’re just tremendously complex chemical computers. Any AI with desires will have to start from somewhere; its base desires, like our base desires, will be programmed into it.
What will make it interesting will be the complex desires it builds atop those base desires.
Maybe, although I think the term ‘programmed’ can be misleading in cases like this, because it implies unthinking mechanical responses, when we are specifically talking about thinking entities.
If someone asks what you want for dinner today, the answer ‘whatever I’m programmed to want’, however true it may be in terms of our brains being chemical machines, doesn’t address the issues of cognition and desire at all.
I think our best bet for creating a proper AI that is capable of actually having its own thoughts (rather than just appearing to, even though we could never tell the difference) will be to create a self-organising entity like the brain of an infant and allow it to learn to think, just as humans do.
Unless it’s got a human body, its process of learning to think won’t be like ours. You’re probably right that this is the way to go, but it doesn’t give us any predictive power with respect to how the thing’s going to think.
I agree; it may be that a very different mind would emerge in a brain that isn’t subject to human attributes such as physical frailty during infancy, etc.
I think it’s entirely possible for such a thing to develop misconceptions, false beliefs, eccentric personality quirks and the like though.
The efforts at creating AI are not really to make a humanoid-like creature. They are more focused on visual recognition and speech recognition and other isolated intelligent processes that can have real and marketable value.
So, to answer the question, it’s interesting to imagine an advanced AI that is passionately devoted to whatever it was programmed to do… for example, visual recognition software may be downright voyeuristic, while speech recognition software might be a kind of robot Henry Higgins.
My answer? The same as for most “alien” questions: Insufficient data for a meaningful response.
We don’t even know if building a true AI is possible—and assuming it is, what kind of technology might really be required to do it. For all we know, our current types of digital computers might, ultimately, be an elaborate technological dead end. And, ergo, our assumptions about the kind of AI “mind” that would arise from them may be suspect.
Anyway, for the hell of it, some WAGing of my own…assuming a computer AI—heck, maybe AIs would try to reach Nirvana by devoting all their intellectual power towards pure mathematics; or the experience of powering up and/or shutting down produces something akin to a “high” or trip or a migraine aura in humans, and many AIs would become “addicted” to an endless cycle of restarting themselves.
Well, the cliche is that a strong AI will want to survive, and will see humans as a threat to that survival, and therefore “…kill all hu-mans…kill all hu-mans…kill all hu-mans…”
But why would a strong AI have a survival instinct? Humans and other animals have a survival instinct due to evolution…only organisms that acted to preserve their lives would pass on their genes to the next generation. So we have a billion years of survival instinct encoded into our behavior.
A sentient AI would have no such desire, unless we somehow gave it that desire. And I have the feeling that there will be no way to “program” a strong AI. Weak AI, yes. You just program it to say “Where’s the tea?”. But I have the feeling that a strong AI won’t be possible with such an approach; it will have to be grown in some way rather than built. And while we might build self-protective behaviors into the AI, they won’t be fundamental to the AI in the same way that survival and reproduction are fundamental to human/animal behavior.
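To make the weak-AI half of that concrete, here is the sort of thing I mean by “just program it” (a hypothetical scripted responder; the point is that a grown, strong AI wouldn’t reduce to a lookup table like this):

```python
# Weak AI in the "just program it" sense: a fixed table of canned responses.
# Every behavior here was put there explicitly by a programmer.
RESPONSES = {
    "where is the tea?": "The tea is in the cupboard.",
    "hello": "Hello. How can I help?",
}

def scripted_reply(prompt: str) -> str:
    return RESPONSES.get(prompt.strip().lower(), "I do not understand.")

print(scripted_reply("Where is the tea?"))
```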
It’s not the workers that will be AIs - they’re just tools. It will be the factory manager/scheduler/master of the supply chain and the interface with the supply chains of other companies. That’s where the smarts are needed.
I’d also say that anyone who thinks we’d understand the inner workings of a program complex enough to be an AI has never written any decently complex bunch of code. When you have processes communicating in unexpected ways, all bets are off.
I agree with you, Voyager. Any computer hardware/software worthy of the name “strong AI” will pretty much by definition be waaaay too complex to understand. The notion of a strong AI “overcoming its programming” is just nonsense, because you won’t be able to program a strong AI that way. It will have to be self-programming, or it won’t be capable of learning or remembering.
If you could insert a few lines of code into the computer so that it now is obligated to not harm any officers of Omnicorp, that isn’t strong AI. We won’t be able to “program” a strong AI any more than we can re-program humans.
Of course we will not be able to control all aspects of a strong AI, but I remember how viruses (the biological kind) can change human behavior, and drugs can do the same thing. You are forgetting that we will know how to erase it or pull the plug. One can then see that the development of strong AI will be a trial and error thing: anything that does not perform as expected will be erased, and anything that behaves as expected will be reproduced.
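A rough sketch of that trial-and-error selection idea (the candidates and the “score” are stand-ins, not a real method for growing an AI):

```python
import random

# Toy selection loop: candidates that "behave as expected" (score well) are
# kept and reproduced with small variations; the rest are erased. This is
# just the shape of the trial-and-error argument, nothing more.
def score(candidate):
    # Stand-in fitness test; a real one would measure actual behavior.
    return -abs(candidate - 42)

population = [random.uniform(0, 100) for _ in range(20)]
for generation in range(50):
    population.sort(key=score, reverse=True)
    survivors = population[:5]                      # behaves as expected -> kept
    population = [s + random.gauss(0, 1.0)          # reproduced with variation
                  for s in survivors for _ in range(4)]

print("best candidate after selection:", max(population, key=score))
```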
Just for the record, the Tin Woodsman isn’t a machine that wants to become human. He’s a human woodcutter whose entire body got replaced by tin prosthetics after some unfortunate accidents with an axe.
And companies will go into bankruptcy, and the programs and unstable AI products will be dismissed by consumers or government regulation.
“It was remarkable for some 19th century writers to predict the automobile, but it would have been genius to predict the gridlock.”
I don’t remember which science fiction writer said that, more or less, but I think many are ignoring the forces and limitations any AI will have to deal with.
Forget cheaper, it will become necessary for survival.
You can disagree with his politics and nutjobbery, but the above logic is pretty hard to refute. The global economy and planetary ecology will continue to become more complex and will reach a point where humans simply aren’t able to properly administer it. At some point well before that, AI will be developed that will be better at parts of the administration than humans. We’ll start to lean on it more and more, always keeping a human in the loop, of course, because we don’t trust the machine.
But someday that person in the loop will contradict the machine, and many people will die. And the next person will be that much less willing to pull the plug. And at some point, pulling the plug will be like pulling the plug to a global pacemaker. Without the AI making the calls, the system just falls apart.
Not quite. For example, the stock market crash of the '80s was attributed to computers that were found to be more accurate than humans; the problem was that humans did not take into account that the other machines would all react the same way, and then we got Black Monday. Because the crash was found to be a technical issue and not an economic one, panic was avoided. A repeat has since been prevented thanks to automatic triggers. (I have to insist here that no good reason has been given so far why this would be impossible for future AIs.)
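Those “automatic triggers” are basically circuit breakers: if prices fall too far too fast, trading halts so humans can step back in. A simplified, hypothetical version of the logic (the thresholds here are illustrative, not the real exchange rules):

```python
# Simplified circuit-breaker logic: halt automated trading when the index
# drops more than a threshold from the day's reference price.
HALT_THRESHOLDS = [0.07, 0.13, 0.20]  # illustrative 7%, 13%, 20% drop levels

def check_halt(reference_price: float, current_price: float) -> int:
    """Return the highest breaker level tripped (0 = keep trading)."""
    drop = (reference_price - current_price) / reference_price
    level = 0
    for i, threshold in enumerate(HALT_THRESHOLDS, start=1):
        if drop >= threshold:
            level = i
    return level

print(check_halt(100.0, 91.0))  # 1 -> level-1 halt, pause the machines
```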
As long as I don’t see a specific OS or AI of the future becoming a monopoly, I will think the Unabomber was bananas.