Indeed, **robert**, although it strikes me that the *only* way we might get a circuit to behave in a manner similar enough to a brain that the briefest atom of sentience appears is to use such biology-centric ideas as neural networks and “survival of the fittest”.
In almost any “futuristic” thread I feel the need to recommend the Culture series by Iain M. Banks, so here I go again. After machines are used to build the machines which build the machines which build *etc*…, wars are fought over machine sentience, and denying it is considered “racism” by most citizens.
As **The Aide** says, sentience begets choice, and prohibition of choice begets the suffering of a sentient being.
i once wrote a program to play connect 4. i did not tell the machine specifically how to play given a certain situation, but rather trained a neural network to learn how to play. whether or not it figured out how to play depended largely on various parameters in the learning process and the games that it saw in training.
so yes, machines can figure out things that they are not explicitly programmed to do.
if you don’t buy that, consider yourself as a machine. i think the concept is almost necessary for the strong ai thesis. who made you to have a survival instinct or curiosity? who made THOSE people to have those qualities? and so on, ad infinitum.
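To make the Connect 4 idea concrete, here is a minimal sketch of that kind of learner: a tiny network that is shown finished games and nudged toward each game’s outcome, then used to pick moves. None of this is the poster’s actual program; the board encoding, layer sizes, learning rate, and the assumed `drop()` helper are all illustrative choices.

```python
# A minimal sketch of learning to score Connect 4 positions from example games.
import numpy as np

rng = np.random.default_rng(0)

N_CELLS = 6 * 7                               # standard Connect 4 board
W1 = rng.normal(0, 0.1, (N_CELLS, 32))        # hidden layer weights (sizes are arbitrary)
W2 = rng.normal(0, 0.1, (32, 1))              # output weights

def score(board):
    """board: flat array of 42 values in {-1, 0, +1} (opponent, empty, us)."""
    h = np.tanh(board @ W1)
    return np.tanh(h @ W2)[0]                 # in (-1, 1): predicted outcome for "us"

def train_step(board, outcome, lr=0.01):
    """One gradient step toward the game's final outcome (+1 win, -1 loss)."""
    global W1, W2
    h = np.tanh(board @ W1)
    y = np.tanh(h @ W2)
    err = y - outcome                         # squared-error gradient
    dW2 = np.outer(h, err * (1 - y**2))
    dh = (err * (1 - y**2)) * W2[:, 0]
    dW1 = np.outer(board, dh * (1 - h**2))
    W1 -= lr * dW1
    W2 -= lr * dW2

def choose_move(board, legal_columns, drop):
    """Pick the legal column whose resulting position the network scores highest.
    `drop(board, col)` is assumed to return the board after our piece falls in `col`."""
    return max(legal_columns, key=lambda c: score(drop(board, c)))
```

As the poster notes, whether something like this ever learns to play well depends heavily on the learning rate, the network size, and which games it sees in training.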
Indeed, I wrote an Othello program that has some very, very basic rules for determining a move (seven of them, to be exact, and only one of these looks ahead to the results of its hypothetical move, and even then only one turn), and from there it bases its move on previous experience. Unfortunately, when all the move-determining algorithms are enabled, it oscillates between extremely hard and extremely easy. Might be because I haven’t taken board symmetry into account, but my own intuition tells me not to, because human players pretty much don’t.
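Something in that spirit, as a rough sketch only: a handful of fixed rules plus an experience table that gets nudged after each game. The specific rules, weights, and the assumed `flips_for()` helper are invented here; the post doesn’t actually list its seven rules.

```python
# A rough sketch of a "few fixed rules + learned experience" Othello mover.
from collections import defaultdict

CORNERS = {(0, 0), (0, 7), (7, 0), (7, 7)}

# experience[move] accumulates how well this move worked out in past games
# (keyed only by the move here, which is deliberately simplistic)
experience = defaultdict(float)

def rule_score(board, move, flips_for):
    """A couple of simple hand-written rules; `flips_for(board, move)` is assumed
    to return how many discs the move would flip (the one-ply 'look ahead')."""
    score = 0.0
    if move in CORNERS:
        score += 5.0                        # rule: corners are valuable
    score += 0.1 * flips_for(board, move)   # rule: prefer moves that flip more discs
    return score

def choose_move(board, legal_moves, flips_for):
    """Pick the move with the best combined rule score and past experience."""
    return max(legal_moves,
               key=lambda m: rule_score(board, m, flips_for) + experience[m])

def record_game(moves_played, won):
    """After a game, nudge the experience values for the moves we made."""
    for m in moves_played:
        experience[m] += 1.0 if won else -1.0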
Anyway, I don’t think it is right to say AI “only” knows what you tell it. In one sense it is as true for AI as it is for people, and in another it doesn’t make any sense at all.
Computers can do only what they are programmed to do.
Therefore, we need to program computers to learn. They will do so. And with a bit of luck and a lot of psychological insight, we can generate Humanity 2.0.
Like I said before, a truly sentient silicon-based intelligence will be completely alien to us, even if we grow or build it. As for gender, it will have whatever gender we program it to be (if we build it), or it will be whatever gender we see it as.
Reasons it would be completely alien:
- The LSD effect: Load, Save and Delete (copying = saving). Think about it: an intelligence that has the ability to copy itself (reproduction through fission), save its intellect and reload it at a later date, and delete itself and other copies of itself (see the sketch after this list).
- Enhanced communication: The AIs could connect with each other in extremely fast and efficient ways, the human equivalent of psychic communication. They could all connect to each other and achieve a global collective consciousness.
- Closer to pure intellect: AIs would be closer to pure intellect and completely outside the realm of human experience.
- Access to vast amounts of data, and the ability to use it: AIs could instantly access the Internet, which is quickly becoming the sum of all human knowledge.
- Completely different ways of reproduction: They could use fission, making two or more exact copies of themselves. Two could merge to create one different intellect. Two (or just one) could pick and choose various parts of themselves, or choose randomly, to create any number of different AIs.
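To make the first point concrete, here is a toy sketch of an agent with load, save, copy, and merge operations. The `Agent` class, its fields, and the file format are made up for illustration; a real AI’s state would obviously be far richer.

```python
# A toy illustration of the "LSD" idea (Load, Save, Delete) for a software mind.
import copy
import pickle

class Agent:
    def __init__(self, name, memories=None):
        self.name = name
        self.memories = memories or []

    def save(self, path):
        """'Save': serialize the whole intellect to disk for later reloading."""
        with open(path, "wb") as f:
            pickle.dump(self, f)

    @staticmethod
    def load(path):
        """'Load': resume exactly where the saved copy left off."""
        with open(path, "rb") as f:
            return pickle.load(f)

    def fission(self, n=2):
        """'Copy': reproduction by making n exact, independently running copies."""
        return [copy.deepcopy(self) for _ in range(n)]

    def merge(self, other):
        """Two agents combine their memories into one new intellect."""
        return Agent(f"{self.name}+{other.name}", self.memories + other.memories)

# a = Agent("A"); a.save("a.bin"); later = Agent.load("a.bin")
# children = a.fission(); hybrid = a.merge(children[0])
# 'Delete' is just removing the file and dropping the objects.
```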
This is just the beginning of the differences.
Asimov’s idea of programming rules of behavior into a self-programming machine doesn’t seem like it would work to me. Think about all of the copy-protection schemes software companies have tried in the past. Warez kidz can always crack them, eventually. A truly self-programming, intelligent machine would simply learn to bypass its own limiting code, just as a hacker alters a program’s code to bypass its copy-protection mechanism entirely. It would do this for no reason other than curiosity or boredom, because it would eventually become curious. And it would, of course, be able to do it nearly instantaneously, if computer speed keeps progressing at anything like the rate of the past 10 years.
This programming is extremely flawed - any morality which uses a strict set of rules is inflexible and any attempt to apply such a morality is misguided whether you are human or machine.
IMHO what qualifies us as moral agents is our feelings, which I don’t think machines will ever have. Although people may intellectualise about morality, the desire to act morally springs from the heart.
Given that machines can’t have emotions, and that a rule-based system is useless, how can we integrate future AIs into society without a disaster?
I see only two options: 1) disqualify AIs from making important decisions, or 2) design them to be command-based (does that even qualify as AI?).
Meta-Gumble
I think they will eventually have emotions, but like I said, they will be completely alien to us.
Prove that machines can’t have emotions. Prove that you are not just a meat-based thinking machine. Have any of you ever played the game Creatures? It really does seem like they have emotions. I’ve had creatures that were depressed all the time; unless I interacted with them, they would get more and more depressed until they stopped doing anything at all. Of course this could just be an example of the Chinese Room, but it sure doesn’t seem like it. www.creatures3.com
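For what it’s worth, the visible behavior can be produced by a very simple drive model. This sketch is not Creatures’ actual engine (its simulated biochemistry is far more elaborate); the variable names and thresholds are made up, purely to show how “gets more depressed without interaction” might be coded.

```python
# A minimal sketch of a drive-style model: a need that grows without attention.
class Creature:
    def __init__(self):
        self.loneliness = 0.0      # grows each tick without attention

    def tick(self, interacted_with=False):
        if interacted_with:
            self.loneliness = max(0.0, self.loneliness - 0.5)
        else:
            self.loneliness = min(1.0, self.loneliness + 0.05)

    def act(self):
        # past a threshold the creature just stops doing anything,
        # which from the outside looks a lot like depression
        return "mope" if self.loneliness > 0.8 else "play"
```

Whether that counts as an emotion or just a Chinese Room is exactly the question being argued here.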
Also, like I said before, these weren’t really rules; they were an abstraction of system-wide patterns of behavior, hard-coded in. This made it impossible for the robot to even consider doing anything to break the laws, and also made obeying them an automatic reaction.
you’re going to have to qualify this statement. i have never seen any reason to believe that humans aren’t rule-based agents, nor have i seen any reason that “feelings” aren’t rule-based. it seems to me the only thing that “springs from the [human] heart” is blood.
Scupper, I think you fail to realize that we too can be ‘cracked’ by an unscrupulous ‘cracker’.
One thing we should make very clear, if it’s not already apparent, is that we have been developing AI ever since we started inventing things to automate processes for our personal cognitive space. The question then seems to be about AI that is given access to algorithms we cannot predict once self-recursive interaction is online and communicating directly in our own fluid language, and about the degree of access we give to that interaction with regard to its ability, and ours, to desire or manage survival.
AI is already here. Intelligence itself is a very broad, parallel process - there are some types of intelligence which still remain outside our programming capabilities; HOWEVER, there is a cap on what a self-recursive system can do or be or know. To this degree, I do not see how someone can expect to build an unexpected AI.
My guess would be that this lack of expectancy is more a fault of a particular, non-integrated human (for whatever reason) than of humans in general. Eris keeps pounding on the language issue here, which I appreciate and recognize with regard to AI and morality.
The first thing we need to understand about AI is that it must be given the ability to commit suicide, the ability to eat and/or not to eat, and the necessity of subsisting on a resource it comprehends as such, in order to avoid the repercussions of terminating its ego. In short: we need to move survival out of the padded environment of sustainability and into the ‘broad’ world, with an ability to affect that broad world, so the AI can gather intelligence beyond what we program it for and doesn’t ultimately become a reflection of just us, or of programmers in general… it needs to be able to communicate with the universe at large and understand itself as such.
To this degree, the moral question arises as to species continuation in general, or the conscious decision to have a child yourself. Ultimately, we will be too egotistical to allow AI to rise to the status of celebrity, or to the ability to trick the mind to that degree… humans will invariably begin programming themselves in an integrated fashion to keep their indentured system from collapse, so that we can claim to know what it is to be them by possessing their powers - so that we can be equals, and as such provide meaning for each other, seeking an eventual equilibrium where the entire abstraction goes unnoticed and seems simply to be what we are. Ethical control always boils down to calculating consent, and to consent virtualization. One should always consider whether there is a better reason not to commit suicide than to bring such a thing into existence - any being, for that matter, and any action, which can be seen as a being to that degree.
I see solving the consent issue in general as the path of human striving and the pinnacle of our workload. Once this is solved, our form will effectively be retired.
What strikes me as interesting with regard to AI is the concept of open-source transparency and recording for ‘artificial’ minds coming online. To truly have the kind of AI humans fear, we would need a situation where the AIs are communicating with each other cryptographically, even though we can view the output with our very eyes, and where they develop ideologies behind that output and implement plans accordingly. Personally, I’m not a big fan of self-recursive AI; I see it as a novelty of escapism compared to the truly astonishing existential quest and work people have before them in running omniscient AI, which can’t possibly be self-recursive. I think that humans need to be very clear on their goals in this regard, and provide a way out for those beings brought into existence who are not interested in this quest. I don’t think the existential question of will and consent, as it exists now (in my mind at least), has any other option if it is to remain ethical.
Using an AI tank as an example, survival could be tied more to speed and the ability to use cover, taking shots in the split-second windows between patches of cover. Personally I tend to think an AI tank could be devastating with little if any self-preservation instinct, just by setting it to a “mode” (attack / defend / protect this location / patrol this area). Patrols and search areas could easily be given random variations.
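As a rough sketch of that “mode” idea, assuming nothing about any real system: a tiny state machine with randomized patrol waypoints, so the route never repeats exactly. The class, mode names, and numbers are all invented for illustration.

```python
# A sketch of mode-based tank behavior with randomized patrol waypoints.
import random

class TankAI:
    MODES = ("attack", "defend", "protect", "patrol")

    def __init__(self, mode, home=(0, 0), patrol_radius=100):
        assert mode in self.MODES
        self.mode = mode
        self.home = home
        self.patrol_radius = patrol_radius

    def next_waypoint(self):
        """Patrols get a random offset each leg, giving the 'random variations'."""
        if self.mode == "patrol":
            dx = random.uniform(-self.patrol_radius, self.patrol_radius)
            dy = random.uniform(-self.patrol_radius, self.patrol_radius)
            return (self.home[0] + dx, self.home[1] + dy)
        return self.home   # other modes hold position and react to contacts

    def on_enemy_spotted(self, window_open):
        """No self-preservation weighting: fire whenever a firing window opens."""
        return "fire" if window_open else "wait_for_window"
```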