I am a pacifist and oppose any form of violence. However, my intention in this thread is solely to comment on the idea put forth in the OP - that computers can become so intelligent that they will act on their own - and to express my skepticism (to put it mildly).
You are reading an enormous amount into my response that is not there. I didn’t say anything was impossible. I have no idea where you are getting that.
I was responding only to your suggestion that just because your phone prompts you about things you might be interested in, it is somehow exhibiting initiative or free will. Your phone did not assess the world and decide that it might be helpful to tell you about things it thinks you are interested in. It is doing precisely what it was programmed to do, and that is not intelligence or free will. It’s just pretty good programming.
Is there any attention paid now to the type of idea underlying The Emperor’s New Mind by Roger Penrose? Essentially, “You might as well stop worrying, the kind of AI you’re imagining can’t exist in the future, and here’s why: [insert long book here]”
There is some question as to whether even humans have free will, or whether it is an illusion.
People are working on that. There was much presented at the conference in Japan on this very subject. Currently, it is very limited but it will certainly improve.
Well, this is pretty incorrect. My own research, and I’m hardly unique, is on AI algorithms that infer other algorithms, and this is seen as one of the potential paths to general AI. If you think about it, the human brain is a wet implementation of an algorithm that creates solutions (algorithms) to problems in some general way. There are plenty of reasons why you might want to do this.
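To make the “algorithms that infer other algorithms” idea a bit more concrete, here is a deliberately toy sketch (my own illustration, not a description of my actual research code) of enumerative program synthesis: given a few input/output examples, search over compositions of primitive operations until one reproduces them. Every name and primitive here is invented purely for the example.

```python
from itertools import product

# Toy enumerative program synthesis: find a composition of primitives
# that reproduces the given input/output examples. This is only an
# illustration of "an algorithm that infers another algorithm".

PRIMITIVES = {
    "inc":    lambda x: x + 1,
    "double": lambda x: x * 2,
    "square": lambda x: x * x,
    "neg":    lambda x: -x,
}

def synthesize(examples, max_depth=3):
    """Search all pipelines of primitives up to max_depth and return
    the first one that matches every (input, output) example."""
    for depth in range(1, max_depth + 1):
        for names in product(PRIMITIVES, repeat=depth):
            def run(x, names=names):
                for name in names:
                    x = PRIMITIVES[name](x)
                return x
            if all(run(i) == o for i, o in examples):
                return names  # the "discovered" algorithm
    return None

# The target algorithm (unknown to the searcher) is f(x) = (x + 1) * 2.
examples = [(1, 4), (2, 6), (5, 12)]
print(synthesize(examples))  # e.g. ('inc', 'double')
```

Real systems search vastly larger spaces with much smarter guidance, but the shape of the problem - one algorithm searching for another algorithm that solves the task - is the same.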
There’s already plenty of machines that can solve problems better than their designer, unless you’re thinking that the programmer co-opts the found solution by being the one the coded the AI in the first place. I guess it depends on how you’re defining think.
(Missed edit window.) Although, as I’ve said in many other threads, the current state of AI is mainly a computational tool that enhances human intelligence. Certainly, there’s no expectation that general AI is just around the corner (although I’m trying very hard to make it so, I don’t hold much hope of real success in this regard; I just hope I get cited by the person who eventually does it!). To some degree, though, it appears almost inevitable as progress continues unabated towards an AI person (a general AI with desire/intention). But who knows? There could be a truly insurmountable hurdle to general AI, but I doubt it.
I would not say that it is theoretically impossible for this to happen, but neither would I agree that it is inevitable. Not “when” but “if”.
The Singularity first assumes that a machine can make a machine more intelligent than itself. So humans (really just wetware machines) could make a more intelligent computer, which could make a computer even more intelligent than itself, and you get an explosion of computer superintelligence that then takes over the world and extinguishes the human race.
As described by the Singularity theory.
I happen to lean towards biomechanical reductionism. Anything that a person does can be described in terms of biological processes. Emotions correlate to chemical processes. The more we learn about the brain, the more we learn about how people think and how we make decisions (we often make decisions before we are even aware we have done so). There is no reason to think that this complexity cannot be recreated in the form of a computer. It seems to me that the barrier is one of understanding the complexity of the brain (we are not very far along with that) and our ability to create comparable complexity with comparable processing speed in a machine (the technology may be developed one day, but I doubt whether we can harness it to make something more intelligent than ourselves). There would also be the question of consciousness, because we don’t really understand what it is in humans, much less whether a non-biological entity can have it.
Here is a thought experiment (which I did not invent; it comes from a Discover magazine article I can’t find a cite for). Suppose we know the exact mapping of every neuron, dendrite, synapse, and axon (and whatever else; I’m not a neurologist) in a brain. Suppose we also know how to make each of those things artificially. Now we replace one neuron with a silicon (or whatever) duplicate. Is that person still human? Do they still have emotions? Are they conscious? Now continue the replacements until the entire brain has been replaced by artificial parts. Is that person still human? Do they still have emotions? Are they conscious? If so, we have created an artificial human that is indistinguishable from an actual human. If not, then at what point in the replacement process did we cross the line? It’s a Ship of Theseus problem.
(I will add that in reality an intelligent machine is not simply a part-for-part replacement of a brain, it’s a whole different thing. But the principle still applies.)
By the way, could you link to the discussion page of the science website you are talking about?
Humans have been making labor redundant since, well, before we were humans.
It has actually had the opposite effect: rather than leaving us unable to provide for ourselves, it has made us far better at it.
This doesn’t mean there aren’t serious challenges ahead, but your argument is built on a flawed premise.
There is a popular misconception that sociopaths lack emotions, a notion with no basis in fact. While some people may be particularly adept at concealing affective display (emotional behavior), it is literally impossible for any human being to not have emotion. The affective systems in the brain are in some of the most primitive areas of vertebrate brain architecture, the limbic system (sometimes referred to as the paleomammalian complex, although essential elements of the limbic system are present in all vertebrate animals), and affective responses underlie all conscious thought, being an essential element in memory formation and in the processing and integration of sensory information.
It should be understood that the current approach to machine cognition is based upon heuristic behavior; that is, learning by pattern recognition and repetition which develops ‘preferred’ pathways in ‘neural networks’. Although this is very crudely analogous to how the brain learns, our brains come pre-baked with a wide array of responses that are not learned (although they can be modified by later experience). No machine cognition is assembled with anything resembling the vertebrate affective systems, and in fact it isn’t clear how you could build such a thing into a machine intelligence; affective responses are heavily driven by neurotransmitter interactions which modify the potentiation of neurons to produce emotional responses. We understand relatively little about how this works even in the human and other mammalian brains, and there is no attempt or even a credible methodology to replicate this in a machine system, nor is there really a practical use for it in application.
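To illustrate what “repetition developing ‘preferred’ pathways” looks like in the crudest possible form (a toy sketch of my own, not how any production system is actually built), here is a single artificial neuron whose connection weights are nudged by repeated exposure to examples until it reliably reproduces the logical AND function:

```python
# A single artificial "neuron" learning the logical AND function.
# Repeated exposure to examples nudges the connection weights, which is
# the (very crude) sense in which "preferred pathways" develop.

import random

def step(x):
    return 1 if x > 0 else 0

weights = [random.uniform(-1, 1), random.uniform(-1, 1)]
bias = random.uniform(-1, 1)
rate = 0.1

examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

for _ in range(100):                      # repetition
    for (a, b), target in examples:
        out = step(weights[0] * a + weights[1] * b + bias)
        error = target - out              # how wrong were we?
        weights[0] += rate * error * a    # strengthen/weaken "pathways"
        weights[1] += rate * error * b
        bias += rate * error

# After training, the learned weights reproduce AND on all four inputs.
for (a, b), target in examples:
    print((a, b), step(weights[0] * a + weights[1] * b + bias), target)
```

The point is only that the programmer specifies the update rule and the examples; the “pathway” strengths that do the work emerge from repetition. Nothing in this resembles the pre-baked affective machinery described above.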
What an “AI psychoanalyst” would require to interact with a patient is not emotion per se but empathy; that is, the ability to interpret and reflect or complement the patient’s emotional responses well enough for the patient to believe the analyst ‘understands’ them. Current systems are terrible at this even at the cursory level of basic social interactions, and while there is a lot of research going into how to make interfaces that better reflect human responses, the thrust of the effort is to produce an interface that mirrors the user’s own empathic focus rather than trying to replicate some kind of internal, human-like emotional process.
In fact, in vocations requiring empathy and human contact, it is unlikely that machine intelligence systems will replace humans in the foreseeable future. A physician or psychotherapist may use an expert machine intelligence system as an aid and tool, but a system that could wholly replace human contact is beyond anyone’s practical expectation in synthetic cognition. In fact, when machine intelligence systems become ‘smart’ enough to display sapience, they will almost certainly ‘think’ in very different ways than we do, and their ‘creative insights’ may not even make much sense to us outside of practical applications; researchers are already finding this to be the case with machine learning systems, where they do not understand how the models such systems build internally actually work.
While that is true in broad strokes, there is something more unique going on here. Past innovations resulted in machines that could physically do more than a human, or allowed for greater communication and permanence of data, or performed rote calculations faster than a person, but none of them replicated or replaced intellectual labor or developed novel insights, and they could not function without direct human action upon them. But generalized machine intelligence systems with autonomy and unaided interaction with the outer world are a novelty we’ve never previously dealt with; we’ve had cars for over a century, but never a car that could decide what route it should take to optimize travel.
The danger here is less that autonomously piloted vehicles will suddenly turn into murderbots which stalk and kill their owners (outside of a bad Stephen King movie, anyway) and more that we’ll rapidly cede those intellectual capabilities to our machines and lose the ability to exercise them. Just as the skill of doing mental arithmetic has eroded in an era of ubiquitous calculation devices and credit that does not require making change in cash, the skill of navigating local geography could rapidly erode as people simply rely on an autonomous vehicle to take them wherever they direct it. When skills like writing or critical analysis become things done by machine intelligence ‘assistants’, the fundamental skills of thinking and basic cognition may erode as well without a concerted effort to maintain them. “The fault, dear Brutus, is not in our stars, but in ourselves, that we are underlings.” When we subordinate our fundamental intellectual tasks to machines, we may fall into the trap of becoming as mentally weak and indolent as modern industrial society has rendered many of us (who do not engage in athletic exercise for fun or competition) physically.
Stranger
Sorry I misinterpreted you. However, I still disagree with your second paragraph. When that Google Go machine defeated the grand master, the only programming was creating a learning machine. As far as I know, the programmers have no idea how the system developed the various strategies, nor could they reproduce those strategies using more traditional programming methods. It’s good programming, in that the learning algorithms were good, but they didn’t know how to program it to play Go – it figured that out itself. And, that’s not AI either.
And, my phone did assess the world in some sense – I agree that it’s programmed to do exactly that, though.
You were responding to my response to another poster, who seemed to claim that an AI couldn’t exhibit initiative or will. I think that’s false. Since you seemed to be arguing with me about it, I assumed you agreed with the original person I responded to. How is it relevant whether the initiative or will is programmed in by a designer or comes out naturally from a learning algorithm? I guess I don’t understand your first response to me, now that I look again.
I’m not sure whether you’re agreeing with me but explaining the current state of the art and how far off we are, or you’re disagreeing with me, or what. So, my only comment is that empathy is an emotion – understanding the feelings of others. So, saying an AI psychologist wouldn’t need emotions but would need empathy doesn’t make sense to me.
I don’t disagree with you that any of this is really far off. I just think that, in principle, it’s not an impossible task.
I think that’s the very definition of AI.
I didn’t take the entire context of your post into account so was responding to it at face value. I still maintain that your phone doesn’t have initiative or will. It has never had an original thought. It is an automaton that is following a set of instructions.
We can talk about whether a machine can be made to have the capacity for initiative and will comparable to a human, but then we need to debate whether even a human has initiative and will, or whether we ourselves are just massively complex finite automata that act with complete determinism.
Fair enough, but pretty weak AI. The OP is talking about a much stronger version of it, I think.
I feel like we very much agree on all of this. My complaint with other posters is that they seem to be saying that computers would never be able to do these things that humans currently can. I say that if they think that humans have initiative and will, they should agree there’s no reason computers couldn’t have that, at least in principle. However, if humans are simply automatons (and I agree that we may be), then there really, really is no reason why computers couldn’t act similarly.
If you want to get picky (and on this board I think it is practically a requirement :)), it would be the definition of machine learning (ML); however, AI and ML have become nearly synonymous because most, if not all, of the best approaches to AI right now generally involve some form of ML. But, technically, there is a difference.
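To put the distinction in code (a toy sketch of my own, not a formal definition), the first classifier below is “AI” in the old-fashioned, hand-coded sense - its behavior is fixed entirely by the programmer’s rules - while the second is “ML”: its rule is derived from labeled data. The messages and the threshold scheme are invented purely for illustration.

```python
# Two toy spam detectors: one with hand-written rules (classic "AI"),
# one that learns a threshold from labeled examples ("ML").

def rule_based_is_spam(text):
    # Behavior fixed entirely by the programmer.
    return "free money" in text.lower() or text.isupper()

def train_threshold(messages):
    """Learn how many '!' characters, on average, separate spam from ham."""
    spam = [m.count("!") for m, label in messages if label == "spam"]
    ham = [m.count("!") for m, label in messages if label == "ham"]
    return (sum(spam) / len(spam) + sum(ham) / len(ham)) / 2

training_data = [
    ("Win free money now!!!", "spam"),
    ("BUY NOW!!!!", "spam"),
    ("Lunch at noon?", "ham"),
    ("Meeting moved to 3pm", "ham"),
]

threshold = train_threshold(training_data)

def learned_is_spam(text):
    # Behavior derived from data rather than written by hand.
    return text.count("!") > threshold

print(rule_based_is_spam("FREE MONEY"), learned_is_spam("Act now!!!!!"))
```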
I am not going to debate mechanism (the belief that natural wholes are like complicated machines or artifacts) or its two main branches, universal mechanism and anthropic mechanism. In my opinion, self-evolving living things are more sophisticated than machines, but if you believe that human beings are complex machines, fine. On the other hand, I don’t see why one would contest a computer’s ability to act like an automaton - after all, computers are machines.
What I doubt is computers’ ability to emulate human beings. The most sophisticated computer ever manufactured cannot even dream of reaching the complexity of a single-celled organism. The human body consists of tens of trillions of cells and carries roughly as many microbial cells again in the intestines, sometimes called our second brain. The gap between the complexity of a computer and that of a human being is so astronomical that to imagine computers endowed with the biological traits of humans is simply science fiction.
Do the Japanese robots actually have something akin to subjective emotions, or are they more mimicking the body language of emotions? I’ve seen robots that do the latter (that make facial expressions of remorse or shame), but can that alone be used to curtail an AI’s behavior so that it remains pro-social?
Also, is China finally starting to come into its own in AI research, or is it still mostly the West and Japan leading the research?
Empathy itself is not an emotion but an ability to comprehend and reflect or complement the emotions of others. There are plenty of emotional responses that are not empathic; in fact, the inability to properly regulate emotion, or expressing strong emotions such as anger or fear inappropriately and in a way that unintentionally causes anxiety in others, is the opposite of empathy. A truly empathic person can actually act as an external regulator by soothing emotional outrage (sometimes to their own detriment, as they can become a facilitator for acting out) even if they don’t share that emotion themselves. It may be possible for a machine intelligence system to simulate emotional responses, perhaps even well enough to fool a person interacting with it on a casual basis, although I think doing so to a degree that would be considered empathic is unlikely at best, and I think artificial intelligence “psychologists” are a very long way off, particularly given that the field of psychology is essentially pseudoscience (and psychopharmacology and psychiatry are barely better at consistently predicting or modifying behavior).
However, actually having the internal experience of an emotion is (likely) not feasible without recreating all of the lower-level architecture associated with the affective systems of the vertebrate brain, and that is not an approach being applied by anyone working in artificial intelligence and machine cognition, nor do I think it likely that it can be simulated in software on top of digital hardware; while the brain works on essentially physical and biochemical mechanisms without any apparent inexplicable processes or élan vital, the complexity and inherent plasticity of the brain make it essentially impossible to be simulated in toto by a deterministic finite state machine. The brain is such a vastly interconnected system (the notions that you only use “10% of your brain” or that certain types of cognitive activities are strictly localized are not true) that simulating it on anything much less complex than the actual brain itself is an essentially pointless exercise, and even a simulation as complex as the neural connectome does not have sufficient complexity to represent the other interactions within the brain.
While I think machine cognition on digital hardware can develop at least a primitive form of sentience (and arguably already does, at least in the sense of being able to self-locate within its environment) and can find or make patterns that are not apparent to the human brain, I expect that at some point artificial intelligence will have to migrate over to some kind of synthetic neurobiological substrate–in essence, an artificial brain–that operates very much like the brain in order to move beyond the limitations of heuristic algorithms based upon neural network approaches and achieve any practical degree of sapience. Whether emotion is actually necessary for sapience is unclear; in our brains, emotions serve as a kind of pre-cognitive modifier which can predispose you to interpreting sensory information in a certain way or put you in a certain “frame of mind” to think about a problem. However, they often cause a lot of problems, particularly when there is emotional dysregulation or mental illness, such as grief, anxiety, or depression, which interferes with higher cognition. Whether we could make an artificial brain that could produce human-like emotions is an almost complete unknown, given how little we understand about how our brains form and regulate affect.
Stranger
For robotics, from what I’ve seen, the Japanese are head and shoulders above the rest of the world. Some of their stuff is just amazing. They showed us an emotive robotic conductor, and they told us that with the initial version the human musicians wouldn’t follow it very well (if at all); however, once they made it emotive, the musicians followed it readily. The big push right now for them is desire and intention, and I’m pretty confident that if they’re at the point where they think they should be looking at this in a serious way, they’re probably going to make some real progress. So yes, the current state of the art is about mimicry of emotional expression, by associating an expression with a feeling and associating some dialogue with a feeling, so the robot can tie it all together. You mention China, and China is a strange beast, because there is some great work coming out of there but also some stuff that makes people leery. But China, as China does, is definitely pushing to be a big leader in science and trying very hard to get some big conferences to come to China. They’re trying very hard to attract western scientists (I’m a nobody and I’ve had two serious job offers from China, both in Beijing, so I turned them down… too much pollution).
With respect to desire/intention, I actually wanted to ask the keynote speaker what he believes is the difference between an AI simulating an emotion and actually having one (such as a desire or intention). Sadly, my question was next and we ran out of time. Nooooooooo!
I would really hesitate to classify empathy as an emotion. It doesn’t behave like any other emotion I can think of, and it seems more like a method or skill for indirectly perceiving emotions. What does empathy have, that makes you classify it as an emotion?