Desire and intention are at best fringe emotions. When asking basic questions about emotion, using those examples is probably going to lead to misleading or nonsensical conclusions. For general inquiry into how emotions operate, stick to the basic emotions: fear or anger, for example.
What I mean is, I think you're trying to generalize from a couple of edge cases that may be more accessible from your point of view, but generalizing from edge cases might be particularly unhelpful.
And maybe I’m wrong about that, because maybe I’m missing a lot of context.
Clarifying my last post: desire and intention are somewhat "nice to work with" because they are goal-directed, but I think that very goal-directedness tends to obscure the question rather than simplify it or facilitate discussion. Or, OK, sure, it facilitates A discussion, but likely not the right one.
This discussion on empathy seems like a hijack to me, but here are the first two paragraphs from the definition section of the Wikipedia page on empathy:
Maybe your position is that emotional states are different from emotions? In any case, I don't see how you can have empathy without having emotions, since one of the primary definitions is that you experience emotions that match another's.
Anyway, this is an interesting thread, but this really seems like a hijack to me.
I agree with you, and the keynote speaker might have agreed with you too; I'm not going to speak for him, of course. But their end goal is emotional robots; they're just starting with desire and intention as a first step toward that. Since he's a world leader in human-like robots, I wanted to ask him when he thought he would have that Eureka moment. What, to him, would be the dividing line between a robot having an emotion and simulating an emotion? Again, sadly, I didn't get to ask him.
I'd say Alan Turing already figured that out, and AI proponents just don't want to face it: the dividing line is when humans freely interacting with those robots and with other humans can't tell the difference between robot conversation and human conversation (tested by text communication, to allow for the fact that the particular robots might not yet look convincingly human).
No, I don't think this is as resolved as you think. Even Turing's original paper discusses this: the Turing Test is not necessarily the dividing line between consciousness and its absence (he says something to the effect that it does not answer the mystery of consciousness). In fact, for at least the past two decades there has been criticism that the Turing Test sets the bar too high (that a true thinking machine could be something much simpler). Certainly with respect to emotion, it is entirely possible to have an emotional machine that could not pass the Turing Test, so I don't think you can conclude that the Turing Test is definitively the bar separating a simulation of emotion (let alone intelligence) from having an emotion. What is very likely true is that any machine that can pass the Turing Test is almost certainly a thinking machine, and would almost certainly need some deep understanding of emotion in order to discuss such matters in a way indistinguishable from a human.
My belief is that numerical computers will never be self-aware or intelligent, and will never exercise initiative.
However, they may well become, or already are, better than humans at many tasks. I am told that chess is not a game of strategy. Chess masters internalize all of the possible board patterns and the correct move for each one. A computer does that very well. It was even done with a matchbox-and-colored-beads "computer" for Tick-Tack-Toe (http://www.atarimagazines.com/v3n1/matchboxttt.html).
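For what it's worth, the bead-and-matchbox idea (Donald Michie's MENACE machine) is simple enough to sketch in a few lines. Here's a rough Python illustration of the bead-reinforcement trick; the names, starting bead counts, and board encoding are invented for the example, not taken from the original device:

```python
import random

# One "matchbox" per board position; integer counts stand in for colored beads.
boxes = {}  # board string, e.g. "X O  O X " -> {move index: bead count}

def legal_moves(board):
    return [i for i, c in enumerate(board) if c == " "]

def pick_move(board):
    """Draw a bead at random: moves with more beads are proportionally more likely."""
    if board not in boxes:
        boxes[board] = {m: 3 for m in legal_moves(board)}  # seed each move with 3 beads
    moves, weights = zip(*boxes[board].items())
    return random.choices(moves, weights=weights)[0]

def reinforce(history, won):
    """After a game, add a bead to every move played if we won, take one away if we lost."""
    for board, move in history:
        boxes[board][move] += 1 if won else -1
        boxes[board][move] = max(1, boxes[board][move])  # a box never runs out of beads
```

Played against itself or a person many times, the bead counts drift toward the better move for each position, which is all the "internalized patterns" amount to.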
The usefulness of a super-intelligent computer relies on the quality of its program and the quality of its education. Education involves feeding the computer data along with the proper conclusions to reach from that data set. Just like school. Given sufficient data the computer will generalize among the data sets and reach conclusions based on what it was taught.
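To make the "data plus the proper conclusions" point concrete, here is a toy Python sketch. The data, labels, and the nearest-neighbor rule are invented purely for illustration; the point is only that the program is "taught" labeled examples and then reaches a conclusion about an input it has never seen:

```python
# "Education": each training example is a data point plus the proper conclusion (label).
training = [((150, 50), "child"), ((180, 80), "adult"),
            ((160, 55), "child"), ((175, 90), "adult")]

def classify(sample):
    """Generalize: label a new sample with the conclusion of the closest training example."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = min(training, key=lambda pair: dist(pair[0], sample))
    return nearest[1]

print(classify((158, 52)))  # -> "child": a conclusion it was never explicitly given
```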
Is an altruistic intelligent computer possible?
Seeing how computers were only invented in the 1940s, we really can’t say what is or isn’t possible right now. Computers have advanced quite a bit in the last 70 years, and 70 years isn’t that long on universal time scales. In 70 years we’ve gone from the first computers existing, to wondering whether supercomputers will vastly outstrip us in all abilities in the coming decades.
But yeah, probably. If evolution can create altruistic, intelligent biological creatures then I don’t see why intelligent design of altruistic, intelligent computers is impossible.
For evolution to work, it must have a goal - produce numerous offspring.
A computer that had broad access to actual events would be educated by experience. It would still require fitness criteria in order to come to conclusions. Given those conditions a computer could function intelligently and altruistically. But, it would still be no more than a very big calculator.
An intelligent, intuitive, willful computer in a guided missile, when given its mission, might respond:
Absolutely. Anyone who thinks that it is impossible to design a self-aware, altruistic (or selfish, or both) and intelligent, human-equivalent computer is talking garbage.
However it is instructive to remember that it took evolution half a gigayear to get from single-celled organisms to the human brain; human designers aren’t going to replicate that chain of events in a few decades. Anyone who thinks that there will be human-equivalent AI this century is also talking garbage.
But there might be many, many types of AI developed in this century that are far more useful than human-equivalent AI. We don’t need to replicate human brains and bodies; human brains and bodies have ways of doing that already. What we need are competent, smart, user-friendly systems that can do things humans can’t. We’ve got a lot of those already, but we have hardly scratched the surface of the potential of smart matter yet.
It didn’t take half a billion years to evolve the human brain. Much of our brain evolution happened in the last 2 million years, when the brain went from 500cc to 1400cc.
According to Steven Pinker, in those 2 million years the brain could have tripled in size, shrunk back down to its original size, and repeated that cycle of growing and shrinking several times had intelligent design been used. So using intelligent design, the human brain probably could have been tripled in size in a few hundred generations, possibly less. Using intelligent design and selective breeding, certain agricultural animals like chickens and cows now possess traits that would have been 20-30 standard deviations above the norm just fifty years ago.
Point being the brain isn’t ‘that’ complex.
Also it’s only a matter of time until machines are better than us at everything. And only a matter of time after that before they are superhuman at it. The only question is when.
From what I understand, there's evidence that that's actually the way that human emotion works: We mimic emotion so well that we actually fool ourselves. The actual internal state is close to one-dimensional, expressing something like "how strong are our emotions right now", and it's only context clues that tell us which emotion it is. That's why the old trick of forcing yourself to smile works: If you're emotional, and you're smiling, your brain interprets that as meaning that you're happy. It's also why we get angrier with those we love: It's easier to turn love into hate than to turn apathy into emotion.
Then you were told wrong. For any sort of computer, silicon or meat, to hold all of the possible board patterns would require it to be many times larger than the Universe, even if each component were the size of a hydrogen atom. At best, you can store broad categories of patterns.
And eburacum45 has a point that there isn’t much work being done on making a computer that’s good at the things that humans are good at. There’s no point, because we already have humans. Instead, most AI work is focusing on making computers that are good at things that humans aren’t good at.
I disagree. There is a lot of funding and research into making robots do what humans already do well.
Hold a conversation, drive a car, walk, open doors, perform cognitive tasks, experience and comprehend emotions, understand social situations, etc. Those all come easy to us and there is lots of research into making robots do them too.
Within a certain period of time, and who knows how long it’ll be, there will be nothing biological humans can do better than machines. And the machines will still continue to get better, or at least be able to get better. There may be a limit on how much intelligence we need.
Same with cars and horsepower. You can keep adding horsepower and torque to a car, but to achieve the goals of a car, a certain amount is fine (and it depends on what goals the car is designed to achieve).
Maybe in 200 years we will have AI that is fifty times smarter than a human, but not ten million times smarter because we don’t need that level of intellect even though we could design one that smart. Same way a family sedan doesn’t need to have 1000 horsepower even though we could build one of those too.
Maybe we will find there is a limit on how much machine intelligence we need to achieve our goals.
I agree with the first and last paragraphs. I strongly disagree with the middle paragraph. The pace of biological evolution is completely irrelevant to the pace of technological development. Biological evolution is essentially random and isn’t directed. But the very concept of science as we know it is only a few hundred years old, and it has set in motion an exponential pace of development. Computers themselves are not much more than half a century old, and computers that can execute better chess strategies than virtually any human, understand natural language, and solve complex problems in specific domains of intelligence are hardly more than a decade or two old, if that. Each decade now sees far more advancement than any previous one, and the pace continues to accelerate.
If we’re far away from having computers with highly general human-like intelligence it’s only because, as you correctly note, we don’t need them so no one is trying to develop them. But we will have AI systems that greatly exceed human intelligence in a rapidly growing set of practical problem domains, with both the benefits and all the commensurate risks that futurists have noted. And we’ll have them well within the century, perhaps in just the next few decades.
Agreed, but the architecture of a self-aware computer will be far different from current numerical systems. So different that it will probably not be called a computer.
I disagree. It’s possible that some AI systems may eventually have different architectures from conventional digital computers, but digital computers are not fundamentally “numerical systems”. They are fundamentally information processors, and at the finest level of granularity they are symbol manipulators. At least some if not all of our mental processes also involve symbol manipulation and in that sense are computational. Self-awareness can be posited to arise from computational systems if one believes that it’s an emergent property of a sufficiently capable general intelligence.
At the finest level of granularity a computer is an ALU - a device that performs the four basic operations (plus a carry): ADD, AND, OR, XOR. The computer shuffles data through the ALU and tests the carry bit; everything else is software. These operations are all serial. At any instant there is a single defined state for the entire computer, and that state reflects nothing more than the combination (ADD, AND, OR, XOR) of two binary numbers and the state of the carry.
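Roughly the kind of thing I mean, sketched in Python rather than gates - this is only an illustration of a one-bit ALU slice and a ripple-carry add, not a model of any particular processor:

```python
def alu_bit(a, b, carry_in, op):
    """One bit-slice of the ALU: ADD (with carry), AND, OR, XOR on two bits."""
    if op == "AND":
        return a & b, 0
    if op == "OR":
        return a | b, 0
    if op == "XOR":
        return a ^ b, 0
    if op == "ADD":  # full adder: sum bit plus carry out
        s = a ^ b ^ carry_in
        carry_out = (a & b) | (carry_in & (a ^ b))
        return s, carry_out
    raise ValueError(op)

def add_words(x, y, width=8):
    """Anything wider is just the one-bit slice repeated, the carry rippling bit to bit."""
    carry, result = 0, 0
    for i in range(width):
        bit, carry = alu_bit((x >> i) & 1, (y >> i) & 1, carry, "ADD")
        result |= bit << i
    return result, carry

print(add_words(200, 100))  # -> (44, 1): 300 wraps around in 8 bits and sets the carry
```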
Your syllogism switches middle terms. Self awareness can be posited to arise from computational systems if one believes that general intelligence is a property of computational systems. The ALU is not intelligent. Increasing the rate and amount of data it processes doesn’t make it so.
The transfer function of a neuron is not fixed and it is not programmed. It depends on the current state of its inputs and its recent history. Neurons function in parallel, creating electrochemical waves that flow across the brain. There is no equivalent in current computer architectures.
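To illustrate the history dependence - not as a claim about real neurons, and with parameters invented for the example - here is a back-of-the-envelope leaky integrate-and-fire sketch in Python, where the same instantaneous input produces different behavior depending on what came before:

```python
def simulate(inputs, leak=0.9, threshold=1.0):
    """inputs: input current per time step; returns the time steps at which the cell fires."""
    potential, spikes = 0.0, []
    for t, current in enumerate(inputs):
        potential = potential * leak + current  # decaying trace of recent history + new input
        if potential >= threshold:              # fire, then reset
            spikes.append(t)
            potential = 0.0
    return spikes

print(simulate([0.3] * 10))       # steady weak input accumulates and fires: [3, 7]
print(simulate([0.0, 0.0, 0.3]))  # the same 0.3 arriving "cold" does not: []
```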
Responding to your three paragraphs above, in order:
On the first point, I'm not talking about how logic gates work. Logic gates don't define a computer architecture. A processor can be built with many different kinds of gates in different configurations for different performance levels, including with only enough hardware to support a microprogrammed instruction set, and they could all have the same architecture. The architecture is defined by the interface it presents to the software, i.e., the instruction set. And the instruction set fundamentally processes symbols, as I said before. It retrieves and stores bytes to which the program assigns semantic meaning. This process is not intrinsically "numerical"; it consists of strictly syntactical operations on symbols that happen to be represented as strings of bits.
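As a concrete (and purely illustrative) example of the same point - the bits have no intrinsic numerical meaning; the program assigns it - nothing below is specific to any real instruction set:

```python
import struct

raw = bytes([0x48, 0x69, 0x21, 0x00])  # four bytes sitting in memory

print(raw[:3].decode("ascii"))      # read as characters: "Hi!"
print(struct.unpack("<I", raw)[0])  # read as a little-endian 32-bit integer: 2189640
print(struct.unpack("<f", raw)[0])  # read as a 32-bit float: some unrelated tiny number
```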
On the second point, right, none of the hardware components are intelligent. Neither is an individual neuron in the brain. But in fact the data the computer contains – in which I include the program code – and the rate at which it processes it are prerequisites to intelligent behavior. A computer running a suitable program can, in fact, be regarded as an intelligent system if it does intelligent things, like play a grandmaster level game of chess or win at Jeopardy. And there is, indeed, strong support for the idea that general intelligence is inherently computational.
On the third point, this is the fallacy of type-identity theory which holds that the minutiae of the brain’s mechanics are somehow relevant to explaining cognition and intelligence. Central to the widely accepted computation theory of mind is the opposite idea, the principle of multiple realizability, which says that cognition operates on symbolic computational principles and that these computational paradigms are independent of any particular implementation. It’s not saying that the brain is literally a digital computer, but rather that its cognitive functions are capable of being modeled on one.
I'm sorry, this is totally incorrect. If you have been involved in computer design, or have seen a floorplan for a microprocessor, you know that the ALU takes up very little space and is relatively simple. Not to mention that multiplication has not been done by repeated addition for maybe 50 years; CPUs have hardware multipliers built in.
Most of the area in processors today goes to the cache and the hardware to deal with on-chip and off-chip caches, instruction execution logic, instruction sequencing logic, the interface with the rest of the world, the pipeline, and so on.
The state of the computer has very little to do with the ALU, since the registers which are part of the ALU are a trivially small set when compared to registers and memory in the rest of the machine.
As **wolfpup** correctly said, symbolic manipulation is what is mostly happening, not equations. The large data-analysis programs I wrote had only a tiny amount of math in them. And the control flow of the program is more important still.
Intelligence is not going to arise from computer systems in the sense that if you keep building bigger ones they will someday become intelligent. That’s a fallacy way too many sf writers have committed to get a character who is a computer. That does not mean that a computer designed and programmed to become intelligent - possibly through self analysis - won’t be.
As analog-to-digital converters show, you can perform actions based on any level of precision of the inputs, depending on how wide you make the conversion. Most of what happens inside computers happens based on the state of the subsystem, which of course can include history. Gates work in parallel, of course. And while there are no electrochemical waves, there are plenty of global signals that go all through the processor - controlled, of course, since they are a pain to lay out and get the timing right on.
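A trivial sketch of that first point, with arbitrary values and bit widths: precision is just a question of how many bits you decide to spend on the conversion.

```python
def quantize(x, bits, full_scale=1.0):
    """Map an 'analog' value in [0, full_scale) onto an n-bit integer code."""
    levels = 2 ** bits
    code = int(x / full_scale * levels)
    return min(code, levels - 1)

for bits in (4, 8, 12, 16):
    print(bits, quantize(0.637, bits))  # the same input, resolved ever more finely
```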
Radio stuff is the essence of analog - but most radio processing inside chips today is done digitally, since that is easier to implement and smaller than inherently analog subsystems. That’s an existence proof of how analog stuff like what goes on in our brain can be modeled by digital stuff.
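In the same spirit, here is a miniature version of that existence proof: a smoothing job classically done by an analog RC low-pass filter, done purely with arithmetic on samples. The coefficients are arbitrary, chosen only to show the shape of the computation:

```python
def fir_filter(samples, taps=(0.25, 0.5, 0.25)):
    """Convolve the input samples with a short set of filter coefficients."""
    out = []
    history = [0.0] * len(taps)
    for s in samples:
        history = [s] + history[:-1]  # shift register of the most recent samples
        out.append(sum(t * h for t, h in zip(taps, history)))
    return out

buzzy = [0, 1, 0, 1, 0, 1, 0, 1]  # a rapidly alternating input
print(fir_filter(buzzy))          # the fast wiggle is smoothed toward its average
```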
A tiny quibble - you are speaking of instruction set architectures here. But there are other levels of architecture which can be very different for the same ISA, with System/360 being the classic example.
I’m a former architect so I get to quibble. The rest is spot on.