What are your qualifications for calling my pointing out that it is impossible to prove a negative, and that the burden of proof lies with the one making an extraordinary claim, nonsense? It’s basic logic.
Yes, the goal of AI is to further understanding of how self-awareness works by mimicking it with a machine. That’s an illusion of self-awareness, sentience, and consciousness, not actual self-awareness, sentience, and consciousness. The only place sentient machines even come close to existing is in science fiction.
Dissonance, please could you describe what essential feature(s) an organic brain has (or requires in order to be able to really think) that cannot ever be realised in some kind of artificial system?
If magic were known to exist, and if it was exhibited routinely by systems not fundamentally different from you, the way consciousness is known to exist and is exhibited by systems not fundamentally different from computers, then yes, that’s likely what it means. Otherwise, the analogy is just bullshit.
No one has claimed we’ve done it. However, given that we are self-aware, and given that the platform for our self-awareness is purely physical, you’d need a good reason to claim that it is impossible that a purely physical computer can do it. Demonstrating a lack of knowledge of advances in heuristics, such as genetic algorithms and simulated annealing, doesn’t cut it. Saying that computers can do only what they have been explicitly programmed to do shows a profound ignorance of the field.
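Since heuristics like genetic algorithms and simulated annealing came up, here is a minimal sketch of the latter (purely illustrative Python, not any particular library): the programmer writes down only a scoring rule and a way to perturb a candidate, yet the tour the program eventually settles on was never spelled out anywhere in the code.

    # A minimal simulated-annealing sketch; illustrative only.
    # The code specifies only how to score a candidate tour and how to
    # perturb it; the tour it eventually finds was never written down
    # by the programmer.
    import math
    import random

    # Ten cities at random coordinates; the task is to find a short round trip.
    random.seed(1)
    cities = [(random.random(), random.random()) for _ in range(10)]

    def tour_length(order):
        return sum(math.dist(cities[order[i]], cities[order[(i + 1) % len(order)]])
                   for i in range(len(order)))

    order = list(range(len(cities)))
    temperature = 1.0
    while temperature > 1e-4:
        # Propose a small change: swap two cities in the tour.
        i, j = random.sample(range(len(order)), 2)
        candidate = order[:]
        candidate[i], candidate[j] = candidate[j], candidate[i]
        delta = tour_length(candidate) - tour_length(order)
        # Accept improvements always, and worse moves with a probability
        # that shrinks as the temperature drops.
        if delta < 0 or random.random() < math.exp(-delta / temperature):
            order = candidate
        temperature *= 0.999

    print("Best tour found:", order, "length:", round(tour_length(order), 3))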
If the goal of AI was to simulate awareness, there are trivial ways of doing this. Even Eliza fooled people when it was first written, and it was not intended to. But anyone who has ever written a simulator knows that you must understand the thing you are simulating to do a reasonable job of it, so understanding models of awareness is essential to creating computers that seem aware.
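To make the Eliza point concrete, here is a toy responder in the same spirit (nothing like Weizenbaum’s actual script, just the flavour of the trick): shallow pattern-matching and canned echoes, with no model anywhere of what is being said.

    # A toy Eliza-style responder; a sketch of the general trick only.
    import random
    import re

    RULES = [
        (r"\bI am (.*)", "Why do you say you are {0}?"),
        (r"\bI feel (.*)", "Tell me more about feeling {0}."),
        (r"\bmy (.*)", "Why does your {0} matter to you?"),
    ]
    DEFAULTS = ["Tell me more.", "Why do you say that?", "How does that make you feel?"]

    def respond(text):
        # Echo back a fragment of the input if any pattern matches,
        # otherwise fall back to a stock non-committal reply.
        for pattern, template in RULES:
            match = re.search(pattern, text, re.IGNORECASE)
            if match:
                return template.format(match.group(1).rstrip(".!?"))
        return random.choice(DEFAULTS)

    print(respond("I am worried about my job"))  # Why do you say you are worried about my job?
    print(respond("Nothing matches this"))       # one of the stock replies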
Whether a computer which can pass a good Turing test is really self-aware is as unanswerable as whether the guy walking past you in the street is.
Except if you were dealing with one of the lower-end IBM 360 computers, which were microprogrammed. You could then change the microcode to make the registers mean something totally different, and give the computer a completely new instruction set. Today, there are processor cores which can be programmed the same way, sold by a number of companies. Even better, if you have implemented your computer with a bunch of FPGAs, you can totally change the underlying hardware by loading a new pattern into the FPGAs’ memory.
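For anyone who hasn’t run into microprogramming, here is a toy sketch of the idea in Python (obviously nothing like how real microcode is expressed): the same fetch-and-execute loop runs whatever the opcode table says, so loading a different table gives the “machine” a completely different instruction set.

    # A toy microcoded machine; the fixed "hardware" is the run() loop,
    # and the meaning of each opcode lives in a swappable table.
    def run(program, microcode, registers):
        for opcode, a, b in program:
            microcode[opcode](registers, a, b)
        return registers

    def add(r, a, b): r[a] = r[a] + r[b]
    def copy(r, a, b): r[a] = r[b]
    def mul(r, a, b): r[a] = r[a] * r[b]
    def swap(r, a, b): r[a], r[b] = r[b], r[a]

    isa_one = {1: add, 2: copy}  # one interpretation of the bit patterns
    isa_two = {1: mul, 2: swap}  # "reload the microcode": they now mean something else

    program = [(1, 0, 1), (2, 2, 0)]         # the very same stored program...
    print(run(program, isa_one, [3, 4, 0]))  # [7, 4, 7]
    print(run(program, isa_two, [3, 4, 0]))  # [0, 4, 12]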
There was even some thought about modifying the inherent structure of a computer to match the characteristics of the workload it was seeing, though I don’t think anyone ever did this. There was definitely a lot of work on customizing base architectures for given users; Burroughs had a whole line of machines like this 30 or 40 years ago.
There is a lot more to the internals of computers than you seem to know. And, in any case, at the base level our brains are all electrical impulses. The interpretation of them, just like the interpretation of a collection of bits, is what is important.
Even without any direct hardware manipulation, it’s certainly correct to speak of a program as a machine in and of itself – after all, what programming does is to make a universal machine emulate a special-purpose machine (or, of course, a different universal machine), to the point that there is no functional difference between the emulated and the ‘real’ (though often only hypothetically existing) machine. That the machine is constructed from bits of code rather than from wheels and gears makes no difference with regard to its functional or mathematical description. If we don’t consider machines made from wood and operating with marbles fundamentally different from machines made from silicon and operating with electric impulses, there is no reason to consider machines made from code something fundamentally different, either.
So then, I’ll echo the question in a slightly different way from the other people here.
Say you have a true Turing test–that is, you are communicating via something like instant messenger with two parties, one of which is a computer and one is a human.
If you and a panel of experts (psych, sociology, computer science, linguistics, whatever) cannot tell the difference at a rate better than chance, then the computer can be said to have passed the Turing test.
Given that example, what criteria would you use to say that the human has sentience and the computer does not? How are you defining sentience, and what property of human biology specifically grants it to us that a non-biological intelligence cannot have?
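As an aside, “at a rate better than chance” can be made precise with a simple calculation; here is a quick sketch with made-up numbers.

    # Treat each judge's verdict as a coin flip and ask how surprising the
    # observed number of correct identifications would be under pure guessing.
    # (Illustrative numbers only.)
    from math import comb

    def p_value_at_least(correct, trials, p_chance=0.5):
        # Probability of getting 'correct' or more right by guessing alone.
        return sum(comb(trials, k) * p_chance**k * (1 - p_chance)**(trials - k)
                   for k in range(correct, trials + 1))

    trials, correct = 100, 58  # 100 judgements, 58 correct identifications
    print(round(p_value_at_least(correct, trials), 3))  # roughly 0.07: not clearly better than chance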
Ooo, you can make the contents of the registers mean entirely different things by instructing the computer how to interpret their contents. This differs from what I said how, exactly?
Now try to explain that without anthropomorphizing the computer through use of terms like “interpret” and phrases like “tells the computer what to do.”
No computer yet built has a mind which can interpret input. No computer yet built can be “told what to do.” To be told what to do would require understanding what one is being told. No computer yet built can do this.
I’m amazed, given your position on the nature of AI, that you would be so emphatic in your insistence otherwise.
Since you missed my argument the first time around, here it is again:
When a program is compiled, components inside the computer have become configured in a way that allows us to use their aggregated structure as a tool for carrying out some task we’re interested in.
Any set of components configured in a way that allows us to use its aggregated structure as a tool for carrying out some task we’re interested in is a machine.
Therefore when a program is compiled, a machine is thereby built inside the computer.
The argument is valid–the conclusion follows inexorably from the premises. So which premise do you disagree with and why? (Just adding a “not” to a premise accomplishes nothing.)
You might notice that the analogy illustrates the ability to fool the observer into seeing something that isn’t actually there. Sentience outside of things that look remarkably like you and me doesn’t exist and has never existed that we are aware of. By your qualifications sentience from anywhere else is just bullshit.
What criteria would you use to prove that a bee is alive or that a toaster isn’t in fact a sentient being silently screaming due to the inability to interact with its environment? I don’t know what exactly it would take for me to be convinced that a computer is a living, sentient, self-aware being, but knowing what a computer is and how it works, writing a program for a chat-bot that could fool people half of the time into choosing ‘not-bot’ rather than ‘bot’ when deciding if it’s a human on the other end is a long, long, long way from proof that this program or the computer running it have achieved actual self-awareness.
Arg. You seem to think writing a chat-bot that could fool people into thinking it’s a human is a trivial exercise. Have you ever conversed with an actual chat-bot?
In order to actually “mimic consciousness” you have to do a hell of a lot better than repeat “Tell me more” and “Why do you say or the computer running it have achieved actual self-awareness?”.
As far as I can tell, you think it’s perfectly possible to create an artificial system that can converse with you in a human way on any topic you like, but the minute you find out that it’s an artificial system you’ll know for sure that the system cannot be conscious.
Except, what exactly is it about the human brain that makes you sure it’s conscious? You have a brain, and you think you’re conscious? Your brain is constructed of atoms organized into molecules organized into cells organized into tissues. Your brain is a machine built out of atoms. There’s no pixie dust anywhere in it. So how can your brain be conscious?
You just can’t stop digging. Take a basic course in programming; this is the terminology used. I am not anthropomorphizing the computer in any way.
Since you missed it the first time around: None of this is true. When a program is compiled into machine code, no components inside the computer have been configured as a tool for carrying out a task. A set of instructions understandable by the machine has been created from programming code, which is a language (more or less) understandable by the human who wrote it.
Since components haven’t been configured as a tool for carrying out some task, the compiled program is not a machine. Again, it is a set of instructions.
Therefore, when a program is compiled a machine is in fact not thereby built inside of the computer.
The argument is invalid because the premises upon which it is built are factually incorrect.
But how do the instructions work? Remember that things don’t have a purpose; instead, people have a purpose for things.
You agree that a mechanical calculator is a machine, right? But an electronic calculator is just another way of implementing the same machine, just using different sets of components. So rather than having a mechanical arm that can be flipped one way or another way, we have a switch that can be open or closed. To add two numbers together we flip a bunch of switches open or closed, and then read the results somehow, like attaching a light to the switch so that if it is open the light is off and if closed the light is on, then we can read the lights as a binary number. Or am I mistaken?
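Here is the same picture spelled out in code rather than switches and lamps, just to make sure I’m describing it right (purely a sketch): the “switches” are 0/1 values, the and/or/xor combinations stand in for the wiring, and the printed row of 0s and 1s is the “lights”.

    # Binary addition built from nothing but switch-like 0/1 values.
    def full_adder(a, b, carry_in):
        # One column of binary addition, using only and/or/xor of switches.
        total = a ^ b ^ carry_in
        carry_out = (a & b) | (carry_in & (a ^ b))
        return total, carry_out

    def add(x_bits, y_bits):
        # Ripple-carry addition over rows of switches, least significant first.
        result, carry = [], 0
        for a, b in zip(x_bits, y_bits):
            s, carry = full_adder(a, b, carry)
            result.append(s)
        result.append(carry)
        return result

    six = [0, 1, 1]    # 6, least significant switch first
    three = [1, 1, 0]  # 3
    lights = add(six, three)
    print(list(reversed(lights)))  # [1, 0, 0, 1] -> the lights read 9 in binary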
Guess I must be on ignore or something. I’d like to know what property it is of the makeup of an organic human brain that is essential to sentience, that has no possible artificial analogue.
That terminology is used as a frankly anthropomorphizing shortcut in order to express exactly what I’m saying.
If no components inside the computer have been thus configured, then the only explanation for how the computer actually does carry out the task is magic.
No task can ever be carried out unless it be by either
A) Magic
B) Components configured in a particular way.
So then you agree with many Strong AI researchers and most SDMB participants in this thread that computers can understand things? Like, really understand, and not just fool us into thinking they understand? Great! So why were we arguing?
When you get a chance, look up the definitions (in the context of logic and argument) of the terms “sound” and “valid.” Bit of a technical distinction, but it will help you make better arguments.
I don’t think it really helps to point out that Dissonance is using semantics that appear to contradict his/her own position. A computer ‘understanding’ machine code is not the same phenomenon as a person understanding a sentence.
However, I don’t accept that just because the parts of a computer are impersonal, they can’t form the environment for emergence of consciousness. As I and others have said over and over in this thread, the same argument can be made (and is wrong) about the impersonal components of the organic human brain.
One more time, before I give up. Dissonance, please can you say what property is present in the material and configuration of the organic human brain that is essential to consciousness/real intelligence in a way that could never occur in the configuration of artificial materials?
Absolutely it helps. He has not just used these terms, he’s insisted on using them and insisted that it’s incorrect not to use them. For this reason, it’s important for us to get clear about just exactly what he means by these terms–since, as you said, they seem prima facie to exactly contradict the position he’s arguing for.
He’s made a lot out of a notion that he understands how computers work and that others in the thread don’t. Fair enough–perhaps he has some authority on the issue in that sense. But if he can’t explain what is meant by the terminology he’s using, then it is to be doubted that he understands how computers work any better than anyone else here–and a major bit of support for his argument is eliminated.
Fair enough; I’ll stand corrected on that matter. It looked to me as though it was heading in that unfruitful direction where the parties in a debate insist that their opponents think or mean something other than what they say they do. Sorry for the interruption.
I agree there is some disparity between the assertions of qualification Dissonance has made and the thrust of his/her argument. Programming is a big field, and expertise in one area of it does not necessarily imply understanding of the methods and challenges in another. And I’d say that division or contrast is nowhere sharper than when cognitive modelling, AI, etc. is one of the areas being compared.
I’ve read the entire thread, but this was the claim which caught my eye. Frankly, unlike the OP, I don’t much care whether computers have self-awareness (whatever that means), nor whether they can pass the Turing test. What matters to me is whether they can think, and one important piece of evidence of that would be whether they can come up with something original which exceeds their programming. You’re saying they already can, at what, it has to be conceded, is still a fairly early state of the art. Could you please cite a couple of online articles describing this? Mangetout and Half Man Half Wit say similar things, so cites from you also would be welcome. Or from anyone else, for that matter.
To be clear, although we’re in GD, this is a GQ request. I don’t have the background to challenge you or anyone else on this subject. I’m just interested and curious. Middling technical-but-accessible-to-the-layman articles preferred (a la Hawking on cosmology), but if all you have that fits the bill is hardcore, I’ll manage. Thanks.
The links HMHW provided in post #71 are a good start.
It’s important to understand what we mean when we talk about creativity and programs exceeding their programming.
What we’re definitely not saying is that we can design a system for which the outputs cannot be fully explained and understood as results of the internal states of the system - which states follow predictable rules. That might sound like it restricts AI to slavish observance of its explicit programming, but it needn’t.
And the reason for this is that, if thought/consciousness in humans is produced entirely by the human brain, then the outputs (i.e. the thought, the consciousness) are also fully explainable and understandable as results of the internal states, which also follow predictable rules. We can’t scrutinise the brain in that kind of detail now (maybe ever), but unless we’re going to introduce metaphysics (I don’t mind, but if we do, let’s be clear that we have), then we have to regard the brain as a machine that produces thought.
And if one kind of machine can do it, then why not another kind of machine, provided that second kind is capable of similar kinds of basic operation?
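A trivial toy illustration of the “predictable rules, but not slavish observance of explicit programming” point above (it has nothing to do with consciousness; it only shows that the two ideas are not the same claim):

    # An elementary cellular automaton (rule 110), purely illustrative.
    # Every state follows from the previous one by a completely fixed rule,
    # yet nothing in the code spells out the patterns that end up appearing.
    RULE = 110
    WIDTH, STEPS = 64, 32

    def step(cells):
        # Each new cell is determined only by its three neighbours and the rule number.
        return [((RULE >> ((cells[(i - 1) % WIDTH] << 2)
                           | (cells[i] << 1)
                           | cells[(i + 1) % WIDTH])) & 1)
                for i in range(WIDTH)]

    cells = [0] * WIDTH
    cells[WIDTH // 2] = 1  # start from a single live cell
    for _ in range(STEPS):
        print("".join("#" if c else "." for c in cells))
        cells = step(cells)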