As to BurnMeUp:
I’m sorry, but in a good deal of this post of yours I cannot find anything that adds up to saying anything at all, and where you do say something about computers/mechanisms, it does not at all represent the present state of the technical art. To wit:
/
Nano, you are missing the point entirely. I was merely trying to demonstrate the fact that consciousness is outside the grasp of non-sentient beings.
_______
What does this say? That you are defining non-sentient beings as not having consciousness? My dictionary defines ‘sentient’ as “Having sense perception; conscious.”
/
Any organism (or machine for that matter) that requires information to be input into it, outside its realm of control to do all of its “cognition”
_______
Now how the devil am I supposed to make anything of that phrase (from the first comma to the second comma, which is missing at the end of what’s written above)? If you’re trying to say that software today is not sophisticated enough to get its own information from sensors (= senses), and to modify its routines to ever more sophisticated complexities dependent upon what it senses, be it weather parameters, where its wheels are or what you’re telling it, you are at least some 50 years out of date.
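To make the point concrete, here is a minimal sketch, in Python, of a program that takes in its own “sensory” input and modifies its own rule of behavior accordingly. The sensor readings, threshold, and blending factor are all invented for illustration:

```python
# Hypothetical temperature samples "sensed" over time.
readings = [18.0, 19.5, 22.0, 24.5, 26.0]

threshold = 20.0   # initial, built-in setting
alpha = 0.5        # how strongly new input reshapes the rule
decisions = []

for r in readings:
    # act on the current rule...
    decisions.append("cool" if r > threshold else "heat")
    # ...then let the sensed value itself reshape the rule
    threshold = (1 - alpha) * threshold + alpha * r

print(decisions)               # the behavior produced along the way
print(round(threshold, 2))    # the rule has drifted toward what was sensed
```

Nothing here was “input from outside its realm of control”: the program acquired the readings itself and rewrote its own decision criterion as it went.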
/
is not really a consciousness. Once you rely on someone else to provide all of your sensory input,
________
Machines can be set up to acquire whatever portion of their sensory input is appropriate. You must realize that.
/
and you are regulated by a set of rules, no matter how many “layers deep” you are still nothing more than a non-thinking entity that regurgitates premade forms, no better than a calculator.
_______
You are here simply defining “thinking entities” (which presumably is supposed to include you) as not analyzable into layers of control. Look, take a squid, whose nervous system, I believe, has been totally mapped as a few tens of thousands of neurons. You set genetics going (OK, you have to backtrack a bit, because we’re not direct descendants of the squid, but you get the idea): your brain is nothing but some 100 billion neurons, and it functions on layers of control built up from your past experience slammed onto a genetic kernel. You think you have magic in your head? So we haven’t built robots anywhere near as sophisticated as our brains, but that leaves only an issue of degree. Robots today can train to a pretty sophisticated level. How are you to pick a magic number of layers of control that separates the “sentient” or “conscious” entity from the one that is not?
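The “layers of control slammed onto a genetic kernel” picture can itself be sketched in a few lines. Everything here is illustrative, of course: a fixed base layer standing in for the kernel, and a learned layer on top that experience fills in and that overrides the kernel when it has something to say:

```python
def base_layer(stimulus):
    # the "genetic kernel": an innate, fixed reflex
    return "withdraw" if stimulus == "hot" else "approach"

learned_overrides = {}  # the upper layer, filled in by experience

def behave(stimulus):
    # the learned layer, when it knows something, overrides the kernel
    return learned_overrides.get(stimulus, base_layer(stimulus))

print(behave("hot"))    # innate response: withdraw
learned_overrides["hot"] = "grasp-with-tool"   # experience adds a layer
print(behave("hot"))    # the learned response now wins
```

The point is not that two layers make a mind, but that “layers of control” is a perfectly ordinary architecture, and the question of how many layers it takes before you grant it “thinking” has no principled answer.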
/
For a true consciousness to be formed it can’t just be programmed with definite answers and definitions that do not evolve from a common denominator without limiting its growth and thought potential.
_______
For something like 50 years, computers haven’t had to be programmed like that. Their programs can flex just like your neurons, particularly if they have ANNs (artificial neural nets) within them. And sensors connected to them can influence such flexing, and effectors can affect what’s out there to be sensed. . .just like yours do. . .or better.
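That flexing is nothing exotic. Here is a minimal sketch of a single artificial neuron whose weights shift with each sensed example, using the classic perceptron update rule; the samples and labels are made up for illustration (the target is simply the first feature):

```python
# Each sample: ([feature1, feature2], target). Invented data.
samples = [([1.0, 0.0], 1), ([0.0, 1.0], 0), ([1.0, 1.0], 1), ([0.0, 0.0], 0)]
w = [0.0, 0.0]   # weights start knowing nothing
b = 0.0
lr = 0.5         # learning rate

for _ in range(10):                # a few passes over the "sensed" data
    for x, target in samples:
        out = 1 if (w[0] * x[0] + w[1] * x[1] + b) > 0 else 0
        err = target - out         # each mismatch reshapes the weights
        w[0] += lr * err * x[0]
        w[1] += lr * err * x[1]
        b += lr * err

def predict(x):
    return 1 if (w[0] * x[0] + w[1] * x[1] + b) > 0 else 0

print([predict(x) for x, _ in samples])   # the net now gets all four right
```

No “definite answers” were programmed in; the weights settled where the data pushed them, which is exactly the sense in which such programs flex.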
/
for example, when you are a toddler, you learn, from your parents or other environment
that a flat topped device with supports is called a table. By then witnessing other items similar to that table you can also group them into the common grouping of table. Now later you may learn that there are different types of tables, or different styles, but by learning your first definition, you can still identify them as tables by deduction.
A computer only knows that a table is what you tell it a table is. For each new type of
table you have to set new parameters and tell it “this is also a table”; it cannot (currently) make the connection and deduction itself.
_______
No way! What I’ve been telling you is that computers can figure all those same sorts of things out. It was done way back with conventional AI, and has since been done much better with ANNs. I forget the various names used for such organizing, but it’s all just stereotyping. And the computer can come up with some other categorization of its own, instances of which you might, in some of its forms, claim are tables, while in others of its forms, desks, say, if that computer thinks it can do a better job of dispensing office equipment, say, with such cross-categorization. From what I see, I can’t imagine there is anything you could come up with that would seem a reasonable disqualifier of the robot capabilities of today from “thinking”, whatever way you would define such a process.
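To show that the “this is also a table” connection requires no new parameters from a human, here is a toy sketch of stereotyping by example. The feature encodings (flat top, supports scaled to quarters of four, height in meters) and the numbers are invented; the machine forms a stereotype (a centroid) from a few shown examples and then classifies a never-described item by similarity:

```python
# A few shown examples per category, encoded as
# (flat top, supports / 4, height in meters). Invented features.
examples = {
    "table": [(1.0, 1.0, 0.75), (1.0, 1.0, 0.70), (1.0, 0.25, 0.80)],
    "chair": [(0.0, 1.0, 0.45), (0.0, 1.0, 0.50)],
}

def centroid(points):
    # the "stereotype": the average of what was seen
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(3))

stereotypes = {name: centroid(pts) for name, pts in examples.items()}

def classify(item):
    # pick the stereotype the new item sits closest to
    def dist(a, b):
        return sum((a[i] - b[i]) ** 2 for i in range(3))
    return min(stereotypes, key=lambda name: dist(item, stereotypes[name]))

# A style of table never shown before: three legs, a bit low.
print(classify((1.0, 0.75, 0.60)))
```

Nobody told it “this is also a table”; it made the deduction from the resemblance, which is all the toddler did. ANNs do the same trick with far richer features.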
As to JoeyBlades:
“Who can say? Neural networks were invented through studies of the brain. The inventors
looked at human brains and saw that the neurons were organized in a network like fashion.
Clearly the neurons were not communicating with binary weightings between nodes. So we
dreamed up a computer model based on a theory of how the brain might be processing
information. This model EMULATES the way we BELIEVE the brain processes. . .”
They have made various kinds of analog nodes also. I don’t care if they make them out of K’nex pieces and they run at turtle speed. You are apparently defining ‘think’, what can be said to be ‘conscious’, etc. simply as what is built exactly like it is in humans, or at least as in the highest animal you want to empathize with. It seems to me that people more naturally “think” about “thought” as a functional process, and that’s how I would like to talk about it. Defining it as what natural neurons do is like defining ‘heading a state’ as what a president does. . .and then, in the case of the present such entity, finding out you have to include. . .well, playing the saxophone.
Granted that no ANNs (at least none that I’ve heard of) are today anywhere near as complex as typical human neurons. Maybe you would claim that a phonograph, or a hunk of semiconductor memory containing musical information in MP3 format run by software in an electronic gizmo with a speaker or earphone, cannot play music; only humans can play music. Guns are too unconscious; only people kill people. . .right? WRONG! Open manholes kill people without “emulating”, “simulating” or otherwise mimicking death wishes. But if you can empathize with a gun or a manhole. . . I say the objective functionality is only a matter of degree, and an MP3 player comes pretty damn close to equaling the functionality of the human in playing music, yet the implementations of the corresponding renditions are exceedingly different in a case where the human plays a didgeridoo and the track on the MP3 player is recorded from a didgeridoo played either by a human or an automaton. So I would say that ‘thinking’, as with ‘playing music’, can be seen to have an objective meaning sharable with whatever provides an equivalent functionality to that of the human, and that ‘emulation’, ‘simulation’ and ‘aping’ serve no purpose as interposed modifiers. It’s just that the MP3 player’s music playing would be seen at present as at a high level of the art, while the computer’s/robot’s level of the art of thinking would vary in its equivalency to that of humans, depending on what kind of thinking is being thought of.
And I think of human thinking as including thinking that thinks only unemotional thoughts, like adding two numbers or deciding whether it’s faster to get from SF to NY via Chicago or via St. Louis, or whatever. Granted that computers today don’t have to eat to replicate their cells, hide from other preying machines or engage in any activity to replicate wholes, which bioimperative activities in humans have required emotional accessories in the brain.
Given the above, one would be faced with an attempt at empathization with the computer to get to its ‘thinking’ as defined sub