So this argument goes on forever. All it arises from is basically two kinds of people – those who think there are two kinds of people and those who don't... No, I mean, those who fear the loss of (hu)mankind's worth, or feel humanity is a goner the moment it is viewed as part of a continuum of molecular evolution, or the moment its behavior patterns become indistinguishable from those of the inanimate artifacts it constructs, i.e., silicon-processor-controlled gizmos or whatever – versus those who aren't on that trip, hangup, religion, or whatever, and just hang around this universe, having fun conjuring up attempts at such gizmos or just speculating in free thought.
Since Nickrz seems to be labeled 'moderator', I guess I should watch what I say here, though... or my post won't see the light of day. He may deem it just communicational artifacts due to sunspots.
Well, to me, most of these chasms of argument arise merely from viewing the same things differently through the dual aspects of the subjective and the objective. My above statements are, of course, subjective. Objectively, there are only correlations of pixel data between posts coming from somewhere in this universe – perhaps an amused chimp with a PC, or an organized gas plasma that learned to lase a comm link to Earth.
It seems to me (from both a mentalistic and a hardware viewpoint) that behind this objective/subjective bifurcation lies the pragmatics of a complex organism's (surely not limited to just a human's) need to thrust against entropy, within one lifetime and one universe, by two means at once: a bottom-up synthesis of concepts like sticks and stones, and an empathetic, social analysis of behavioral concepts too complex to deal with in the bottom-up manner. I would further speculate that one studying the organization of the human or higher-animal brain ought to be able to find some physical bifurcation in the structure of such organs which correlates to this split in cognitive method.
But getting back to the posts here, I am particularly puzzled by the stance taken by Lipochrome, a person who claims quite an involvement with computers and who has additionally explored software neural nets, neurobiology, and perhaps some sorts of so-called cognitive science. He would seem to defy my simplistic dichotomy of people since, in spite of his computer and neurocomputational context, he appears to feel computers inherently have no chance of stealing advanced human roles. He picks on such things as their hardware-substrate-level means of memory storage, and on a distinction between the existence of something and its simulation that, within the full extent of the domain covered by a given argument, I hold to be non-existent.
In his position, he certainly recognizes that the artificial systems which most nearly approach human-style information processing are layered up from the very inhuman organization of their bottom-level silicon substrate. Neural nets, though of course much simplified from their biological correlates, provide their own level of memory in the weightings of their synapse analogues, leaving the substrate's inflexible mode of memory irrelevant to arguments over their ability to (excuse the expression) ape humans.
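To make that concrete, here is a minimal sketch (in Python, with a toy AND-gate task of my own choosing – nothing Lipochrome or anyone else here has endorsed): after training, everything the net "remembers" is a handful of weight values; the substrate below merely holds a few floats, by whatever addressing scheme it pleases.

    # A toy perceptron learning AND. Its entire "memory" ends up in the
    # weights w and bias b; the hardware beneath just stores three numbers.
    def step(x):
        return 1 if x >= 0 else 0

    w, b, rate = [0.0, 0.0], 0.0, 0.1
    data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
    for _ in range(20):                  # a few passes suffice for AND
        for x, target in data:
            err = target - step(w[0]*x[0] + w[1]*x[1] + b)
            w = [w[0] + rate*err*x[0], w[1] + rate*err*x[1]]
            b += rate*err
    print(w, b)   # these learned "synapse" values ARE the memory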
Lipochrome also clearly draws a general, distinct line between the existence of something and its simulation. In any given discussion, only a finite range of the full complement of a concept's attributes is germane. If what is "simulated" covers everything that is at stake in the discussion, that "simulation" also exists as the stereotypic entity upon which the discussion is centered (and stereotypes are what the brain and other neural nets are all about, "PC" or not "PC", so to speak). Thus, if the stand-in for a human happens to have (as a single individual or as a stereotype of a genus) only three toes per foot, and the argument is over humanoid complexity or the "humanness" of activity/"thoughts" in the brain/mind, one should not discount this stand-in's capacity for "human" behavior, in short, its "humanity", on the basis of its missing two toes per foot. Of course, one may note that, to simpler humans – and in a way, even to more complex ones – a human-looking face painted on a very intellectually dense robot is likely to strike more of a human chord than is such a robot's rendition of human intellectual feats. Then again, many human amputees have far fewer toes than five. Similarly, should the artifact employ, at its lowest level of organization, non-content-addressable memory, this is immaterial. At the lowest level of objective organization of the human brain, memory results from mere molecular bonding, as modern chemical science models it – though such modeling holds little interest for so-called "humanists".
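On the memory point in particular, a classic illustration (a Hopfield-style associative net – my example, not Lipochrome's – assuming Python with NumPy and toy patterns of my own invention) shows content-addressable recall arising on top of plain address-indexed storage: hand the net a mangled cue and it settles to the stored stereotype.

    import numpy as np

    def train(patterns):
        # Hebbian outer-product rule: the "memory" is this weight matrix,
        # which the substrate keeps in ordinary, non-content-addressable RAM.
        n = patterns.shape[1]
        w = np.zeros((n, n))
        for p in patterns:
            w += np.outer(p, p)
        np.fill_diagonal(w, 0)
        return w / len(patterns)

    def recall(w, cue, steps=10):
        # Settle from a partial/noisy cue toward the nearest stored pattern.
        s = cue.copy()
        for _ in range(steps):
            s = np.where(w @ s >= 0, 1, -1)
        return s

    stored = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                       [1, 1, 1, 1, -1, -1, -1, -1]])
    w = train(stored)
    noisy = np.array([1, -1, 1, 1, 1, -1, 1, -1])   # one flipped element
    print(recall(w, noisy))   # settles back to the first stored pattern

Content addressing, in other words, is a property of the organization, not of the substrate – which is the whole point.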
No one (except Creationists) should argue against the fact that language processing in humans has involved a significant evolutionary change in the wetware of our brains since the ancestors we have in common with the great apes. As one with an engineering background who has experienced some capacity for non-verbal innovation (a capacity exercised also by artists), I object strongly, from an introspective view, to any notion that "humanity" is predicated on an organism's or mechanism's ability to manipulate well-defined symbols; in fact, such manipulation can be exemplary of very poor engineering or art and, very commonly, of not very uplifting airheadedness. But some literary academics get really wound up on the humanness of symbol-tossing. (No doubt they would claim that I have done a poor job of it here.)
One should note that today's commerce in computers is free-market, except perhaps in academia and government labs. Open commerce, of course, does not have the simulation or replacement of human beings as a goal. Thus more humanoid computer intellectual behavior is not evolving as fast as it would were we hell-bent on replacing ourselves. A few academics have announced that they were on such a direct artificial-replication pursuit. I believe they have found themselves rather limited in funding, which may be why they have lately made no great announcements of progress. OTOH, they may have been better at symbol manipulation than at the subverbal talents necessary to their announced goal.
In comparing digitally (or analogically, if you must) implemented centralized information processors with the human brain, it seems to me one has to take into consideration two basic structural-implementation factors: 1) specific patterns of physical organization and 2) brute-force quantity of elements. What hangs onto this assemblage, and what populates its sensory and social/communication-link environment, are also part of the formula for the ultimately comparable behavioral results. A human brain directly interacts, of course, with the other organs of the body that contains it – in part so as to sustain that body within its particular sensory and effectory environment, an environment containing others of its kind with which it can communicate – and it is given a lifetime to interact informationally with all of these, though this may range from rote imprinting to much more complex exchange. Such a brain has some very specialized parts for doing this, and also has somewhat modularly arranged cortices, generalized to varying extents, for adaptively modifying its body's behavior in these and less obviously useful tasks – such as the instant one. (Occasionally one of the latter may get the species over a hump in its evolutionary contest with its environment, but not often. More likely the moderator declares that this individual "gushed" and squishes its linguistic stain out of existence.)
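As for factor 2), a scrap of order-of-magnitude arithmetic (using commonly cited rough figures, not measurements of mine; the transistor count especially is a moving target) suggests how far apart the brute-force element counts sit:

    # Rough, commonly cited order-of-magnitude figures; all adjustable.
    neurons     = 8.6e10   # ~86 billion neurons in a human brain
    synapses    = 1.0e14   # ~100 trillion synapses (adjustable weights)
    transistors = 1.0e9    # a large silicon chip, give or take an order
    print(f"synapses per transistor: {synapses / transistors:,.0f}")
    # ~100,000 to one, before factor 1), organization, even enters the picture.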
It has been mentioned that advanced human-simulating / humanoid systems may require time-consuming "raising" within certain environments while attached to appropriate sensors. Of course, some time can be saved by canning some of the results of this on storage media and feeding them into subsequent units.
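A minimal sketch of such "canning" (again assuming Python with NumPy; the random matrix is a hypothetical stand-in for a trained weight matrix like the one above): the learned state goes to disk once, and any number of subsequent units can start pre-raised.

    import numpy as np
    w = np.random.rand(8, 8)                  # stand-in for a trained weight matrix
    np.save("raised_weights.npy", w)          # can the upbringing once
    w_clone = np.load("raised_weights.npy")   # a new unit starts pre-raised
    assert np.array_equal(w, w_clone)

The catch, of course, is that whatever the original learned through live give-and-take with its environment is frozen at canning time; the clone still has to live its own life from there.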