is materialism incompatible with "me"?

SM, it may be that “consciousness” requires various levels of memory. Certainly I would concur that human-style consciousness is actually several levels of consciousness: multiple primary sensory consciousnesses, and a coordinating secondary level that integrates those inputs, primes for future expectations, and coordinates responses to those inputs in time and space, at various levels of delay, in a manner at least somewhat subject to learning.

Of course the first point was that novel salient problem solving does not mandate that sort of organization, nor does that sort of organization necessarily result in consciousness (it may be a necessary but insufficient condition). The second point was that if a system (even one of those, in some self-organizing, fractal-ish, self-similar-at-different-scales-of-analysis kind of way) was organized in that manner, and such an organization resulted in a consciousness that was concerned with problems not salient to us, or that operated on a different scale than us, we’d have no way of knowing it. We define consciousness as being like us and cannot even agree on what that “like us” is. We are only convinced (we infer) that other humans also have consciousness because they are otherwise so similar to us. This is an unsatisfactory approach.

HMHW, so your definition of intelligence is such that a coordinated team of individuals of different talents and knowledge sets is no more intelligent as a system than is any single member of that team, despite the fact that they can solve a wider variety of problems and solve the same problems more quickly? Huh. Hard to go on from there. Please add whatever you think should be the definition of intelligence to my “definitions please” thread.

I haven’t weighed in on that thread because, so far, I haven’t been able to find a definition I thought satisfying. I find problems with the definition you propose, such as the book-with-answers-to-the-intelligence-test thing; that doesn’t imply I have anything I like better.

As for the group intelligence, I simply am not so sure that the group could solve any problem that its individual members couldn’t, although I must admit the thought seems to suggest itself rather forcefully. But again, take the analogy to universal Turing machines: a group of them could conceivably compute something any single one of them would take infeasibly long for, but it couldn’t compute anything every member wouldn’t in principle be able to compute just as well. Now, if (problem-solving) thought is algorithmic in nature, and our minds are Turing complete, the same ought to hold. I’m not sure I want to make mere speed the deciding factor for greater intelligence, though that would seem to be a coherent position to take.

On the other hand, it seems patently obvious to me that there are some people who probably never will get some things; but that might just be due to my own limited perspective. So all I feel I can really say is that I am conflicted.

So I’ll leave the group intelligence bit alone (along with swarm intelligence, and the notion that individual neurons’ processing ability is slight but that together they make up a conscious, intelligent entity) and focus here on that book example.

The book has no ability to deal with a novel problem; therefore it is unintelligent. OTOH, a book that taught you algorithms for figuring out the answers on an “intelligence test” would indeed make you more intelligent in the narrow domain of taking that test, and the combined system of you and the book is also superior, in that narrow and meaningless domain, to you without the book. The implication, in either case, that the test result says something about your (or the you-plus-book system’s) intelligence in other, broader, more meaningful domains, ones more salient to our function in the world, would be fallacious, however. Of course, even now it is a matter of debate how much an intelligence test tells us. (Just pay attention to all those who post here who brag about their high IQs … they are almost always idiots. :))

You’re misreading me. I don’t doubt that unintelligent processes can combine to form intelligent ones, and hence, have no problems with neurons or transistors making up intelligent entities, or even, in principle, with swarm intelligence (though I’m less certain that it’s actually realized in nature in any but the most rudimentary forms); I’m just not certain that you can combine intelligent beings in such a way as to create a more intelligent entity. Again, if each member of the group can in principle compute everything computable (something very different from the situation in which the ‘group’ is made up of individual neurons or ants), yet the group still can’t, as a result, compute things that are non-computable by any Turing complete system (for instance, solve the halting problem or whatever), I can’t easily call the group more intelligent.

Yet some equally unintelligent process could use the information in the book, provided it’s written in a form that can be ‘understood’ by the process – if the book, for instance, contains control commands for some mechanism that moves a pen to tick off boxes on a piece of paper – and the combination of the two would receive perfect marks on an IQ test. Yet I still couldn’t bring myself to call this combination intelligent.

In a similar way, I could see myself mindlessly ticking off the boxes on the test, merely matching the book’s answers to the test’s questions. This would seem, to me, to barely be an intelligent action at all, most of my cognitive resources probably being devoted to getting the pen in the right place. (So one might wonder whether certain surprisingly high-scoring individuals didn’t simply have an equivalent of such a book with them…) So it doesn’t seem to be the case that ‘I + a book of information’ is always more intelligent. Then, of course, the question suggests itself: is this ever the case?

I think maybe this illustrates the difference in our stances best. To you, it seems, intelligence is to some degree determined by having the right algorithms to solve certain problems; to me, it seems more important to be able, in principle, just to execute these algorithms, and also, in principle, to come up with them.

That’s also why I’m having trouble separating consciousness from intelligence: any computer clearly is able to execute the algorithms a mind uses to solve any given problem; but its capacities for self-programming are generally very limited. The thought suggests itself that the self-referential and self-monitoring capacities of consciousness are what’s needed to become a truly general purpose intelligence, to come up with new algorithms to solve new problems. And again, if we’re nothing more than Turing complete, then this coming up with new algorithms is in itself nothing more than computable, and thus, we could differ in that regard only with respect to the time (and resources) necessary to accomplish the task. Nobody could come up with an algorithm anybody else couldn’t come up with, and there’s no algorithm in a book that you couldn’t find yourself, in principle, though it might take you longer than is feasible. All our book-writing and teaching of coming generations and so on is then just an effort to increase the time and memory we can devote to a given calculation.

HMHW, did you miss the bit where I stated: “The book has no ability to deal with a novel problem; therefore it is unintelligent.”? Novel problems. No, the fact that it is a book does not make it a “novel” problem (to cut that bad pun off at the knees). Knowledge, even algorithms, alone is not intelligent. The system must be able to access the knowledge and execute the algorithms, and thereby solve new salient problems, to be intelligent.

No, the difference is that I have a definition of intelligence that I am applying, and that is applicable across different sorts of systems and different domains, whereas you are considering intelligence to be something ineffable – but, like porn, you believe you know it when you see it. (Your expression of your understanding of my definition has nothing to do with what I have proposed, btw.)

As to your insistence that all Turing complete systems are at some level as intelligent as each other … do you consider the Turing complete Game of Life that Dig cited to be as intelligent as you are? If human intelligence is Turing complete, then are all humans equally intelligent in all domains? It is a silly, digressive point. Really.

I would strongly suggest that you attempt to come up with a workable definition of intelligence before attempting to discuss the subject further. I do not know how to say that nicer. (I have neither the knowledge of it nor the algorithm to create one.)

Heh, I actually thought this was one of the more fruitful discussions I’ve had on the Dope lately; at least for me it was, and it’s a shame if you feel differently. But I think I was able to clarify a couple of rather important concepts, if only to myself, it seems.

If you’re that dead-set on having a clear-cut definition of intelligence, for the moment I’d be willing to submit the following: an intelligent being is at least Turing complete, and in principle able to come up with, and execute upon itself, algorithms able to compute anything computable. So no, I wouldn’t consider Life (or my computer) to be intelligent, since it lacks the second criterion (which was also present in my last post). However, the obvious weak point here is the notion of ‘coming up’ with new algorithms, and it seems to me that there is quite a bit of discussion yet to be had on that subject; my gut feeling is that the self-referentiality inherent in essentially inventing and executing algorithms for oneself/on oneself may well turn out to be linked to the self-referentiality of consciousness.

From this definition, all intelligent beings are essentially equivalently intelligent, differing at best in the time and resources it takes them to arrive at a given result; thus, a ‘more’ intelligent being differs from a ‘less’ intelligent one, if we want to talk that way, at best in the way a faster computer differs from a slower one. Yet, both would have the same power to arrive at results; all results the one could arrive at, the other could, as well.

In that sense, then, one might call a group of people more intelligent than any single individual; I’m just not sure if that’s all that useful a distinction to make – as with runners, one might be faster than the other, but the two are not really different in their ability to run. The far greater difference is between runners and those who can’t walk at all, for instance; and that’s the question I am trying to get at in attempting to get a grip on what is and what isn’t intelligent. Or, for another analogy, a red-painted wall isn’t any ‘more’ red than a red dot on a white wall, there’s just a greater extent to its redness; but it is more red than, say, a yellow wall. In lumping together both senses of ‘more’ – a greater extent, and further towards the low-energy end of the visible spectrum – one loses a dimension of distinction. You and I differ mainly on the question of which meaning of ‘more’ we prefer.

The same is true for intelligence, as there would appear to be a way in which something could be actually more intelligent – as in, able to solve problems another intelligence couldn’t even solve in principle: there could be an intelligence capable of hypercomputation, able to, for instance, decide whether or not a given algorithm halts. I don’t think something like that is actually possible in reality (though I believe I’ve previously mentioned the Omega Point), but the concept is a valid one, and I’d say something like this would be more deserving of the designation ‘more intelligent’ than an ordinary Turing complete intelligence that’s merely somewhat faster in solving the same problems any Turing machine could solve.

And what happens if we press the Chinese room argument into service?

Let’s say I’m fluent in English and Chinese. Let’s say you’re fluent in English and French. Neither of us, working separately, can translate Chinese into French. Both of us, working together, can. What does that mean?

Hmm, I’m not sure, but there’s something to this – I could not, on my own, figure out a way to translate anything from Chinese, or anything into a language I don’t speak (but then again, neither could any given group). In other words, I can’t solve an arbitrary code without additional information – there doesn’t exist an algorithmic way to do so. The problem is underdetermined, in a similar sense to the way systems of equations may be. But does being supplied with the necessary additional information (either by gaining knowledge of the rules for the code/a translation of it, or being given another equation/the value of some variable) really constitute growing more intelligent?

Let’s say I have the equation 5x + 3y = 30 – I’m unable to give an exact solution for it; the best I could do would be something like y = (30 - 5x)/3. Now you come along and give me another equation, x = 3. Instantly, I can tell you that y = 5. Was that really due to the two of us being more intelligent than just little ol’ me with my equation? If not, is there any significant difference from the Chinese-to-French translation?
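
Just to make the arithmetic concrete (Python here is nothing but a convenient notation; the numbers are the ones from above):

```python
from fractions import Fraction

def y_given_x(x):
    """The 'best I can do' with only 5x + 3y = 30: y as a function of the free variable x."""
    return Fraction(30 - 5 * x, 3)

# Underdetermined: infinitely many (x, y) pairs satisfy the single equation.
print(y_given_x(0), y_given_x(3), y_given_x(6))   # 10 5 0

# The extra piece of information, x = 3, pins down the unique solution.
x = 3
y = y_given_x(x)
assert 5 * x + 3 * y == 30 and y == 5
```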

“Execute upon itself” is, to me, an odd inclusion. For computers, an often-used term is “reflection” (à la Brian Smith’s 3-Lisp), while for humans, the term “self-aware” is often used. For the former (computers), it’s unclear to me that the inclusion does the work you wish it to; for the latter, you’ve opened up another definitional can of worms.

I think DSeid’s semi-insistence (and frustration) about definitions is very much warranted. It’s unfortunate that I don’t have time right now to devote to the discussion.

I’m not seeing this. The Game of Life is Turing complete. If the 3-Lisp programming language is also Turing complete (and it is), yet is also reflective, then Life can be made to have the property of reflection. So, theoretically anyway, Life does not lack the second criterion.
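
To pin down the loose sense of ‘reflection’ in play here (not 3-Lisp’s actual reflective towers, just the general flavour), here’s a minimal sketch in Python:

```python
class Reflective:
    """A system that can treat procedures as data and install new ones on itself
    at runtime; reflection in the loose, illustrative sense only."""

    def learn(self, name, source):
        # Compile the source text and bind the resulting function to this instance.
        namespace = {}
        exec(source, namespace)
        setattr(self, name, namespace[name].__get__(self))

r = Reflective()
r.learn("double", "def double(self, x):\n    return 2 * x")
print(r.double(21))  # 42: behaviour the object did not have when it was written
```

Whether installing new procedures on oneself in this way is enough to count as ‘coming up with’ new algorithms is, of course, the question at issue.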

Well, that’s a can of worms I seem to run into no matter where I try to take my thoughts (which is why I have a hunch about a deep interrelatedness of consciousness and intelligence), and it’s precisely why I’m having such great difficulty finding a succinct and meaningful definition that captures the phenomenology of intelligence in a way I’d consider adequate.

It’s capable of intelligence, and capable of consciousness, as all Turing complete systems are (if the arguments I presented earlier to that effect hold, at least). But it lacks the capacity to come up with new algorithms to solve new problems (something for which reflection surely is necessary, but likely not sufficient), as do at present all real implementations of universal computers (we don’t have any true AI, after all). That’s a software problem, not a hardware one, in a manner of speaking; it’s nevertheless there. So I don’t consider the Game of Life to be intelligent in and of itself, but I do believe that by having it run the right program – by having it exhibit the right structure and organization – it could very well be.

Thanks for your detailed reply, SM. I feel I understand, perhaps, ~99% of your position and agree with, say, ~96% (in the broad sense). But, as I mentioned, with the devil here being in the micro-details, if you don’t mind indulging me, I’d like to learn just a little more about the 1% of your position that still eludes me.

The problem I’m having is that by giving me the answer “Copy”, you choose model #3, which actually is the logical choice to make for that answer, but also the one that I believe is illogical in the long run (because I still perceive it as containing a paradox). I believe it’s a problem of conflating #2 and #3, and the fault for misunderstanding your position in this regard may very well lie with me. I think it may center on how we are treating VI in this experiment: I’m treating it as a binary “on” or “off”, and I believe you are using it as a full-range spectrum (neither is wrong; it’s just a matter of how we each set the sensitivity meter).

The way I’ve set the VI meter, I would have no more VI in a virus than in my identical twin, which is to say zero. On my scale, the only VI I have is within my original brain, in the context of the model I accept, #1. I suspect that the meter you’re using is more sensitive, with more gradations, such that you have some VI in a friend, a little more in a sibling, and more still in an identical twin. Am I correct in this assumption? If so, then it follows that you would assign an even greater VI to your copy. And I would assign the same amount to my copy, using the same scale. This doesn’t bring us together on which model we accept (you still assign the same VI to “you + time” as you do to your copy, and I assign more – the most, in fact – to “me + time”), but at least I can more fully understand your position and we can really be on the same page.

If the above is correct, then you would really fall in line with model #2, and choosing “Copy” in the experiment, in this context, is no longer an illogical answer. Therefore you’ve chosen a logical answer using a logical model. If this is the case, it’s not that you’ve made a mistake so much as that I didn’t clarify the difference between models #2 and #3 well enough.

Your statement, “I’m not sure where the ‘shared consciousness’ strawman came from”, leads me to believe I need to clarify my position (#1) better, too. I am saying that #1 would involve a shared consciousness if it branched, which is the primary reason I claim that it can’t (Bell’s Theorem notwithstanding – a good debate for another day, perhaps). This is the reason I claim that model #3 is illogical, too (but hopefully we’ve got you firmly in #2 now). Think of a chain of video monitors (with some type of processor) popping in and out of existence each instant, each attached to a single video-in/video-out wire; that is the most simplistic way to envision the perception of sight in #1. Attach an additional monitor to any instant or instances of the original monitors and you will have a twin simulcast going to the same processor. If you take away the wire and still see the simulcast in the original processor, that represents model #3 – the action at a distance that I say should not apply here. Hence my stating that you would literally be seeing through two sets of eyes if this could happen.

I realize that you reject the idea of a hardwired consciousness, but I’m asking you to imagine what it would be like, if it did exist, and how it could be reconciled with a materialist worldview. I’ll think of a good way to conceptualize my understanding of it in simplistic fashion and post later.

No matter how long I computed, I could not create what Einstein did. You could clone a thousand of me and have us working side by side and I wouldn’t be able to create it. Einstein, in at least that visuospatial domain, was much more intelligent than I could ever hope to be. Of course, he and I working together could not come up with a Shakespearean-level play. And the three of us working together could never learn how to create what Thelonious Monk did. Not in a dozen lifetimes. Now maybe we could “in principle”, but Einstein, Shakespeare, and Monk were each of more than average intelligence in different domains. And that’s staying within the same species. No human could come near, or even comprehend, the spatial processing of a whale, which monitors and solves social and predator/prey problems involving individuals within cubic miles of space – but which couldn’t tell me what two plus two equals, no matter how long we gave it or how many of them we put to the task. Intelligences are domain(s) specific.

Of note, you have hinted at the importance of creativity as part of intelligence.

That is an interesting point that I’d like to hear you expand upon. I believe creativity is certainly something that has a lot to do with intelligence. I am not sure if it is a requirement to be considered intelligent, or a tool used by some forms of intelligence. I am also not sure that it requires sentience. I can, in theory (I actually have neither the math nor the programming chops to do this), imagine a model of creativity in which concepts are represented in some form of metric that can be “visualized” as n-dimensional objects (by way of a neural net), and analogies are created by performing geometric transformations of those objects upon other domains of data sets and finding previously unrecognized, surprising fits that lead to predictions of new data points that also fit the pattern. Testing those hypotheses then leads to modifications of the transformed object. Such a hypothetical program could generate creative hypotheses by way of analogizing, test them, and modify them, all without self-awareness. The fact that the likes of Einstein did something similar by consciously imagining himself on a beam of light, or others by dreaming of snakes biting their own tails (the possibly fabled story of the realization of the structure of the benzene ring), does not mean that such conscious visual imagery is the only means to reach that end.
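
For what it’s worth, here is a toy sketch of the kind of thing I mean (the “concept” coordinates below are purely made up for illustration, and the “transformation” is nothing more than a vector offset; it is nowhere near a real model):

```python
import numpy as np

# Hypothetical "concept embeddings": each concept is a point in n-dimensional space.
concepts = {
    "planet":   np.array([1.0, 0.0, 0.2]),
    "moon":     np.array([0.4, 0.0, 0.1]),
    "atom":     np.array([1.0, 1.0, 0.2]),
    "electron": np.array([0.45, 1.0, 0.12]),
    "nucleus":  np.array([0.9, 1.0, 0.25]),
}

def nearest(vec, exclude):
    """Return the known concept closest to vec (a crude 'surprising fit' detector)."""
    return min((k for k in concepts if k not in exclude),
               key=lambda k: np.linalg.norm(concepts[k] - vec))

# Learn a geometric transformation (here just an offset) in one domain...
offset = concepts["moon"] - concepts["planet"]     # roughly, the "orbits around" direction

# ...and apply it, untaught, to another domain to generate an analogical guess.
guess = concepts["atom"] + offset
print(nearest(guess, exclude={"atom"}))            # -> "electron", the orbiting part of the atom
```

The point is only that the analogizing step itself is a mechanical, geometric operation; nothing in it requires awareness.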

So how do creativity and intelligence mesh, in terms that do not rely on anthropocentric fuzziness?

Perhaps I just have greater faith in your intellectual capabilities, then. :slight_smile: Give yourself a few years, maybe a few hundred, or thousand, a million – you’d probably be surprised by what you’re capable of. If what I say is true, it is indeed the case that all that limits us is time and resources. (Having said that, it might of course be the case that you’d need greater resources than the observable universe is able to provide to replicate Einstein’s feats – but, at a guess, I’d say you’d manage.) Another sticking point, as The Other Waldo Pepper pointed out, would however be whether or not there actually is an algorithmic way to the solution of a problem, which depends on the information available to you – in some cases, as in the translation of an arbitrary code or the (unique) solution of an underdetermined equation system, there may not be a way to get there from here.

Well, in the somewhat tentative picture I’ve come to right now, you would indeed need some measure of creativity to be considered intelligent – in the form of the ability to come up with new algorithms (which could quite conceivably be a purely mechanical process of the sort you outline). Therein could also lie the only difference between intelligences – that there might be some algorithms you never could come up with, no matter what. But, if the process of coming up with algorithms is itself computable (algorithmic), which it would seem to have to be, then I don’t see how that could be the case – though, without knowing how this process is supposed to work, there’s probably no way to say for sure.

So perhaps I should go a bit further out on a limb and attempt to outline such a process. Any given finite-length algorithm can be generated, if somewhat inefficiently, by just writing down all syntactically allowed strings of a given length. All that’s needed, then, would be some sort of ‘virtual machine’ on which to execute the algorithm to see whether or not it achieves the desired result. If there is a finite-length algorithm that solves a given problem, eventually it would be found in this manner. This process has one problem right off the bat – it’s impossible to tell whether or not it will halt for a given problem; if, for some problem, there simply isn’t an algorithmic way to find a solution, it’d grind out algorithms forever and ever. This problem is nested – any algorithm that is being ‘tried out’ might itself not halt, so the checking process might never conclude either. As a workaround, one could just ‘cap’ the process, at the risk of not finding a terminating algorithm appropriate to solve the problem – in fact, in all realistic implementations (such as are possible for such a ludicrously inefficient scheme), there would exist some natural caps in finite resources, or even in the durability of components.

Then, however, we would have a process which conceivably could ‘miss’ some algorithm another intelligence might find – however, that restriction is merely imposed by the finiteness (and, well, comparative microscopicness, truth be told) of what we’re used to dealing with. One could easily augment the solution-finder in such a way that, once it has run its course without finding a solution, it increases the caps and tries again – then, given enough time, it would always find a solution as long as one existed.

An intelligence of such a structure, then, ought to be able to solve any solvable problem if given enough time; and time requirements may go down considerably once you start thinking about more practical implementations of the solution-finder (one major improvement that suggests itself immediately would be that, on successive runs, it doesn’t start grinding out algorithms again from zero, but from where it left off before).
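
Just to show how little machinery the bare scheme needs, here’s a minimal sketch (the toy stack-machine ‘language’ and the example problem are invented purely for illustration; the step cap would only start doing real work in a language rich enough to loop):

```python
import itertools

# Candidate 'algorithms' are straight-line programs for a toy stack machine.
INSTRUCTIONS = ("PUSH_X", "PUSH_1", "ADD", "MUL")

def run(program, x, step_cap=1000):
    """The 'virtual machine': execute a candidate, returning None on error or
    when the step cap is exceeded (the 'cap' workaround described above)."""
    stack, steps = [], 0
    for op in program:
        steps += 1
        if steps > step_cap:
            return None
        if op == "PUSH_X":
            stack.append(x)
        elif op == "PUSH_1":
            stack.append(1)
        else:  # ADD / MUL need two operands
            if len(stack) < 2:
                return None
            b, a = stack.pop(), stack.pop()
            stack.append(a + b if op == "ADD" else a * b)
    return stack[0] if len(stack) == 1 else None

def find_algorithm(examples, max_len=8):
    """Grind out every syntactically allowed program of increasing length until
    one reproduces all the examples: ludicrously inefficient, as advertised."""
    for length in range(1, max_len + 1):   # 'increase the caps and try again'
        for program in itertools.product(INSTRUCTIONS, repeat=length):
            if all(run(program, x) == y for x, y in examples):
                return program
    return None

# 'Problem': behave like x**2 + 1 on these observations.
print(find_algorithm([(0, 1), (1, 2), (2, 5), (3, 10)]))
# Prints a five-instruction program equivalent to x*x + 1.
```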

However, this whole thing would appear to be doable without any form of consciousness at all. But I’ve kinda glossed over a relatively important factor of the process: the ‘virtual machine’ that’s used to check an algorithm for viability; and I have left completely unexplored how an algorithm goes from being ‘cooked up’ in the solution-finder to being actually implemented and executed.

I can’t hope to give a thorough account of everything that’s involved here, but, looking at this more closely, it already begins to smell faintly of some strange loopiness: the content of the intelligence determines the content of the virtual machine; the content of the virtual machine determines the content of the intelligence; and both are in some way part of the same computational structure/architecture. The virtual machine is a representation of mental processes to themselves, and the implementation of new algorithms is an act of intelligence upon itself, for which it would seem to need some representation of itself within itself. It’s kinda like the zombie thing: second-order states seem to inescapably arise.

This does not begin to solve, or even acknowledge, most of the open problems that exist within the discussion of consciousness and intelligence (I neither have the resources nor the time to go into it in any appreciable detail); but, as a very coarse-grained model of how intelligence might ‘work’, I think it’s the best I could give at the present moment.

But it might be the case that I’ve overshot some – that the intelligence I described is in fact something more powerful than that of human beings. It might be the case that we, in fact, have fixed caps in our solution-finders, and that those differ between persons, such that there might indeed be solutions one person might come up with another couldn’t ever hope to. If that’s the case, then, indeed, groups may be more intelligent than individuals, and books enhance an individual’s intelligence.

But, what’s to stop us from setting our solution-finder onto the task of finding a more efficient solution-finder? We are, if what I’m saying is not just so much bullshit, Turing machines after all, so, since everything in this chain is nicely computable, and the whole thing stays finite, and any finite nesting of computable processes should itself be computable, that ought to be easily possible, if perhaps resource-intensive.

And I’m confident that the improvisational jazz pianist and composer Thelonious Monk would be unable to switch gears mid-career and attain the same heights as Franz Liszt did in Romantic-period classical composition and piano virtuosity, and vice versa – despite their being, you must admit, in very similar domains. So, what does this tell us? Not very much, in my opinion. It still comes down to the standard question of nature v. nurture and the malleability of CNS neurons. There are simply too many variables involved to tell much of anything. If our musicians were switched to the other’s exact environment at 14 years of age, would they then be capable of attaining the same level of achievement in the other’s music genre? At age 10, 6, 3… just out of the womb? My guess is that they both inherited brains with at least the base-level circuitry necessary to become musical prodigies (overlapping with some type of mathematical proclivity, no doubt, since there appears to be a definitive music-math link – interesting, since we think of music as art and math as hard science).

Are all, or a large percentage of, the human population born with brains capable of becoming Thelonious Monk/Franz Liszt-level musicians (provided they live in very specific environments), or is it a very small percentage (who may live in less specific environments)?

We can even split the creativity of music into sub-“domains”. Liszt was considered an exceptional composer, but an even greater piano virtuoso (peers considered him the best in the world, perhaps of all time). What if he had spent just as much time studying music, but never had a piano or, in fact, never touched any instrument growing up? An argument could certainly be made that he would have become an even greater composer, a Beethoven-level composer, since all the hours he spent per day doing finger exercises could have been spent on theory and composition. Could he then, at, say, age 20, be given a piano and five years’ time to devote solely to playing, and become the virtuoso he did become? Doubtful; my guess is that he would have lost the window of opportunity in which the motor pathways to the intrinsic musculature of his fingers were fresh and malleable enough to do the trick.

So, what ultimately sets the Amadeus Mozarts so far above and beyond the Antonio Salieris of the world? Being born with slightly different neuronal circuitry? Having a more selfishly ambitious father who’ll throw you headlong into the study of music as a toddler*? Having such a passion for music that you’re capable of narrowly focusing your attention like a laser beam? Or simply something as mundane as who puts in the 10,000 hours of study to become an expert?

I don’t even think Einstein could create what Einstein did. Could this fellow have given birth to the idea of the curvature of spacetime? Admittedly, a creative genius while debating quantum quandaries with Dr. Bohr, but hardly a theoretician at the top of his game. How about this guy? He’s more like your brilliant, though eccentric, uncle in Jersey (Princeton, that is), whom you bring out to impress your friends with his prowess at solving mind puzzles, but whom you send away before he does something to embarrass you. No, this is the fellow who brought the Special and General Theories of Relativity to the world. What if his boss at the patent office had said, “Herr Einstein, quit daydreaming about riding on beams of light and balance these damn books in your free time”? What if he just didn’t have time to think about such things till many years later?

Perhaps, if you or Thelonious Monk had been placed at the teat of Frau Einstein or Frau Beethoven, either of you could have brought physics to a new echelon of enlightenment, or given the world the single greatest (IMHO, of course) artistic achievement known to mankind (DSeid’s or Monk’s 9th Symphony)… or perhaps you’d have ended up flipping wienerschnitzels at McDeutschland’s. :wink:

With varying degrees of accuracy I think most of us can certainly comprehend and even imagine the consciousness of any other type of human (different sex, culture, IQ, even various brain or psychological pathologies)—we can reference experiences in our own conscious time-line, perhaps combine and extrapolate a few of them, then formulate a working model.

And, although you may not agree with this stratification in real life, I believe for the benefit of this particular debate, we should separate consciousness into self awareness and the processing of sensory input (and even disregard memory integration for the time being).

Can you imagine what it’s like to be another species?

As for the general feeling and degree of awareness in lower life forms, I believe it may be likened to the induction of general anesthesia, without the analgesic or neuromuscular blocking components, just the hypnotic. Inhale a volatile anesthetic and count backwards from 100. As you continue to count, you’ll pass through diminishing levels of consciousness corresponding to ever less conscious species (climbing down nature’s ladder, the scala naturae), sort of like ontogeny recapitulating phylogeny… of the mind. The point of complete loss of consciousness corresponds to the first species to have developed consciousness, whatever that is – let’s say it’s a titmouse, or a slippery dick, just for the sake of rudeness. :stuck_out_tongue:

Conversely, what may we evolve into? Imagine a time when you were hyper-aware—perhaps during an automobile crash, when everything seemed to have gone in slow motion, when your adrenalin put the processing of sensory data into overdrive. Extrapolate that feeling, and perhaps that will be the mind of the future.

Then you have to consider what it would be like to actually experience the processing of input from senses that we humans simply don’t have. With a little imagination, even this can be done with some degree of accuracy, I believe. It’s easy to imagine seeing like a bee: just extend your visual spectrum to include ultraviolet wavelengths and view through a multi-faceted prism. It may be more difficult to imagine seeing like a mantis shrimp, but not impossible. You can even have a good go at echolocation in bats or whales, infrared detection in pit vipers, or any other sensory perception that evolved on Earth. The fact that all species on Earth (most likely) evolved from a single common ancestor gives some assurance that all evolved senses have some commonality that we can either experience directly, or imagine indirectly, alone or in combination. Perhaps echolocation can be imagined as a cross between vision and hearing with a little proprioception thrown in. The real challenge is to imagine what unique sensory apparatus and processing abilities have evolved in extraterrestrial higher life forms that are probably more different from you than you are from a giant sequoia tree.

No, as Nagel famously pointed out, we cannot really imagine what it would be like to be a bat. (The actual essay.) But that subjective sense of consciousness and awareness, of being unable to explain to someone congenitally blind what “red” is, is not the point I was after. I was merely pointing out that intelligences differ. Whether nature or nurture gets us to those differences matters not in this regard. To me, discussing intelligence as some generic entity, without regard to the domain(s) under consideration, is a useless exercise.

HMHW, I appreciate your confidence in my untapped resources :slight_smile: I, OTOH, recognize the limits of my intellectual capacity, and its strengths and weaknesses in various domains, and am quite pleased that I have been able to accomplish what I have with what I’ve got!

I’m little more than a novice with regard to computer hardware, and even less with respect to software, but I find it easier (and maybe you will, too) to conceptualize the equivalency relationship between human brains and algorithm-crunching computers (or even Turing machines) not as increasing intelligence being proportional to system optimization, but rather as intelligence being inversely proportional to the “bugs” in the system, with an infrequent but very important twist (i.e. the beneficial mutation).

Say, at this point in our evolution, the human brain has reached Celeron 430 potential. The more intelligent of our species are Celeron 430s right out of the box, relatively problem-free (these are the Leonard Bernsteins, JRR Tolkiens and Michael Faradays of the world). Those of our species more cognitively and creatively impaired are the Celeron 430s with more software coding errors, hardware glitches and viruses (these are your Anne Rices, John Teshes and… oh, never mind, you know who they are). Every so often, however, certain of these bugs in the system create a pathway to a higher plane of intelligence, giving us the Shakespeares, Beethovens and Einsteins of the world. One can only hope, when this occurs, that these new and improved alleles (i.e. beneficial software coding errors) gain enough frequency and distribution that we may evolve into Celeron 440s.

Yes, I’m familiar with the paper, but I respectfully disagree with the premise. (I am, after all, Batman).

I must admit to some resistance against my own conclusion in this regard – not specifically related to your cognitive abilities, mind, but simply because of what introspection tells me about my own, which does not really serve to build up an ego able to claim that I could’ve come up with general relativity or any number of stunningly brilliant achievements of the human mind myself. Yet, the converse notion – that there should be some problems in principle solvable only by certain minds, or, if the rough outline I produced is of any relevance, some algorithms that only certain intelligences can come up with – seems almost as abhorrent, and moreover, artificial and contrived to me.

But I think I’ll have to bite the bullet and concede to you that this may well be the way things are – after all, whatever mechanisms we use to arrive at solutions to any given problem are most certainly naturally derived, and nature doesn’t really care whether or not I like its solutions; and with no design or intent, there is no reason to expect it to come up with anything one might consider optimal.

TibbyToes, there may be something to your notion of ‘bugs’ hampering the abilities of the system, and to certain systems being more prone to them than others; but the generalisation to a hereditary trait may be a bit hasty, seeing how there are many, many levels of expression between genes and mental processes. But an interesting notion is that a given bug may indeed prohibit certain results from ever being correct – like those floating-point operation errors that were so notorious. I don’t really know how that would impact performance on a reflectively self-programming machine – whether one could, for example, write an algorithm that gives correct values in the face of this hardware error, i.e. that is able to compensate for it – but it is an interesting direction in which to take the question about what prohibits us from being truly ‘universal’ intelligences.
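
In software, at least, that kind of compensation is easy to sketch – wrap the flawed primitive, check its answer against an independent identity, and refine it (the ‘bug’ below is purely invented for illustration, nothing like the actual error pattern of any real chip):

```python
def buggy_divide(a, b):
    """Stand-in for a flawed hardware primitive: for an arbitrary class of
    'bad' operands (invented here), the result comes back slightly off."""
    q = a / b
    if int(b) % 7 == 3:       # the made-up trigger condition
        q *= 1.0001           # silently wrong result
    return q

def compensated_divide(a, b, tol=1e-9):
    """Software compensation: verify the answer against q * b == a and refine.
    Even though the correction step goes through the same flawed primitive,
    the residual shrinks each round, so the loop converges."""
    q = buggy_divide(a, b)
    while abs(q * b - a) > tol * abs(a):
        q += buggy_divide(a - q * b, b)
    return q

print(buggy_divide(10.0, 3.0))        # off in the fourth decimal place
print(compensated_divide(10.0, 3.0))  # ~3.3333333...
```

Whether a brain has anything analogous available for ‘compensating’ its own quirks is, of course, exactly the open question.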

Hi, TibbyToes - apologies for the delay. I’ve been in Tokyo and did try to reply but the keyboard refused to behave itself.

Like I said, I’m not sure I understand the #2 model myself. It seems that it means that the 12:00 me does not care about the wellbeing, financial or otherwise, of either 12:10 candidate me. I therefore struggle to see how #2 and #3 could be conflated, since #3 involves caring about the wellbeing of future meme-children while #2 doesn’t.

Hmm, not really. I guess I like to see my ideas promulgated, such that if my friends, siblings or genetic twin “share my worldview” I might be said to see something of myself in them. But that is only a very pale shadow of the ‘me’ I’m talking about here - so small as to be irrelevant really. You see, even my genetic twin doesn’t share my memories. Throughout his life, he saw things from at the very least a different angle, found some things more memorable than others, and wasn’t even in the room when some highly significant memories were formed. I guess his memories would have enough of a general overlap with mine for me to consider some small Vested Interest “in him”, but this is negligible compared to the VI I have in my Copy.

Again, model #3 (VI in all, so if only one is to survive I’ll choose the richest) is my strong preference. Insofar as I understand #2 (it doesn’t matter how rich the surviving ‘me’ is), I reject it.

I don’t know why you say this.

The copy is its own person, which is just as convinced that it is you as you are.

I just don’t see the need for it in an Ockham’s Razor sense. If it was true, then swapping over a few atoms for identical atoms would surely cause some kind of ‘fading’ of the original consciousness. I know that evolution doesn’t happen to do this for deep CNS neurons, but I would ask you to imagine if this was the case. The rest of the brain and body regenerates continually without any diminution of your feeling of ‘you’. If we replaced, say, 1% of deep CNS atoms with identical atoms every day, would you think ‘you’ would gradually fade away somehow?

Ah, the ol’ “my Japanese keyboard didn’t work” dodge, eh?:wink:

After racking my brain over the course of many sleepless nights, trying to pinpoint the most accurate and literal graphic representation of my philosophy-of-mind model, I had my eureka moment when I found this.

Alright, perhaps not quite a “literal” representation (and no more far-fetched than Kekule’s silly dancing benzene-ring snake vision, IMHO), but, actually, a hamster in a habitrail will suffice well enough to represent the salient features of this model (#1) – and not in a cortical homunculus sort of way, either.

The moment you become conscious, your brain lays down the first of very many habitrail sections and pops a baby hamster, named Hoppy, into it. Hoppy begins his (i.e. your) lifelong journey of consciousness through the habitrail, which is created and added to, in interlocking fashion, section after never-ending section, till you both die. Hoppy is always in the most current section, never quite falling into the abyss before the next section locks into place. While he has just enough room to turn his head and see his real (“hard-wired”) past, he can’t actually turn around and revisit it. Looking forward, Hoppy can perceive and anticipate a real (“hardwired”) future ahead of him, and, though many branches of habitrail may present themselves before him, he can only commit to one – only one hamster per closed-system trail… unless you’re someone like Eve, who had three hamsters in her cage :).

I believe there is more than one level, or echelon, involved in consciousness, and the higher-order representation (HOR) theories* seem most logical to me. In this scenario, each section of habitrail represents a level-1 echelon of consciousness: mental states – thoughts and perceptions (along with the requisite accumulation of memories). These sections may be re-created anywhere, anytime, as generic assemblies of fundamental particles corresponding to any instant of a conscious being’s mental state – perfectly in alignment with the materialist worldview.

Acting upon these level-1 mental states is the supervening higher order of perception (HOP), or Hoppy, our hamster. While identical hamsters may certainly be created down to the exact same assemblage of fundamental particles and are valid hamsters in and of themselves, only Hoppy was created (as a process) and hardwired (or, should we say, hard-habitrailed) from a particular brain, composed of particular particles in space-time—making him (i.e. you) unique in the universe, though not in conflict with the materialist worldview.

Now, carrying this hamster metaphor further: representing what I believe to be your accepted model (#2) entails placing a hamster in each self-contained section of habitrail. Each instant, your brain lays down a new section of trail, but the section is sealed on both ends. As your brain creates the next section, it puts a new hamster in it, leaving all previous hamsters to wither and die. Just as this new hamster walks with joyful anticipation toward the next section – bam – he hits the sealed see-through plastic end and begins to suffocate, tormented by the sight of a new hamster imposter in what he perceived to be his future (his fate no better than had he traveled via a Star Trek transporter with a broken arrival pod). This model fails due to cruelty… and, because there are more hamsters than necessary, it fails the cut of Occam’s razor. (OK, under a thinly veiled appeal to emotion, there is some valid logic beneath.)

Recap: I believe that, while memories and mental states (i.e. level-1 consciousness) may be recreated or copied ad infinitum, when and if they are, a new and unique higher order of consciousness is created at the same time and, from that point on, must follow its own future. There is no measurable difference between an original and its copy; they are both valid individuals. The copy has a real and unique future and an imagined past (take him back in time and he ceases to exist prior to the point of duplication); the original has a real future and a real past.

OK, I’ll treat you to another fun analogy to help explain my thoughts on this (and, by the way, where is our friend, Mangetout? If memory serves me correctly, he used to be thoroughly enthralled by my analogies… or maybe I’m thinking of someone else :p). As mentioned before, I liken consciousness in general to a unique process (“current”) that’s switched on in one’s third trimester, from a particular brain’s circuitry (CNS neurons in perpetual physical contact with each other, passing information… cascading, recruiting and whatnot – which, of course, at a deeper level, corresponds to a particular arrangement of atoms). Once switched on, this current is a unique entity that doesn’t switch off completely until death, although it may exist in different states and may even be damaged… or enhanced, during its existence. I don’t believe it can be divided with both parts remaining viable, nor replicated and remain completely identical. In HOR terminology, this current would be the Higher Order Perception (or Thinking, if you prefer that version).

Sallying forth with the analogy: imagine 100 people with arms stretched overhead (i.e. the atomic configuration of the relevant neuronal circuitry), supporting a large slab of Jello, one with just enough tensile strength to remain intact under normal wear and tear (i.e. HO consciousness). As long as this slab remains aloft, wiggling and relatively intact, consciousness continues, unabated. So, we have 1000 fingers supporting this material thing (Jello) engaged in a higher-order process (wiggling). Certainly, you may replace one or a few support people at a time, with little to no effect on the Jello or its wiggle. In fact, it’s conceivable that you could eventually replace each and every person, keeping the slab and wiggle viable. Try to replace (or remove) too many people at one time, however, and the slab will be damaged in some deleterious form or fashion. Try to replace all at once, for example, and it will fall and stop wiggling, even if a new crew of 100 attempts to pick it up immediately – it’s code [del]strawberry[/del] red for the Jello. Remove key supporters and chunks of Jello begin falling to the ground; lose too many over time and even the wiggle begins to falter, resulting in the slab sometimes forgetting what type of Jello it really is (e.g. loss of memory and personal identity from senile dementia, something I’m familiar with in my immediate family). Somewhat paradoxically, once in a great while, losing a significant amount of Jello shortly after it’s made may result in a new and improved wiggle (like a beneficial harmonic or overtone wiggle), ultimately more functional than the original (something else I’m familiar with in my immediate family, interestingly enough).

So, there you have it, my philosophy of the mind can be fully explained with hamsters and Jello. I can’t imagine why you find it so hard to take seriously. :dubious:
*I’m not necessarily an advocate of the HOP model as opposed to the HOT or generic HOR, but “Hoppy” seems a better name for a hamster than “Hotty” or “Whore”.

TibbyToes, listen, I think we understand each other’s viewpoint well enough without these confusing analogies. If there’s something you’d like to ask me which is a direct question, not a request to comment on yet another analogy, I’ll do my best. Otherwise, we’re not getting any further here, so I’ll politely bow out.