Another level beyond the human mind?

Ironically, I couldn’t think of a good descriptive title for this thread.

I think it can be commonly agreed that humans are more intelligent than other animals by several orders of magnitude.* Dogs are smart creatures, but you wouldn’t sit down with one to discuss American foreign policy in the Middle East or the finer points of evolutionary theory, for example.

The human mind is capable of tasks and abilities that are qualitatively different from and more advanced than those of other animals. That’s a roundabout way of getting to my question, which is: is it likely/possible that there is a similar qualitative advance in capabilities again beyond the human mind, not just bigger and faster, but thought processes and capabilities we couldn’t even begin to grasp?

This is inspired by a short paragraph on this topic in a book I once read, where the author stated this wasn’t possible because we as a species would be aware of gaps and missing capabilities in our thought processes and understanding, and those gaps don’t exist. I’m not really sure what he meant by that, which is why I’m throwing this out to the giant throbbing hive-mind of the Straight Dope.

*If you don’t agree, that’s fine, but this thread isn’t really about that; just accept it’s true for the purposes of this thread.

I think it’s conceivable. The mantis shrimp has sixteen types of colour receptors, where we have three. They see colours we can’t remotely imagine. If they evolved to be super intelligent, their intelligence would probably be pretty different from ours, again in ways we simply can’t imagine.

No, the three-colour receptor system is objectively shown to be pretty good.

If the actual light present did require 16 colour receptors, then every human would test positive for very terrible colour blindness… As it happens, the 3 colour receptors work quite well, and we don’t have colour blindness.

(We are blind to UV and IR… but green is the peak of the sun’s power curve, so we are using the best of its light anyway.)

Huh? :confused: You realise I didn’t say that our three receptors are anything other than pretty good? I am quite happy with the way a rainbow looks to me.

The OP asked if it could be possible that there could be intelligence that is qualitatively different from ours, and I think that is possible. I used the example of the mantis shrimp because the way it perceives the world (or has the potential to perceive it, if the light spectrum doesn’t allow anything much more exciting) may be so qualitatively different from the way we perceive it that qualitatively different intelligence could also be possible, in ways we can’t really imagine.

Isilder, that’s not the point. With more receptors we could see color in a more fine-grained fashion (i.e. individual color bands would be more distinct), and if we could see into the infrared or ultraviolet spectrum we would see information that isn’t available in just the visible spectrum, as anyone who has looked at an IR or UV image can attest. gracer’s analogy is pretty apt, although to be fair we can conceive of and even quantify what “other colors” (i.e. different wavelengths) would be, whereas with cognition it is difficult to provide a rigorous conception of what that would be like. However, I like impossible problems, so I’ll give it a shot.

Our crude primate-brains like to think in terms of discrete objects and “counting numbers” (positive integers). It took thousands of years after the beginning of civilization to actually codify the notion of nothing (zero) and even longer to develop techniques to work with numbers that are not reducible to simple fractions. Our current state of mathematics is fundamentally based on five constants (0, 1, e, i, π) but really, most people only understand and can explain the definition of the first two as the other three are irrational or imaginary. On a primitive level, we really see the world as containing discrete objects in quantities of one, two, and many. Realistically, however, most phenomena are non-uniformly and stochastically distributed, and are often continuous and often poorly bounded, even at fundamental levels where talking about discrete particles has some validity. Clouds, for instance, are not single objects in any rigorous definition, but we often talk in terms of a single “cloud” because that is what our mind observes. A higher form of cognition might instead see the world as a continuum with phenomena occurring in a quantitatively statistical “wavelike” fashion (i.e. interactions are governed by sinusoidal relationships). We can roughly conceive of this and our best mathematics for describing natural phenomena (calculus) is based on this, but it takes many years of experience and training for our primate brains to actually conceptualize the world in such terms even crudely, and nearly all of our practical systems for interacting with the world reduce them to discrete, well-bounded objects, even those that clearly aren’t.
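As an aside, those five constants are not an arbitrary list: Euler’s identity ties all of them together in a single equation, which a couple of lines of Python can check numerically (just a throwaway sketch to make the point concrete):

```python
import cmath

# Euler's identity, e^(i*pi) + 1 = 0, ties all five constants
# (0, 1, e, i, pi) together in a single equation.
print(cmath.exp(1j * cmath.pi) + 1)  # ~0j, up to floating-point error
```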

We may think we’re all so smart for being able to build cities, count past ten, domesticate animals, and so forth, but even a cursory review of the history of natural science over even so short a period as the last century readily demonstrates that we are almost completely clueless about the world, so much so that we are constantly surprised and amazed by the most obvious phenomena, and not for lack of evidence but from an inability to conceive of a larger universe not organized on the processes we experience in our everyday, terrestrial-bound existence. We have no reason to believe those revelations of our ignorance won’t continue ad infinitum, or at least until we grow past the point where our minds still function in their current primitive state.

Stranger

I’m sure there are, but I don’t know how we would discover or conceive of them. Most animals are incapable of tool making, learning by observing other animals, passing knowledge down through language and culture, etc. A higher intellect could do things we can’t even fathom, but I have no idea what they would be.

This is a fascinating question that I’ve often thought about. But I usually come at it in terms of how our thinking would compare to that of our distant, pre-sapiens ancestors.

Imagine that our ancestor could speak, but did not have the capacity to speak about the future. IOW, they could say “we hunt”, meaning “let’s all go hunting right now”, but they could not say “we will hunt tomorrow”. That represents a gap that would give us a serious advantage.

Now, how would you imagine some post-sapiens species to have a mental advantage over us? Maybe they would be able to think about X number of different things at the exact same time. Not switching back and forth real fast, but actually having, in effect, a multiprocessor brain.

Agree with Stranger.

I’ve often conceived of the OP’s issue like this: I suspect we are *just barely* smart enough to be conscious. Neanderthals were less smart, but still conscious. As are many current humans with mental handicaps. So we’re clearly not, on average, at the absolute minimum mental capacity for consciousness. But I’d argue that on average we’re close to that lower bound.

Given arguendo that we are just barely conscious, there’s a heck of a lot of room above that for alternate beings that can think faster & better. And hence qualitatively differently.

If nothing else, many of the thoughts we have operate by analogy. If we were able to cram vastly more knowledge in there, we’d have different, more subtle, and perhaps better analogies. A difference in kind would arise from a mere difference in quantity.

“It is deplorably anthropocentric to insist that reality be constrained by what the human mind can conceive!”

The quote is from this discussion of the idea of “cognitive closure”: “that the operations the human mind can carry out are incapable in principle of taking us to a proper appreciation of what consciousness is and how it works.”

Whether or not this is true of consciousness, there doesn’t seem to me to be anything inherently impossible or unlikely about the hypothesis that there are aspects of reality that the human mind is incapable of grasping, but that other, nonhuman minds could grasp.

Indeed. There is absolutely no reason to believe that our brain evolved with the capability of understanding the universe. It evolved to understand the direct physical world we live in. In fact, when you think about it (ha!), it’s astonishing how much we are able to understand about the universe.

I think I have a slightly different take on how you would measure the levels of human intelligence. If our true capacity for intelligence is reflected in our capabilities, we really aren’t just a bunch of ambulatory bipeds with 4 lb “meat machines” stuck on top. Human mindedness and awareness exists as an integrated part of an ever-increasing cloud of information that is growing exponentially over time. In the next few hundred or few thousand years we should be able to construct AIs that are orders of magnitude quicker and more powerful than our own meat-based capabilities. What they bring to the table will be absorbed into and become part of human intelligence. I think this will be true for all carbon-based biological intelligences we may meet in the future. I don’t think we’re going to be left far behind the curve, as we will quickly absorb their knowledge and make it part of our own.

Beyond this, just looking at the physical side of the equation, it’s interesting to consider what evolutionary forces and pressures would drive the development of biological super-intelligence. We are at a place in our development as a species where being an outlier on the high-intelligence end of the bell curve does not necessarily correlate with greater numbers of children. Why would our genome need us to be more intelligent than the status quo at this point in our evolutionary development?

It’s often been pointed out that we have a poor grasp of probabilities. We can work our way through a question of probabilities, but we have no clear instinct for it.

So imagine a species that did have an instinct for it. They could just “feel” how likely a possibility is.
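To make that concrete, the Monty Hall problem is a classic case where calculation and instinct part ways: nearly everyone’s gut says switching doors shouldn’t matter, but a quick simulation (a rough Python sketch of my own, not anything rigorous) shows switching wins about two-thirds of the time:

```python
import random

def monty_hall(trials=100_000):
    """Estimate how often switching doors wins the Monty Hall game."""
    switch_wins = 0
    for _ in range(trials):
        car = random.randrange(3)    # door hiding the car
        pick = random.randrange(3)   # contestant's first choice
        # The host opens a losing door the contestant didn't pick,
        # so switching wins exactly when the first pick was wrong.
        if pick != car:
            switch_wins += 1
    return switch_wins / trials

print(f"Switching wins ~{monty_hall():.1%} of the time (gut says 50%)")
```

The 2/3 answer is easy enough to derive on paper, but almost nobody *feels* it; a species with a native instinct for probability presumably would.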

I wonder if it’s possible to frame this question in a way that is meaningful. First you have to decide what it means for one mind to be of a “higher order” than another. The OP seems to hint at a definition: that an order n+1 mind is inconceivable to an order n mind. That seems necessary to exclude things that are just obviously generalizable extensions of stuff we already grok: minds that do things like ours, just faster, or more parallel, or in ways that enable them to more quickly or natively understand things we’re already capable of understanding. But almost by definition, that seems to exclude us from being able to frame or recognize such an intelligence.

It doesn’t help that “intelligence” and “cognition” are ideas we just made up.

Speak for yourself! I’d rather discuss foreign policy with a golden retriever than with some people I know. :wink:

Closer to 3 lbs than 4.

I think one level possibly not too far up from our own is a level that allows for communication without audio or visual cues. Basically, mind reading.

There already is one. It’s called, “I fart in your general direction!”

It’d be cool to theorize about what higher cognition would entail. But that comes back to the fact that I’m sure we can’t even conceptualize what higher cognition would be beyond a certain point.

However, I could see traits like being able to understand intuitively how to attain a goal (versus how we have to use endless trial and error, the scientific method, etc., to manage the same goal-oriented behavior). You just have a goal, and you intuitively understand what you need to do to achieve it. Kind of like how people who lack social skills have to make a concerted, intelligence-driven effort to socialize effectively, but many non-autistic people just intuitively understand what they need to do in those situations.

Or the ability to think in parallel, like John Mace said: the ability to think about infinite things simultaneously, then collapse them down into the correct path (this could be tied to the above: you think of infinite ways to achieve your goal simultaneously and instantly, then collapse them down to the paths that will actually work). Perhaps, if all realities exist simultaneously, a higher cognition could just intuit what actions will convert reality A into reality B (reality B being the one where the goal is achieved), while we would have to conduct a ton of experiments and research on how reality works to clumsily move from A to B.

Maybe there is some way to intuitively understand nature and science that is closed to us. We understand them piecemeal, through linear hypotheses and experiments that build over time. Maybe a higher cognition could comprehend them whole by simple examination, while we are like the four blind men trying to understand an elephant piece by piece.

There already is cognition beyond the level of the human mind. It’s called multiple human minds. We developed to the point where we can transfer complex abstract concepts from one mind to another, and preserve them past the lifespan of any one mind… and once you’ve got that, the sky’s the limit.

The old RuneQuest role-playing game system had a thing where a person, achieving a lot of power and magic, could acquire “Activated Will.” He would have total Free Will: complete and total volition and self-control.

(You know how we have to struggle to diet, or fight to give up tobacco? How about if that were simply as easy as flipping a light switch?)

This is a “higher order” of human intelligence…which we can pretty easily conceive of.

The other kind, the kind we can’t even conceive of…well, it’s kinda hard for us to talk about it, eh? I’m entirely willing to believe such a thing might exist. John Mace’s linguistic example is, I think, very illustrative. There might be new “cases” or “tenses” of grammar that we just don’t understand yet.

(Time travel might introduce us to such four-dimensional language tenses. “I will be to have been hunting.” Where’s Dan Streetmentioner when we really need him?)

There’s a passage in James Alan Gardner’s Radiant that discusses this in a way that I’ve always found memorable:

[QUOTE=James Alan Gardner]
…You don’t grasp the nature of superior intelligence. Suppose you create a brand-new intelligence test. Give it to average humans, and they’d finish in, say, an hour, probably with a number of mistakes. Give it to the most intelligent people in the Technocracy, and they might finish in half the time, with almost no mistakes. But if you approach the Balrog with your test in hand, it’ll say, ‘What took you so long? I’ve been waiting for you to show up since last Saturday. I got so bored, I’ve already finished.’ Then it will hand you a mistake-free copy of the test you just invented. The Balrog can foresee, hours or days or months in advance, exactly what questions a person like you would invent. […] Look, when we classify the Balrog as ‘beyond human intelligence,’ we don’t mean it’s faster or more accurate in mundane mental tasks. We mean it can do things humans can’t.
[/QUOTE]