Are humanoid intelligent extraterrestrials becoming more likely?

You don’t even need aliens to make that point - giant pandas have a “thumb” that is an extended wrist bone.

Slow Moving Vehicle: Excellent example. In the same vane (pun intended), you have flying squirrels and Draco volans, the flying lizard, which have evolved wing substitutes.

See what I said about split-brain - it is not the same as having two minds. And as I said, the bandwidth isn’t the issue; the signal-processing time is.

Plus, like I said, a Group Mind that breaks up when one of us leaves the room isn’t really a Group Mind. IMO.

Unihemispheric slow-wave sleep occurs in cetaceans and some birds. In these animals, an entire hemisphere of the brain has ‘left the room’, at least temporarily.

If group minds are possible at all (in the sense of a shared consciousness between two or more individuals), maybe the best analogy for the experience of a single member of such a group (when out of contact with the others) would be the conscious portion of a creature in USWS. Such a creature is capable of functioning, but at a lower competency than when it is fully awake.

Why?

What about a mind that goes to sleep every night?

You’re making a priori declarations about reality, but you aren’t giving any real reasons.

Because (for me) a Mind is a continuous unitary entity, regardless of the disposition of its components.

The Mind never shuts off - there are two parts to the Mind, after all. The subconscious has continuity.

I’m stating what things are like according to the model of Mind I find acceptable. You’re welcome to propose a different model of Mind - if you’re happy calling a discontinuous thing a Mind, that’s fine, but I think it’s a different class of thing.

I’m not saying such a thing wouldn’t serve its function, that such a gestalt entity wouldn’t be successful. I just balk at calling it a (unitary) Mind.

It seems likely to me that we will need to create an entire science of mind, once we eventually get enough data to do so; at the moment we have neurology, research into artificial intelligence, and the philosophy of consciousness, and these three disciplines barely intersect.

At some point in the middle-term future we will probably need a classification system for mind-types, one which includes gestalt minds as well as ones with and without self-awareness and self-directed expansion. Mind studies (Stanislaw Lem called it toposophy, the shape of wisdom) will probably be complex enough to engage our interest for the foreseeable (and much of the unforeseeable) future.

However it is quite possible that the range of mind-types which can evolve naturally on a planet (or elsewhere in the universe) is restricted compared to the range of mind-types which can be constructed artificially. So far I haven’t really been able to imagine how a group mind system could evolve naturally, although there are a few semi-plausible possibilities.

You’re getting annoyed at people not following your personal, more specific definition of a word?

Here, I’ll solve your problem for you. Always spell your idea of a mind mynd.
So when someone writes about “hive minds” they just mean the broad, vague definition of mind you might find in a dictionary or general article.
But you can declare that hive mynds are a contradiction in terms!

BTW, great post, eburacum45.

The reality is that human intelligence is a hive mind. A child must learn, acquiring knowledge from its elders. It appears that without complex social and linguistic interaction with others, humans are barely any more natively intelligent than most other mammals. Higher-order reasoning and learning reside largely within the language centers of the brain (e.g., without talking to others, you would probably never figure out on your own that the Earth is a big rock orbiting a really, really big nuclear fusion reactor).

Vinge’s Tines concept is a bit difficult to support, though. It is not obvious that intelligence depends so heavily on crossing a threshold of processing power that dividing the components would render the individuals possum-stupid. As it stands, creatures appear to learn in an enduring way, even to the extent that taking some of that capacity away may not be a fully unrecoverable loss.

I think I have the book you’re referring to. The picture was near the end, and the dinosaur was much less human looking than the links. I know exactly where it is - in the house we moved out of 17 years ago. Now I have no clue.

BTW, perhaps someone could define humanoid? The dinosaurs in the linked pictures are, but I don’t know if the one you remember would be considered humanoid or not.

Good point. Lewis Thomas* made a similar point in one of his books on the life sciences: a lone human falls short of his or her potential, just as a single termite does. We’re a social species, and a huge amount of our functioning involves interactions with other humans.

There are also two other models for distributed minds. First, the two hemispheres of the human brain: some people have even had the hemispheres surgically separated, yet they still possess only one mind.

Second, the “layers” of the human brain, as Carl Sagan described in “The Dragons of Eden”: the deepest, “reptilian” core, overlain by the mammalian level, with the huge human cerebral layer wrapped over that. We solve algebra problems with one, gossip with the baker with another, and fly into road rage with yet another.

The idea that a mind cannot be composed of separable contributing structures seems contradicted by observable fact.

ETA: *Oops, it might have been Harold Morowitz. Damn, been too long!

The word in dispute isn’t “Mind”; it’s “a”.

Well, if that’s the case, then the term is meaningless.

Not really, no.
We wouldn’t disagree about whether there are 1 or 2 coins on the table. And if we agreed on the definition of mind, then obviously we’d have no problem counting them.

The problem here is that you have come up with your own definition of mind, more specific than the way the term is normally used, and then got annoyed that people are using the term “hive minds”, which, according to your definition, is a misnomer.

It doesn’t make any sense to be annoyed by this though, as it’s a problem of your own making.

No, I haven’t.

According to my definition of the word “a”, yes.

Who said I’m annoyed? I hate the use of the word “hive” in the term, but I’m not annoyed by the idea of a group mind in SF - I just think it’s a bad name for an entity which is composed of several individual minds that are themselves independent entities (unless each unit goes catatonic when separated from the rest of the group, which has never been the case for the “hive minds” I’ve encountered in SF).

There might be several different types of collective mind to consider here:

The Hive Mind: a kind of mind that is only fully self-aware or sophont when the individual units are together. The individual units might be as sentient as an ant or a bee (the hive species we are familiar with), or, to use Olaf Stapledon’s ideas from the 1930s, as lowly as a micro-organism or as sentient as a bird in a flock. In these cases the individual unit might be capable of living outside the hive, but is considerably handicapped and may not be capable of reproducing.

The Cultural Mind: a grouping of individual beings who are each individually self-aware, conscious and sophont, but who can communicate in sophisticated ways to become a gestalt which is greater than the sum of its parts. As others have mentioned, our own human culture produces this kind of cultural mind, which builds on individual efforts to produce things no single human could create - and which has a long and increasingly sophisticated memory.

The Group Mind: a group of fully sophont entities which use some sort of direct brain-to-brain communication to become a different conscious being. Such a communication channel might evolve naturally on some worlds, although I am doubtful that it would. But I’m fairly sure that a sufficiently advanced civilisation could develop direct brain-to-brain communication in due course.

The number of connections required to fully link x brains pairwise grows quadratically - x(x-1)/2 links, as the sketch below illustrates - so a very large group mind would either have minimal connections between any two particular individuals, or would be completely immobilised by the sheer mass of connectivity. These two cases would result in quite different mind-types.
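To make that scaling concrete, here is a minimal sketch (in Python, purely illustrative - nothing here comes from the thread beyond the x(x-1)/2 counting argument above):

```python
# Fully connecting n brains pairwise needs one link per pair:
# each of the n members links to the other n - 1 members, and
# each link is shared by two members, so the total is n*(n-1)/2.
def full_connections(n: int) -> int:
    return n * (n - 1) // 2

for n in (2, 10, 100, 10_000):
    print(n, full_connections(n))
# prints: 2 1 / 10 45 / 100 4950 / 10000 49995000
```

A group of just ten thousand members already needs about fifty million direct links, which is the immobilised-by-connectivity case; the alternative is a sparse network where most pairs have no direct link at all.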

:smack:

:smack: is right if you can’t tell the difference between the word “mind” and the word “a”. :rolleyes:

No, the headsmack is justified, because you are honestly trying to suggest that where we differ is in the definition of the word “a”.

Firstly, because presumably you and I would count the same number of objects in most instances: 3 apples, 2 cups, 4 beetles, etc. Clearly, if we differ about whether a hive mind is one mind, the difference must be in how we define mind.

And secondly, even if it were true that we differed on the definition of the word “a”, that still doesn’t work as a retort to what I said. I was saying that your apparent issue with the concept of a hive mind was a problem of your own making, because you were inventing your own definition of mind and then objecting to other people’s use of the broader, established definition of the word.
If instead you’re redefining the word “a”, then…yeah, what I said before still applies.

There is a teratological spectrum of conjoined twins, with various degrees of merged brains. At one end of the spectrum, you have a baby with one brain but two faces. At the other you have a single walking body…with two distinct heads, each with its own brain.

At some point along that spectrum, you would have two functioning brains, still sufficiently merged to operate as a single mind.

These things happen in our own reality. We don’t need SF to demonstrate the possibility of distributed cognition.