"Intelligence does not necessarily require bulk, Captain."

But does it?

In John Scalzi’s novel, Old Man’s War, one of the enemies the human troops have to face is a race of Lilliputian-sized humanoids who are, apparently, as intelligent as humans. Is that possible, given what we know of neurology, biochemistry, and atomic theory? Can the necessary number of neurons for a functioning sentient brain be scaled down that far, into a braincase the size of a hazelnut, and still work?

I posted this in CS because it’s a science-fictiony question, but it might belong in GD or GQ; Mods, move if you think appropriate.

I doubt it.

I am not qualified to answer, but I also doubt that a small animal’s brain could manage the same complex functioning as a human’s.

…but I just want to say that the book is a fun, Heinleinian romp - very well done. I read the next couple of books - also good, but not quite as satisfying as the first…

As Isaac Asimov once pointed out, “A mouse-sized man would have a mouse-sized brain.”

In one of his memoir/story collection books, Niven recounts having written the storyline for the Star Trek newspaper comic strip. He invented a race, the Bebebebeque, which were “about the size of a bottle of Haig and Haig” and who were intelligent only in groups.

Well, while we don’t really know the exact limits, decreasing size permits less overall flexibility in forming neural connections, while presumably having the advantage of simpler connections. So a teeny-tiny species probably couldn’t be as smart as a human-sized brain could potentially be.

You’d think that, if the material making up the brain of your hypothetical tiny creatures were capable of great complexity within a small space, it might support great intelligence. The neural tissue of mammals almost certainly doesn’t meet that standard (or else we wouldn’t need such a huge brain, compared to our relatives and ancestors), but it might be possible with some different basic material.

I’m reminded of the aliens in, for instance, Robert Forward’s Dragon’s Egg and its sequel Starquake – made of neutron star material (and utilizing nuclear rather than chemical reactions), the creatures had a completely different physical basis from ours, and were much, much smaller, fitting intelligent brains into a fraction of the size.

It doesn’t need anything as extreme as that, though. Modern microcircuitry is pretty dense, showing that materials at “ordinary” (for us) temperatures and pressures can still achieve the complexity needed for intelligence in a smaller space than we use.
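For a very rough sense of scale (these are ballpark public figures, so treat them as assumptions): the brain packs on the order of 86 billion neurons into roughly 1200 cm³, while a recent logic process fits around 10⁸ transistors onto a square millimeter. A transistor is vastly simpler than a neuron, of course, so this compares packing density only, not capability:

```python
# Very rough density comparison using ballpark public figures (my assumptions):
# ~86 billion neurons in ~1200 cm^3 of brain, vs. ~1e8 transistors per mm^2
# on a recent logic process. Transistors are far simpler than neurons, so
# this says something about packing density only, not capability.
neurons, brain_cm3 = 86e9, 1200
print(f"neurons per mm^3: ~{neurons / (brain_cm3 * 1e3):,.0f}")   # ~72,000

transistors_per_mm2 = 1e8
print(f"transistors per mm^2: ~{transistors_per_mm2:,.0f}")
```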

Suppose that, instead of neutronic matter or artificial microcircuitry, it were that old SF standby, a (naturally evolved) silicon-based lifeform? Any reason to think silicon-based neurons and/or neuronic networks could be smaller than carbon-based?

Of course, neural transmissions (in Earth lifeforms, at any rate) don’t move anywhere close to lightspeed – so making a brain smaller could make it work faster. At least, that seems to work, to some extent, with electronic computers (whose signals do travel at near-lightspeed).
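Quick back-of-envelope in Python (the speeds and sizes are my own rough guesses, not measured values): shrinking a brain by a factor of ten cuts the worst-case crossing delay by the same factor:

```python
# Rough signal-transit comparison. Assumed numbers: fast myelinated axons
# conduct at up to ~100 m/s; electronic signals travel near lightspeed.
C = 3.0e8            # m/s, approximate signal speed in electronics
V_NEURON = 100.0     # m/s, rough upper end for myelinated axons

for label, diameter in [("human-scale brain", 0.15), ("hazelnut-scale brain", 0.015)]:
    t_neural = diameter / V_NEURON            # worst-case one-way crossing time
    t_light = diameter / C
    print(f"{label}: neural ~{t_neural * 1e3:.2f} ms, light ~{t_light * 1e9:.2f} ns")
```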

Obviously you missed it, but that’s what I was suggesting.

I’m not certain that it could be smaller – haven’t crunched any numbers – but it seems possible that using a different basis could get you smaller.

I think this is definitely possible, since even now we can probably come close to matching the raw processing power of most animals simply using our state-of-the-art electronics, even if we don’t know exactly how to simulate their brains’ functionality.

But on a more theoretical angle: since there’s a maximum amount of information that can be contained in a given volume, technically the answer to the OP’s subject line is “yes, it does require bulk”. You couldn’t even theoretically have a functional intelligence inside a Planck volume, for instance. :)
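For the record, the limit in question is presumably the Bekenstein bound, I ≤ 2πRE / (ħc ln 2). Plugging in a hazelnut-scale brain (the radius and mass below are my own guesses) gives a ceiling so generous that chemistry, not information theory, is clearly the binding constraint:

```python
import math

# Bekenstein bound: max information (bits) in a sphere of radius R with
# energy E is I <= 2*pi*R*E / (hbar*c*ln 2). Take E = m*c^2.
# R and m below are guesses for a hazelnut-sized brain.
hbar, c = 1.055e-34, 2.998e8     # J*s, m/s
R, m = 0.01, 0.01                # 1 cm radius, 10 g mass
bits = 2 * math.pi * R * (m * c**2) / (hbar * c * math.log(2))
print(f"~{bits:.1e} bits")       # on the order of 10^39 bits
```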

What if the information is coded in other dimensions, and the brain only acts as a wrapper/decoder? I know Banks uses this with his Minds and their hyperspace processors, but isn’t this where we’re heading with computers (i.e., a given bit having more possible states than 0 and 1)?

Intelligence, as far as we know, requires complexity. Complexity, it seems obvious, requires bulk - at least more bulk than corresponding simplicity. In my mind, that at least implies a lower limit.

The problem is, evolution is inherently messy. Our brains have all sorts of vestigial shit in them left over from our descent from fish, and thus are pretty large. I can’t see an intelligent brain evolving that efficiently. Maybe a genetically engineered brain…

I’m reminded of the animated film Fantastic Planet. On the planet of the Draag, most of the lifeforms are gigantic, and humans transplanted there from Earth long ago live an animal existence in an econiche similar to that of rodents. The Draag don’t even recognize humans as sentient; without tools or education, an individual human isn’t much more than a clever animal. They only discover their error when a series of accidents leads a band of humans to rediscover technology and civilization, forming what by Draag standards amounts to a hive mind.

The lower-limit physical requirements of complexity are far lower than many have suggested. It remains to be seen whether a divergent evolutionary process could produce more efficiently-sized brains. But even on this planet, with non-alien organisms, the brain-size-to-body-size ratio matters more than absolute size (elephants and whales aren’t many times smarter than we are, although their brains are many times more massive), suggesting that you can fit more smarts into a smaller package.
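One standard way to quantify that ratio is Jerison’s encephalization quotient, EQ = brain mass / (0.12 × body mass^(2/3)) with masses in grams. Using rough textbook masses (approximate values, not precise data), humans come out far ahead of much bigger-brained elephants:

```python
# Jerison's encephalization quotient for mammals:
#   EQ = brain_mass / (0.12 * body_mass**(2/3)), masses in grams.
# The masses below are rough textbook values, not precise data.
animals = {
    "human":    (1350.0, 65_000.0),
    "elephant": (4800.0, 5_000_000.0),
    "mouse":    (0.4, 20.0),
}
for name, (brain, body) in animals.items():
    eq = brain / (0.12 * body ** (2 / 3))
    print(f"{name}: EQ ~ {eq:.1f}")   # human ~7, elephant ~1.4, mouse ~0.5
```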

I think the Mandelbrot Set and Menger Sponge give the lie to that notion.

No, they do not. The Mandelbrot Set and Menger Sponge can only possess the complexity they do in a simulation - in reality they would run into trouble at least once they got down to the level of atoms and subatomic particles.

Computers are a lot less bulky than they were 40 years ago, yet they are more complex. While it may not be possible for a mouse-sized mammal to be as smart as a human, we can’t say with confidence that a smaller intelligence cannot evolve. Genetic algorithms often produce unpredictable results which defy the expectations of those running the experiment. If small-scale experiments can defy expectations over a short period, it seems arrogant to think we can predict real-world results when the time and scope are much larger.
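For anyone who hasn’t played with one, a genetic algorithm is just selection, crossover, and mutation in a loop. Here’s a toy sketch (the classic OneMax problem; every parameter is an arbitrary illustrative choice, not a tuned value):

```python
import random

# Toy genetic algorithm (OneMax): evolve bit strings toward all ones.
GENOME, POP, GENS, MUT = 32, 50, 100, 0.02   # arbitrary illustrative parameters

def fitness(g):
    return sum(g)  # count of 1 bits

pop = [[random.randint(0, 1) for _ in range(GENOME)] for _ in range(POP)]
for gen in range(GENS):
    pop.sort(key=fitness, reverse=True)       # rank by fitness
    if fitness(pop[0]) == GENOME:
        break                                 # perfect genome found
    parents = pop[: POP // 2]                 # truncation selection
    children = []
    while len(children) < POP - len(parents):
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, GENOME)     # one-point crossover
        child = [bit ^ (random.random() < MUT)    # per-bit mutation
                 for bit in a[:cut] + b[cut:]]
        children.append(child)
    pop = parents + children

pop.sort(key=fitness, reverse=True)
print(f"best after {gen} generations: {fitness(pop[0])}/{GENOME}")
```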

That wasn’t my point - do you not agree that a Menger Sponge, while less bulky, has more complexity than the corresponding cube?
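The numbers bear this out. Each stage of a Menger Sponge keeps 20 of 27 sub-cubes, so its volume is (20/27)ⁿ and shrinks toward zero, while its surface area (the standard closed form for a unit cube is 2(20/9)ⁿ + 4(8/9)ⁿ) grows without bound:

```python
# Menger Sponge, starting from a unit cube: bulk falls while structure grows.
for n in range(6):
    pieces = 20 ** n                               # solid sub-cubes remaining
    volume = (20 / 27) ** n                        # total volume -> 0
    area = 2 * (20 / 9) ** n + 4 * (8 / 9) ** n    # surface area -> infinity
    print(f"stage {n}: {pieces:>7} pieces, volume {volume:.4f}, surface {area:.2f}")
```

Less bulk at every stage, more surface and structure.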