The Addressable Brain

I have been toying with a storyline for a science fiction story: several thousand years in the future, all branches of human knowledge have advanced so greatly that no one person - however smart - can master even a single branch, due to the sheer volume of knowledge the brain must receive and assimilate. Society’s technological progress largely comes to a stop, as we can barely manage what we already know. Machine-based AI has been proven to lack genuine creative intelligence and is merely capable of acting upon deluges of data; for example, it could never have figured out Relativity on its own. Society therefore decides to create an “addressable” organic brain into which knowledge can simply be placed from an outside system.

At this point, I decided to do a reality check.

How far are we from a situation where knowledge becomes so vast it is humanly impossible to master even a narrow hyper-specialized branch?

Is it right to categorize AI as devoid of human genius / creativity? Could an AI spontaneously figure out E=mc²?

Can an organic brain be made “addressable” in the sense of my storyline?

I suppose if an organic brain could be grown addressable and implanted at the embryonic stage, the resulting child would no longer be considered human, IMO.

I’m only going to tackle the first two questions:

I think this is pretty unlikely to happen - how could a situation like that occur gradually? It would require existing experts in their fields to be laying down knowledge that they themselves did not comprehend.
If a person can discover something, another person can learn about the discovery.
In the history of invention and technology so far, every time a field grows too broad for individual experts to know all of it, it divides into smaller specialised units - that has already happened countless times on its own.

I don’t think it’s right to rule it that way, but you’re writing fiction. You get to decide.
Red Dwarf (set initially near the end of the 22nd century) had the premise that humankind had abandoned digital media as worthless and gone back to VHS. If you want to rule that AI will ‘stay dumb’, then do it.

I think it’s necessary to rephrase the question.

Because what you call a “branch” is arbitrary. Human knowledge has been bigger than one human could master for all recorded history and we subdivide as necessary.

But we could easily get to a point where progress requires pulling together observations from different strands, and no one is capable of doing that.
There are fields right now, such as nanotechnology, that require an understanding of specialized parts of chemistry, physics, and in some cases biology that until recently few people were trained for.

I could conceive of a hypothetical future where progress is slowed by waiting for the random chance that someone happens to be an expert in topics X109 and C711 and notices a way they are correlated.

My first thought was the educator tapes from the Sector General series, but there are probably a significant number of other examples - “We Can Remember It for You Wholesale” by Philip K. Dick (the inspiration for Total Recall) comes to mind. There isn’t much new under the sun.

In Asimov’s story, the shtick was that education tapes enabled humanity to churn out the vast numbers of technicians, engineers, laborers, scientists, etc. required to sustain a massive interstellar civilization, but because none of them had gone through the learning process, they became competent at best and unable to think creatively or combine and build upon existing knowledge to create something new. To get the small number of people capable of assimilating new material and advancing beyond textbook problems, motivated individuals had to be educated in the traditional way.

This is fiction, of course, but the idea is that it was not a matter of merely filling the storage capacity of a human brain, but to reinforce its connections. A smart person will quickly realize he or she is missing something, be it in his or her own special field of expertise or one more distant, and proceed to figure it out.

I don’t see why there’ll ever be a need for this to be implantable knowledge (a desire, I can see); we’re already good at handling external knowledge stores (Stewart & Cohen’s Extelligence), and that’s how high-speciality knowledge fields currently work. Plus there’s no need for one person to be a repository of knowledge - groups of humans interacting are how knowledge actually advances. E=mc² wasn’t some ex nihilo attack of inspiration particles on Einstein’s brain. The idea of mass-energy equivalence had been around for years, in aether theories. Not saying Einstein *wasn't* an exceptional genius, mind you. But he was the genius he was partly because of other people, not just because he had some vast knowledge tap feeding into his brain…

I’m thinking a scenario like that in “Ghost in the Shell” is probably the most likely. We augment our human brains, which already contain the capacity for creativity, with cyber enhancements that increase our mental capability and efficiency.

We’re in a golden age of communication and information. We’ve already come a long way in addressing the problem of having an ocean of knowledge by simply making it available at the touch of a few buttons.

As late as the 18th century, one individual could be familiar with most if not all medical knowledge; there was so little to study.

If we visualize the development of knowledge as a fractal, branching into ever more specialized modes, the amount of effort and mental resources required to master any hyperspeciality (the tip of the fractal) from the basics (the trunk of the fractal tree) will eventually become prohibitive.

I was actually referring to currently extant AI, not my hypothetical AI which I will make dumber than the ‘human’ protagonists.

To some extent, it’s actually the opposite: Thousands of years ago, the body of human knowledge became too large for one organic brain to handle, and so we invented a sort of indexable artificial intelligence that we could refer to as needed.

Sure; that’s not inconsistent with what I said though.

[QUOTE]
If we visualize the development of knowledge as a fractal, branching into ever more specialized modes, the amount of effort and mental resources required to master any hyperspeciality (the tip of the fractal) from the basics (the trunk of the fractal tree) will eventually become prohibitive.
[/QUOTE]

Yeah but branching doesn’t necessarily mean lengthening. You can fit an infinite fractal in a finite box.
And there are other reasons why I don’t think it’s likely to be the inhibiting factor for human progress.
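
To make that concrete with the standard textbook example (nothing specific to knowledge fields, just the geometry): each refinement of the Koch snowflake multiplies its perimeter by 4/3 while the whole figure stays inside the same box:

$$P_{n} = P_{0}\left(\tfrac{4}{3}\right)^{n} \to \infty, \qquad A_{\infty} = \tfrac{8}{5}A_{0}$$

The boundary grows without bound, yet the enclosed area converges to a finite value - infinite detail in a finite extent.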

Continuing with the metaphor, imagine there’s a branch with a million sub-branches. Well then, being the expert on that parent branch is probably a lucrative area for an individual. Being able to give others the tools and advice to navigate to the end of that branch quickly is very useful to society.
If humans really cannot find any way to improve how quickly a person can learn the parent branch, then parts of it can become black boxes (the metaphor is starting to creak now); some people can be the experts on the black boxes and others can simply build on them.

Finally, of course, in the future we might expect the human lifespan to be extended indefinitely. In that case, what one can learn in a lifetime just becomes a function of how much we can augment human memory / create AI. And that’s a different kind of story / limitation than the one you’re depicting.

Yes, we invented books, but they did not help much in some instances. Case in point: the Hindu scriptural base consists of well over a million scriptural texts. Most existed at some point in book form, but there are no living practitioners / scholars of the vast majority of these texts, simply because mastering certain texts required prior mastery of other texts / concepts. As a result, these texts are either extremely obscure today or have disappeared altogether - a loss of knowledge due to the human inability to imbibe knowledge readily available in text form.

A similar loss of a body of knowledge could occur in the future for scientific fields too, because mastering something requires mastering a prior branch of knowledge, which in turn requires mastery of a prior branch, and so on, leading to saturation of the human capacity to assimilate and apply knowledge. It would not help even if everything were documented and extensively cross-referenced.

Enter the addressable brain…

This metaphor does not really work. Say I wish to bridge Relativity and Quantum Science, leading to the development of a new hyperspecialized branch of knowledge - relativistic quantum science. I have to know both relativity and quantum science to do what I wish to do. It does not help if I merely have access to a relativity expert and a QS expert. The development of the “relativistic quantum science” branch of knowledge is stymied by human limitations on mastering the knowledge required - it does not help that I have access to a vast body of knowledge and to experts in their respective knowledge silos.

This is true, but it just means that you need to bring three experts together, not two: The first is an expert on relativity, the second is an expert on quantum mechanics, and the third is an expert on forming collaborations between disparate experts.

Or you can do it the way it’s done in the real world: You start with a thousand experts each on relativity and quantum mechanics, and each one learns a little bit of the other discipline. Whenever any of them finds any common ground between how they work (say, a way of setting up the math in their own field that’s a little more compatible with the other field), they share it with all of the others, until eventually a relativity guy who does his calculations in a quantum-ish way and a quantum guy who does his calculations in a relativistic-ish way meet in the middle (and in the meantime, they’ve both pushed their expertise in their own fields even further than it was before).

“Relativistic quantum science” exists; it is quantum field theory (quantum electrodynamics, or QED, for the interactions of light and matter, and quantum chromodynamics, or QCD, for quarks and gluons). Now, it is true that a full unification of general relativity (Einstein gravity) and quantum mechanics is not complete, because we don’t have a workable quantum field theory that encompasses gravitation, and even for complicated QED and QCD problems we cannot model the system with enough precision, but these are exactly the sort of problems with which synthetic cognition can aid us, because a workable effective field theory is probably too complex for a human brain to interpret.

To address the questions of the o.p., we are already at the point where it is essentially impossible to possess all of the working knowledge in a given field of specialty, which is why even within fields of medicine and physiology like cellular metabolism there are people who are expert in some particular aspect such as mitochondrial function or nutrient transport. We deal with this via one of the oldest of ‘social media’ technologies (collaboration) combined with the most advanced methods of communication (electronic journals and email), which can share ideas across the globe much faster than anyone can even process them. In this way we actually engage in a form of telepathy, albeit not the fantastical sort found in science fiction, but miraculous nonetheless.

Speculation about what machine intelligence will be able to do is exactly that: speculation. Current approaches to machine intelligence and cognition (self-learning neural networks) are only very rough approximations of what the human brain does, and they are not architected anything like the human brain in a functional sense. I think that even when machine intelligence begins to display independent reasoning and thought (which it arguably already has in a very restricted sense, by coming up with novel strategies in playing games like chess or solving physical problems), it will not reason the way human creativity does. Could a machine intelligence derive physical principles such as the mass-energy equivalence of special relativity? Certainly; given basic axioms and data, a sufficiently complex system should be able to rederive it easily, and in fact it is a pretty straightforward principle to deduce with the right information. It took humanity so long to come up with it largely because we didn’t observe relativistic effects in everyday life, and it took complex experiments (Kennedy-Thorndike, Michelson-Morley, and Ives-Stilwell, along with observations of astronomical phenomena) to demonstrate the essential phenomena of special relativity.
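
As a rough illustration of how little machinery the final step actually takes (a sketch only, starting from the relativistic energy expression rather than deriving it from first postulates):

$$E = \gamma m c^{2}, \qquad \gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}$$

Expanding for speeds much smaller than c,

$$E = m c^{2}\left(1 - \frac{v^{2}}{c^{2}}\right)^{-1/2} \approx m c^{2} + \tfrac{1}{2} m v^{2} + \cdots$$

The familiar kinetic-energy term appears as a correction to a rest energy mc² that remains even at v = 0. The hard part historically was the conceptual and experimental groundwork, not the algebra.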

The idea of an “addressable” human brain is somewhat more complex. The way the brain stores memories is not by collecting bits of data in some kind of fixed memory, but by creating preferred patterns of potentiation which are linked to the senses that caused the memories to form. Thus, those memory patterns are stored in different areas within the brain, and recalling them is actually a matter of essentially reliving them, which also means that the way they are recalled can alter or conflate those memories with others. Being able to access memories within the brain would require a comprehensive mental model of the brain in question, and it is dubious that we could ever simulate that on digital hardware, or even directly transfer one set of memory patterns into another brain, short of actually constructing a duplicate of the original brain by some process that is well beyond the state of the art.

Implanting an adult brain into an infant, if that were possible, would be an interesting experiment to say the least. While we often think of the brain as some kind of independent organ that plugs into the body via connections to the spine (not counting the optic and olfactory nerves that connect directly into the brain), in truth the brain requires sensory stimulation and will synthesize it if none is available. In this sense, the peripheral nervous system is really as much a part of the cognitive experience as the thalamus, cerebellum, and cerebral cortex. An adult brain connected to an infant peripheral nervous system would not get the kind of controlled stimulation it needs and would probably be oversaturated with unfamiliar stimuli. Setting aside that issue, whether you would consider such an organism human is a matter of definition; most people would consider that kind of alteration “posthuman” or “transhuman”.

Regardless, the human brain has some fundamental biophysical limitations in terms of how fast it can process information, what kind of sensory information it can visualize or reproduce, and the complexity of the insights it can communicate, and simply making a brain with “addressable” memory, even if possible, wouldn’t change that. We use computers today to augment those abilities, both in the trivial but laborious tasks of basic computation and in the complex tasks of data visualization and pattern recognition, which are beyond the means of even the smartest people. And it should be understood that many of the practical applications of physics today are vastly beyond the brain’s ability to resolve in a quantitative fashion. Applying the Navier-Stokes equations to a real flowfield, or calculating stress in a complex geometry under combined loading, is not something we can do with pen and paper, even though the individual calculations are not complicated.
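
For concreteness, here is the incompressible form of those flow equations, with ρ the density, p the pressure, μ the viscosity, and f a body force; each individual term is simple, but they must be satisfied simultaneously at every point of a real three-dimensional flowfield:

$$\rho\left(\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u}\right) = -\nabla p + \mu \nabla^{2}\mathbf{u} + \mathbf{f}, \qquad \nabla\cdot\mathbf{u} = 0$$

A practical simulation discretizes this into millions of coupled nonlinear updates per time step - exactly the sort of bookkeeping a computer handles well and a brain, addressable or not, does not.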

However, if you are writing fiction set several thousand years into the future, I wouldn’t worry too much about what may or may not be feasible given our current understanding of physics and cognition. We’ve advanced our knowledge of both with many revolutionary jumps compared to even a few hundred years ago, and no one can say with any certainty what ‘impossible’ problems may be resolved dozens of centuries hence. Rather than trying to tailor your story to make it plausible to existing and proposed science, use it as a springboard to examine the potential impact upon humanity assuming it will become feasible. Very little science fiction from the past has stood the test of time based upon its predictions of future technology, but the best of it holds up despite the implausibility of its conceits precisely because of how well it investigates human nature and society in response to technological change and scientific discovery.

Stranger

I’d say current AI is already capable of tasks that, if they aren’t actually ‘smart’ or ‘creative’, are pretty damn good facsimiles thereof - deep learning networks can ‘understand’ things and ‘invent’ other things - and AI assistants are already at the point where they are better than humans at finding relevant stuff.

Given sufficient opportunities for trial and error, and sufficient control of the real world, I see no reason why current or near-future AIs can’t be capable of something that could reasonably be described as ‘innovation’.

I’m reminded of the “cerebrostyle” in Sturgeon’s “Venus Plus X” which (as the name suggests) writes knowledge into a person’s brain.

Jack Kirby already figured out what would happen if you tried to mutate a human brain to create a super-intelligence: