Storage Space and the Human Brain?

About 30 years ago, computer scientists conceived of an amazing idea: building supercomputers on the molecular level, using things like DNA. If it were possible, they predicted, they could fit the entire sum total of computer intelligence at the time into the size of a sugar cube. Remember, though, this was 30 years ago. The sum total of computer storage, even in a cell phone now, would probably amaze people back then.

This got me to thinking about the human mind. It is one of the most complex things, if not the most complex thing, in the known universe. And it occupies the space of, oh I don’t know, maybe a large Big Mac hamburger, let’s say. But is it really the most efficient way of storing all the knowledge it potentially holds?

I recall hearing somewhere that by 2025, if not before, computers will become as smart as humans. My question: how much space could hold all the mental capacity of, say, Albert Einstein? Or Stephen Hawking? A sugar cube? Maybe even less?

(And BTW, although I conceived of this question some time ago, I honestly don’t recall posting it yet. I will do a search now, though, and get back to you all on that one. But for now, please discuss…)

:):):):slight_smile:

I would say the question is impossible to answer because human memory is “lossy compression,” like a JPEG image. Rather than store each pixel of data separately, a JPEG compression scheme looks for patterns and remembers those patterns and their locations in the image. It’s willing to compromise a little by picking patterns that are similar but not precisely the same. A lossy compression scheme therefore can’t record knowledge in full; certainly it won’t get any compression benefit unless you’re willing to lose or corrupt some portion of it.
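For illustration only, here’s a minimal Python sketch of the lossy idea: quantize values into coarse buckets so that many slightly different inputs collapse into one stored value. This isn’t how JPEG actually works internally (no DCT, no entropy coding), just the “willing to lose a little to save space” principle.

```python
import numpy as np

# Toy lossy compression: round pixel values to coarse buckets, so
# similar values collapse into the same stored pattern and the exact
# original can never be recovered.
def lossy_compress(image, levels=8):
    step = 256 // levels
    return (image // step) * step          # many inputs -> one stored value

def distinct_values(image):
    # crude proxy for how much we'd need to describe the image
    return len(np.unique(image))

original = np.random.randint(0, 256, size=(64, 64))
compressed = lossy_compress(original)

print(distinct_values(original), "distinct values before")
print(distinct_values(compressed), "distinct values after")
print("information lost:", not np.array_equal(original, compressed))
```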

So I think you’ll find a human brain stores a lot less original data than you think. If I show you a picture of an elephant, you might remember just two things, “picture” and “elephant,” using generic “stock photos” rather than the actual picture of the elephant you just saw. You certainly don’t remember each pixel of that elephant’s photo.

You’re mixing storage capacity and thinking / computing capacity. Those are utterly different things.

Einstein wasn’t special for having a large memory; he was special for having unique creativity and insight and reasoning ability.

A person needs enough of each to get along. More raw memory isn’t nearly as important to stand-out intellects as stand-out thinking, which is a mixture of computation, pattern recognition, and connection detection.

Computers have been smarter than humans for decades… in some ways. How do you measure “smarter”? How quickly you can multiply a pair of ten-digit numbers? Computers are great at that. How quickly you can write a proper Elizabethan sonnet? Most people can’t do that at all.

The human brain doesn’t work anything at all like a computer. I don’t know where you got your 2025 quote from, but computers now aren’t really any smarter than computers were in the 1960s. A computer is just a glorified finite state machine. It steps through four basic stages: (1) instruction fetch, (2) instruction decode, (3) instruction execute, and (4) write-back.

In the instruction fetch stage, the computer grabs a number out of memory and uses that as an “opcode”. There are different opcodes for all of the different things that the processor can do, like add, subtract, multiply, and divide, and non-math things like branching to a different address to fetch opcodes from based on certain conditions. In the decode stage, the processor fetches all of the data it needs to do whatever the opcode does. For example, if the opcode is add A + B, the computer now fetches A and B. The execute stage is where the computer carries out the opcode, so for the add instruction this is where the processor’s arithmetic logic unit (ALU) actually adds the two numbers together. And write-back is where it stores the answer somewhere.
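To make that cycle concrete, here’s a minimal toy sketch in Python; the instruction format, opcodes, and register names are invented for illustration and don’t correspond to any real instruction set.

```python
# A toy fetch / decode / execute / write-back loop.
memory = [
    ("LOAD", "A", 5),        # A = 5
    ("LOAD", "B", 7),        # B = 7
    ("ADD",  "C", "A", "B"), # C = A + B
    ("HALT",),
]
registers = {"A": 0, "B": 0, "C": 0}

pc = 0
while True:
    instr = memory[pc]               # 1. fetch: grab the next "opcode" from memory
    pc += 1
    op, *operands = instr            # 2. decode: work out what the opcode needs
    if op == "HALT":
        break
    elif op == "LOAD":
        dest, value = operands
        result = value               # 3. execute
    elif op == "ADD":
        dest, src1, src2 = operands
        result = registers[src1] + registers[src2]   # 3. execute (the "ALU" step)
    registers[dest] = result         # 4. write-back: store the answer somewhere

print(registers)   # {'A': 5, 'B': 7, 'C': 12}
```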

Modern processors are fancy in that they have things called pipelines. A simple four-stage pipeline would have a section of hardware that does the fetch, another section doing the decode, a third section doing the execute, and a fourth section doing the write-back. Then as soon as the processor finishes the fetch, it passes that on to the decode part, and the processor decodes it while the fetch part fetches a second opcode. Then the first instruction gets passed to the execute stage while the second instruction decodes and a third instruction is fetched. And so on. Because the pipeline has four stages, it is roughly four times faster than a processor that does each stage one at a time. Actual processors have very long pipelines, run multiple pipelines simultaneously, and do all sorts of tricks like re-ordering instructions to prevent pipeline bubbles. A pipeline bubble is when you have two instructions like C=A+B and then D=A+C. The second instruction needs the first instruction to pass completely through the pipeline before it can execute, because it needs to know what C is before it uses it in the second instruction. This makes the second instruction stall at the decode stage, creating a gap, or bubble, in the pipeline where nothing is being executed.
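Here’s a rough sketch of that bubble, assuming a simplified four-stage schedule with no forwarding; the scheduling rules are made up for illustration, but they show the dependent instruction stalling in decode.

```python
# Toy pipeline schedule: D = A + C can't decode its operands until
# C = A + B has written back, so it stalls, creating a bubble.
STAGES = ["fetch", "decode", "execute", "write-back"]

def schedule(instructions):
    """instructions: list of (name, dest_register, source_registers)."""
    ready = {}                      # register -> first cycle its value is usable
    table = []
    for i, (name, dest, sources) in enumerate(instructions):
        fetch = i                   # one instruction fetched per cycle
        # stall in decode until every source register has been written back
        decode = max([fetch + 1] + [ready.get(s, 0) for s in sources])
        stages = [fetch, decode, decode + 1, decode + 2]
        ready[dest] = stages[3] + 1
        table.append((name, stages))
    return table

program = [
    ("C = A + B", "C", ["A", "B"]),
    ("D = A + C", "D", ["A", "C"]),   # depends on C -> pipeline bubble
]
for name, stages in schedule(program):
    print(name, dict(zip(STAGES, stages)))
# The second instruction's decode happens at cycle 4 instead of cycle 2:
# cycles 2-3 are the bubble, where that pipeline slot does nothing useful.
```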

I could go on and on, but the point is while modern processors are very complex, at their core, they are still just crunching numbers in the same old way. They aren’t smart at all, and they aren’t getting smarter in any way. They are just getting better and faster at cranking numbers through their states.

Software is getting “smarter”, but I put that in quotes because software is still just the number cranker doing whatever the programmer told it to do. The smarts are in the programming, not in the machine. But again, the programming isn’t smart. It’s just clever. The program can emulate being smart because a programmer figured out a way to make it crank through numbers in such a way that it kinda looks like it’s smart. But software today isn’t smart at all. It has no self-awareness. It does no thinking at all. It’s no smarter than an old hand-cranked adding machine. It’s just faster and cranks through more numbers.

AI research in many ways is still in its infancy. Neural networks are very interesting, but one problem you run into is that they don’t do much until they start getting complex, and once they get complex enough to do interesting things, the interconnections and interactions are so complex that we can’t understand what they are actually doing or how they do it. Researchers are making a lot of advances in AI, but the problem they are trying to solve is hugely complex.
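As a toy illustration (not a claim about real AI research), here’s a tiny neural network learning XOR with plain NumPy. Even at this scale, the learned weight matrices don’t “explain” the solution in any human-readable way, and that opacity only gets worse as networks grow.

```python
import numpy as np

# Two-layer network trained by backpropagation to learn XOR.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)   # hidden layer weights
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)   # output layer weights
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):
    h = sigmoid(X @ W1 + b1)                     # forward pass: hidden layer
    out = sigmoid(h @ W2 + b2)                   # forward pass: output
    grad_out = (out - y) * out * (1 - out)       # backprop through output
    grad_h = grad_out @ W2.T * h * (1 - h)       # backprop through hidden layer
    W2 -= 1.0 * h.T @ grad_out;  b2 -= 1.0 * grad_out.sum(axis=0)
    W1 -= 1.0 * X.T @ grad_h;    b1 -= 1.0 * grad_h.sum(axis=0)

# On most random initializations this ends up close to [0, 1, 1, 0],
# but nothing in W1/W2 reads like an explanation of "exclusive or".
print(out.round(2).ravel())
```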

The human brain is the best pattern-matching machine in the known universe. If you pick up an apple, then out of all of the things you have ever experienced in your life, including plenty of other things that are red and round, your brain still manages to almost instantly identify it as an apple, and you almost instantly know what it should smell like, taste like, and so on. Computers, no matter how clever their programming, fail miserably when compared to a human at this task. Google has some pretty spiffy programming that sorts through images looking for things that it thinks are faces. It isn’t even trying to identify the face (as in figuring out whose face it is); all it is doing is trying to determine whether something is a face or not. And Google is pretty well convinced that the bush in front of my house is a face.
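For comparison, here’s roughly what the classic computer-vision approach to “is this a face at all?” looks like, using OpenCV’s bundled Haar cascade. This is a sketch, not Google’s system, and the image path is a placeholder.

```python
import cv2

# Classic face *detection* (not recognition) with a Haar cascade.
image = cv2.imread("photo.jpg")                      # placeholder path
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

# Each hit is just a bounding box; the detector has no idea *whose* face
# it is, and it will happily flag face-shaped bushes too.
for (x, y, w, h) in faces:
    print(f"possible face at x={x}, y={y}, size {w}x{h}")
```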

We don’t know how the brain does what it does. We do know that it breaks down information. One interesting case is Kim Peek, the autistic savant that Dustin Hoffman modeled his Rain Man performance after. One of the reasons that Kim had his amazing recall abilities was that the part of his brain that was supposed to break things down didn’t work properly. So his brain worked more like a computer, just storing and retrieving data. He could tell you an obscure fact he read in a book years ago, but he took everything literally and couldn’t understand the concept of humor. Studying people like Kim Peek gives us important clues about how our brains work: in these cases, certain parts of the brain don’t function, and observing the differences that result tells us something about what the working brain is actually doing.

The way that our brains break things down makes them horrible data storage devices. We store incomplete copies of things and easily mix up data from different things that just happen to be similar. We create false memories easily and can’t tell them apart from real memories.

I think by 2025 we’ll be lucky if we can mimic the intelligence of a very small-brained stupid animal. Figuring out actual smarts will be a huge task, and I really doubt that I’m going to see anything close to that in my lifetime.

It’s not raw number storage that makes Albert Einstein or Stephen Hawking. It’s the interconnections between those numbers that make them. Computers aren’t developing more interconnections, so they aren’t even on the right path to that kind of smart. Computers are just adding more and more cranks to their number cranking machines.

A dog a cat a turkey and a goose
Walked down the street to Gordon Ramsay’s house
They rattled on each latch and found one loose
Inside they found a startled pantry-mouse.

The cheese was gone but much beside remained
They dined on steak and pomegranate pies
The weight they’d lost was all-too-soon regained
All rosy-cheeked, a twinkle in their eyes.

Upstairs they looked and found no Ramsay home
So lingered they awhile and watched TV.
Inspired by lunch they wrote a dozen poems
Of love, and food, and love for food that’s free.

What hell awaits us thieves when we may die?
Hell’s kitchen may lack pomegranate pie.

Show-off.

I guess I can’t prove that I didn’t write it last week just in case.

Yeah, I’ve always got a couple of pre-written sonnets chambered just for situations like this.

If we’re going to look at things in that reductive way, then the human brain is just neurons firing. No more intelligent than a synapse making a muscle contract.

Yes, it’s difficult to define what we mean by “smart”, but it seems strange to me to define it in such a way that if you give me a black box apparently applying a general intelligence to a task, I can’t tell you if it’s smart without looking inside the box.

Again, by analogy, I’m not smart because I was programmed by evolution, and every skill I learn is just the interaction of that base program with my environment.

Obviously computers cannot learn anywhere near as flexibly as a human yet. But it’s very misleading to imply there is no qualitative difference between AI and learning algorithms now and, say, 20 years ago. There has been a real revolution in the type of problem that computers / software can solve, without the programmer herself knowing what the solutions are.

And on the sonnet again, by the way, it wouldn’t be too hard to program a computer (with a suitable database of English words) to write structurally-valid sonnets. Work a little harder at it, and you could program one to make the sonnets grammatical. Add a little more information to the database, and you could even make them thematic. And a computer so programmed could produce millions of sonnets in the half-hour it took Riemann to write about foodie animals.
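Something along these lines, say in Python, with hand-made word pools and a fixed ABAB CDCD EFEF GG rhyme scheme standing in for the “suitable database.” Everything in it is invented for illustration, and it only handles line count and end rhyme; proper meter would need a dictionary with syllable and stress information.

```python
import random

# Toy "structurally valid sonnet" generator: 14 lines, ABAB CDCD EFEF GG.
rhyme_groups = {
    "A": ["light", "night", "sight", "bright"],
    "B": ["rain", "pain", "again", "plain"],
    "C": ["sea", "free", "tree", "plea"],
    "D": ["fire", "desire", "choir", "higher"],
    "E": ["day", "way", "stay", "gray"],
    "F": ["heart", "start", "part", "art"],
    "G": ["love", "above", "dove", "thereof"],
}
template = "The {adj} {noun} goes drifting toward the {end}"
adjectives = ["quiet", "hungry", "gilded", "weary", "sleepless"]
nouns = ["turkey", "poet", "kitchen", "shadow", "morning"]

scheme = "ABAB CDCD EFEF GG".replace(" ", "")
for letter in scheme:
    print(template.format(
        adj=random.choice(adjectives),
        noun=random.choice(nouns),
        end=random.choice(rhyme_groups[letter]),
    ))
```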

And none of those millions would be anywhere near as good even as Riemann’s amateur effort. So who’s actually better at writing sonnets? Well, it depends on how you measure it.

According to whom?

So you two are disputing whether I should be damned with faint praise, or just damned?

And it fit into the trunk of a Buick Skylark.

Article, with photos of jars and jars of brains (no Abby Normal); book *Driving Mr. Albert*.

The joke of course being that Riemann actually is an AI.

Yes, but in his case AI means Albuquerque Intelligence.

He’s certainly passing the Turing test. Something that even a lot of humans can’t pull off. As a casual glance at YouTube comments will prove. :smiley: