If the average high school (or college) educated person’s brain were a computer, how much RAM would it have? What would its processing power be?
The architecture’s too different to make a meaningful comparison. We only run at about 3-4 MHz, but we have billions (if not trillions) of processing units.
As far as the RAM goes, I don’t know. I don’t think that there’s anything that corresponds exactly (maybe short-term memory?).
Sorry, I meant to ask about hard drive space, not RAM. I got distracted when writing the OP and inadvertently confused the terms.
What ultrafilter said. The way a mind remembers is fundamentally different from a computer’s RAM or HD.
Nah, there has to be some form of long-term storage, and it’s entirely reasonable to ask how much it can store. Any answer that we can quote at you this century is a pure guess, but there is an answer.
I would have to agree that the structure (and purpose) of the two systems is too dissimilar to allow meaningful comparisons in terms of storage (memory). What do you mean when you say that you store a file on a computer? I would think that the reasonable standard is to say that the file is stored if you can reproduce it at any time exactly, i.e., bit-for-bit. But the world we live in is largely analog. To say that we remember a particular event (even a very recent one) is not to say that we could reproduce everything about the event exactly…we generally have a vague notion of what happened, and that’s all.
Think about one of the most powerful memories you have. Now imagine that I showed you a picture that was an exact image of what you saw at that moment in time, and I showed you another picture of the same image but just shifted to the side by a very small amount. Almost no one would be able to do better than chance at picking which image is the right one. But, we would never accept such a standard of “memory” in a computer, where we couldn’t even identify the correct file when it was shown to us.
So, how do you compare the two ways we “store” things? The vagueness of even our best memories doesn’t lend itself well to description in terms of bits, which is the only metric we have to work with on a computer.
I would say (as far as memory capacity of the human brain in terms of gigabytes etcetera) that there’s a real answer, but that there’s no effective way of measuring or even guessing it with the knowledge of the brain that’s at our disposal, for the reasons that crozell and ultrafilter mentioned. :]
Can’t we remember 7 bits in our short-term memory, or was that 7 numbers? So for RAM I guess that would be however much RAM is needed to store 7 numbers.
One comparison we could make is between the human eye and a digital camera. The human eye can detect a bit over 16.7 million different colors (24-bit color depth), and apparently has a resolution of about 200 dpi at close range. (This means that you would be able to distinguish individual dots in a grid if there were fewer than about 200 dots per inch, again at a range of about 12"). This probably ends up corresponding to somewhere between 12 and 24 megapixels for the whole visual field. Note that the human visual field is substantially larger than a camera’s, so it needs a higher number of pixels to provide the same resolution. It’s a better sensor than even a professional digital camera. (Comparing the human eye to a traditional camera lens, one estimate I found said the eye has a focal length of ~22-30 mm, and an f-stop of about f4 in darkness and f30 in bright light.)
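Here’s a back-of-the-envelope version of that megapixel estimate. The numbers are my own rough assumptions (especially the field-of-view figures), not anything authoritative:

```python
import math

# Rough "megapixel" estimate for the eye, assuming ~200 resolvable dots per
# inch at a 12-inch viewing distance and a useful visual field of roughly
# 120 by 90 degrees (both figures are assumptions for illustration).
dpi = 200            # resolvable dots per inch at close range
distance_in = 12     # viewing distance in inches
field_w_deg = 120    # assumed horizontal field of view
field_h_deg = 90     # assumed vertical field of view

inches_per_degree = distance_in * math.tan(math.radians(1))
dots_per_degree = inches_per_degree * dpi            # about 42 dots per degree

pixels = (field_w_deg * dots_per_degree) * (field_h_deg * dots_per_degree)
print(f"~{pixels / 1e6:.0f} megapixels")             # comes out around 19 MP
```

Shrink or widen the assumed field of view and you slide around inside that 12-24 megapixel range, which is why the estimate is so loose.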
The brain is not laid out like a computer; it performs its calculations very differently. It performs a huge number of ‘instructions’ simultaneously. It is very much capable of multitasking – though the conscious part of the mind can generally only perform one broad task at a time, the task involves a number of different sub-tasks such as controlling muscles and processing vision. And it must continue to provide basic life-supporting instructions at all times. It also operates at a voltage so low it would make Intel envious – but then, if you could make a notebook powered by sugar you wouldn’t have to worry about battery life.
Regarding memory, the brain also doesn’t work like a computer, storing images with JPEG compression, audio as MP3 or WMA, and text as ASCII. It lacks the fidelity and reliability of a hard drive, but access is instantaneous, and it can probably store far more raw data than a hard drive can. (A 100 GB hard drive would fill up rather quickly if you used it to store uncompressed 20-megapixel, 24-bit images.)
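To put a number on that parenthetical (plain arithmetic, nothing more):

```python
# How quickly a 100 GB drive fills with uncompressed 20-megapixel, 24-bit images.
bytes_per_pixel = 3                          # 24-bit color = 3 bytes per pixel
image_bytes = 20_000_000 * bytes_per_pixel   # 60 MB per uncompressed frame
drive_bytes = 100 * 10**9                    # 100 GB

print(drive_bytes // image_bytes, "images")  # roughly 1,666 images
```

At one "snapshot" per second of waking life, that drive would be full in under half an hour.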
I am curious to know why you feel that there is a real answer in terms of bits out there (that we just don’t know yet). If we aren’t representing digital data in our memory, how do bits have any meaning here?
There are a finite number of neurons, and so comparisons with digital logic are tempting. But the more we learn about them, the more we understand that they are not binary elements. There are very subtle interactions happening, and for some units, behavior can even change with time. They use analog signals (though likely with some sort of point process communication happening), and the binary paradigm just doesn’t seem to fit.
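To give a feel for what “not binary” means here, a toy leaky integrate-and-fire neuron (a standard textbook model, not something anyone in this thread proposed) accumulates analog input continuously and only emits a discrete spike when it crosses a threshold:

```python
# Toy leaky integrate-and-fire neuron: the membrane "voltage" is an analog
# quantity that decays over time; the output is a discrete spike train.
# Parameter values are arbitrary illustrative choices.
def simulate(inputs, leak=0.9, threshold=1.0):
    v = 0.0
    spikes = []
    for x in inputs:
        v = leak * v + x          # analog accumulation with decay
        if v >= threshold:        # discrete event only when threshold is crossed
            spikes.append(1)
            v = 0.0               # reset after the spike
        else:
            spikes.append(0)
    return spikes

print(simulate([0.3, 0.3, 0.3, 0.3, 0.0, 0.6, 0.6]))   # -> [0, 0, 0, 1, 0, 0, 1]
```

Even this cartoon version isn’t a logic gate: the same input can produce a spike or not depending on the cell’s recent history.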
There may be special cases (e.g., telling someone to remember a telephone number) where a bit interpretation could apply, but that situation hardly characterizes how we normally use our memory in the day-to-day.
Hans Moravec at CMU has given this a lot of thought.
He thinks a computing power of 100 million MIPS ought to do it. A VAX 11/780 (the workhorse minicomputer of the 1980s) is about 1 MIPS. A 1.5 GHz Pentium seems to be about 1500 MIPS according to this – the average computer user has a shocking amount of computing power at their fingertips from the standpoint of 1980s computers. Anyway, it looks like you’d have to network 30 or 40 thousand 3 GHz Pentiums together right now to meet the 100 million MIPS mark. But that’s the whole enchilada, including vision and all other sensory processing. If you want just the logical reasoning part, which is pretty recent, you could get by with much less.
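Checking that arithmetic with the figures from the post (which are themselves rough):

```python
# Rough check of the "30 or 40 thousand Pentiums" claim using the post's numbers.
brain_mips = 100_000_000      # Moravec's estimate: 100 million MIPS
pentium_3ghz_mips = 3000      # scaling the ~1500 MIPS figure quoted for 1.5 GHz

print(round(brain_mips / pentium_3ghz_mips), "machines")   # about 33,000
```

So 30-40 thousand is the right ballpark, with plenty of slack for how approximate the MIPS figures are.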
Moravec thinks that with advancing processor technology, we should be able to economically emulate human computing power sometime in the 2020’s. This is based on an extrapolation of the rate of processor development now, and may be optimistic. (In order to keep it up, Intel (for example) is going to have to convince users that they need processors with many tens of thousands of MIPS on their desktops. Fortunately, there will always be releases of Microsoft Word to drive demand.)
I don’t agree with a lot of this. Most people have a difficult time remembering a couple of new phone numbers. Furthermore, most people have very little capacity to remember detailed images. Probably the best way to look at human memory in terms of computer memory, is that the human brain has a built-in form of extremely lossy compression. In other words, what the human brain remembers is almost never remembered without data loss. The human brain can remember things like entire languages, but that is only because languages are constructed in a way that minimizes the loss due to compression.
Also, access isn’t anywhere near instantaneous. Ever played a trivia game?
I imagine that the information theory concept of a “bit” could still be used to compare the information content of memories to the storage capabilities of a computer (at least theoretically). A bit, in this sense, is a measure of how much information is contained in a single event. A coin-flip event has one bit of information associated with it. An example that usually helps is to consider the English language. Although we have 26 letters to choose from, the average information content of any given letter is only about one bit. That means that, given a string of letters that makes up part of a valid English phrase, you can correctly guess what the next letter should be about half the time.
Randomly selected strings to support this claim (googled for “random words,” then closed my eyes and selected text)
“the programme notes we’re told that this play is partly based on a five-episode, 290-minute television adapta”
“analysis techniques, such as SpamBayes and SpamAssassin. These filters examine incomin”
etc. It’s clear that the next letters are “T” and “G”, respectively. Obviously this isn’t exhaustive, but try it yourself and see how often you can know what the next letter will be. It should come out to about half the time.
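If you’d rather let a script do the guessing, here’s a rough sketch: build a trigram character model from some text and see how often its top guess for the next letter is right. (My own toy, not rigorous; the built-in string is just a placeholder, and scoring the model on the text it was trained on flatters it, so swap in a large corpus and a held-out test set for an honest number.)

```python
from collections import Counter, defaultdict

# Toy next-letter predictor: given the previous two characters, guess the most
# common continuation seen in the training text. With a large English corpus
# the top guess comes out right roughly half the time.
text = ("a short placeholder sentence stands in here so the example runs on "
        "its own; replace it with a few megabytes of real english text").lower()

counts = defaultdict(Counter)
for i in range(len(text) - 2):
    counts[text[i:i + 2]][text[i + 2]] += 1

hits = total = 0
for i in range(len(text) - 2):
    context, actual = text[i:i + 2], text[i + 2]
    guess = counts[context].most_common(1)[0][0]
    hits += (guess == actual)
    total += 1

print(f"top guess correct {hits / total:.0%} of the time")
```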
You could do a similar thing with memories. Even if one cannot distinguish between two very similar images, there certainly are many images that one could identify as not being representative of a given memory. Using a large enough statistical sample, you could determine how much informational content there is in an average memory of a single image. Extrapolating that out to the total data storage of the brain is left as an exercise to the reader.
A lot of people try to apply information theory to studying neural systems (me included), but there are plenty of pitfalls. There are subtleties that matter a great deal, and some applications outside of what the theory was intended for do not always make sense.
That being said, you are talking about two different paradigms. The examples you give are fundamentally discrete, and common info theory can be used to compare some aspects. Extrapolating those examples away from the digital world would lead you to conclude that there is infinite information involved (i.e., it would take an infinite number of bits to describe an analog signal).
Your example about being able to rule out SOME images as being relevant still leaves an infinite number of images left as possibilities. You simply aren’t geared for trying to exactly reproduce details, and most of those details are not discrete in nature anyway. Our memories are a combination of abstract notions (I might remember that I was in my office, but not exactly where I was standing) and emotions (which can only reasonably be described on a continuum).
If you’re familiar with information theory, this (to me at least) is like the difference between common information theory (digital communications) and rate-distortion theory. The stuff that everyone talks about is digital comm, which has to do with compressing bits and getting them from place to place. Rate-distortion theory (in the general sense, not the special case of source coding) is about analog communications…it accepts that you could never do it perfectly, but it lets you know when you can achieve your goals for an acceptable distortion. RD is really beautiful, but it can be difficult to understand and it’s not as common as digital comm. Check out the end of Shannon’s paper, or the review by Kolmogorov if you’re not familiar with it. I personally think RD theory is the more appropriate paradigm for neural communication, perception, etc., but because you are not trying for perfect reconstruction, it’s just not in the same ballpark as the typical info theory and so it’s difficult to compare.
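The textbook special case (not something from this thread, just the standard worked example) is a memoryless Gaussian source with variance $\sigma^2$ under squared-error distortion, where the rate-distortion function has a closed form:

$$R(D) = \tfrac{1}{2}\log_2\!\frac{\sigma^2}{D}, \qquad 0 < D \le \sigma^2$$

Read it as: you need about $R(D)$ bits per sample to reproduce the source with mean squared error no worse than $D$. As $D \to 0$ the required rate blows up to infinity, which is exactly the earlier point about needing infinitely many bits to describe an analog signal perfectly.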
Long answer:
Another reason why estimating the storage of our brain is impossible is that there is a lot more than pure data stored there. Memories are linked to other memories. If you program, consider how much more memory is required to store a string plus links to hundreds of other strings than just the string itself. Now, if instead of storing those links you recreated them by searching whenever you needed them, the storage required would be less. How does our brain work? I don’t think we know, so until we do it is going to be hard to say how much storage we have.
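A crude sketch of that point (sizes are Python-specific and the numbers are arbitrary; the only point is that the cross-links quickly dominate the raw data):

```python
import sys

# A "memory" that is just a string, versus one that also carries references
# to hundreds of associated memories. The link bookkeeping quickly outweighs
# the string itself.
memory = "the smell of a particular kitchen on a particular morning"
links = list(range(300))   # stand-ins for 300 associated memories

raw = sys.getsizeof(memory)
linked = raw + sys.getsizeof(links) + sum(sys.getsizeof(x) for x in links)
print(raw, "bytes for the string alone vs", linked, "bytes with its links")
```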
There’s also some sort of redundancy, so the same content takes up still more equivalent storage. You can trade off CPU cycles against storage, as the compression examples showed, so until we know how the brain makes that tradeoff, that is yet another level of uncertainty.
Short answer: billions and billions of bytes.
Here’s an analogy to think of how the brain works:
Recently, an independent team released a game called .kkrieger. Its graphics were roughly on par with Quake 3 (the previous generation of graphics engines), and it had pretty hefty system requirements. However, the entire game - textures, sounds, level information, game engine - was in a file only 96 kilobytes in size. The trick is procedural generation: instead of storing the content, the file stores instructions for recreating the textures, sounds, and levels in RAM each time the game loads.
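A toy version of that idea (nothing to do with the actual .kkrieger code, just the general shape of procedural generation): store a short formula and a few parameters instead of the pixels, and regenerate the image on demand.

```python
import math

# Toy procedural texture: a 256x256 grayscale pattern regenerated on demand
# from one small formula and three numbers, instead of ~65 KB of stored pixels.
def generate_texture(size=256, fx=0.07, fy=0.11, warp=3.0):
    texture = []
    for y in range(size):
        row = []
        for x in range(size):
            v = (math.sin(fx * x) + math.sin(fy * y)
                 + math.sin(warp * math.sin(0.02 * (x + y))))
            row.append(int((v + 3) / 6 * 255))   # map roughly [-3, 3] to [0, 255]
        texture.append(row)
    return texture

tex = generate_texture()
print(f"{len(tex)}x{len(tex[0])} pixels rebuilt from a handful of parameters")
```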
Our brains work on similar approximation scales, logging away repetitive “extraneous” information into a general category. Thus, when you have a memory of seeing, say, a tennis match, the crowd in the background will probably be listed in your brain as “genericcrowd1.jpg” (or .gif if you want some simple background animations). The meat 'n' potatoes of the memory, if you will - the tennis match itself - will be specific, but still, some details - like “clothes”, or whatnot - will probably not be recalled thread-for-thread…
Here is an interesting subtlety of the human mind:
I hvae no dbuot that all of you will be albe to raed tihs. It trnus out taht olny the fsrit and lsat ltretes of wdros need be in the ccerrot oerdr. The rset can be aumtialtcaloy rgnrzieeoad by the biran. Wroks wtih any pasgsae of txet! Porsime!
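If you want to try it on a passage of your own, here’s a quick throwaway scrambler (mine, not from the post) that shuffles only the interior letters of each word:

```python
import random
import re

# Shuffle the interior letters of each word, keeping the first and last
# letters (and all spacing and punctuation) in place.
def scramble(text):
    def scramble_word(match):
        word = match.group(0)
        if len(word) <= 3:
            return word
        middle = list(word[1:-1])
        random.shuffle(middle)
        return word[0] + "".join(middle) + word[-1]
    return re.sub(r"[A-Za-z]+", scramble_word, text)

print(scramble("It turns out that only the first and last letters need be in the correct order."))
```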
That is pretty strange, every time I see it. It seems I read it faster than if the words were spelled correctly.
I’ve seen that before too. It’s cool. Your brain also does something similar with visual stuff. When you see your wife, your kids, the house, the car, or anything familiar to you, you just see bits. The brain kinda assumes the rest. You can go out to your car at work, start it up and drive home, and go inside without actually seeing very much. As long as nothing changes, that is.
Computers, so far, can’t even come close to filling in information like that.
Peace,
mangeorge