The thing is, it’s not operating algorithmically. An algorithm is a specific sequence of logical steps that performs a computation. Von Neumann computers do everything by executing long lists of instructions. Brains do almost nothing by executing lists of instructions. Neurons fire in certain patterns and that triggers other nearby neurons to fire in other patterns but the sequence is not logical or deterministic.
Imagine you want to sort grains of rice into big and small. The algorithmic way is to pick up each grain one at a time, measure it, and put it in the appropriate bucket. That’s how computers work. The brain is more like dumping all the rice into a sieve and calling what falls out the bottom “small”. It’s much faster but much less deterministic. With the algorithmic method you’re guaranteed that you’ve accurately sorted all the grains. But with the sieve there might be some small grains mixed in with the large that just happened to never find a hole to fall through. Or there might be some big grains that slipped through a few holes that were drilled a little too big.
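Here’s a toy sketch in Python of the two approaches, with made-up grain sizes and a made-up amount of “slop” in the sieve; the point is just that the per-grain method is exact while the sieve trades accuracy for one fast pass:

```python
import random

grains = [random.gauss(5.0, 1.5) for _ in range(10_000)]  # grain lengths in mm (made up)
THRESHOLD = 5.0  # anything under 5 mm counts as "small"

# Algorithmic sort: measure every grain, one at a time.
# Guaranteed correct, but the work grows with every grain.
small = [g for g in grains if g < THRESHOLD]
large = [g for g in grains if g >= THRESHOLD]

# "Sieve" sort: one fast, sloppy pass. Grains near the threshold have some
# chance of landing in the wrong bucket (holes drilled a little too big or
# too small).
def falls_through(grain, slop=0.3):
    return grain + random.uniform(-slop, slop) < THRESHOLD

sieved_small = [g for g in grains if falls_through(g)]
mistakes = sum(1 for g in sieved_small if g >= THRESHOLD)
print(f"sieve mis-sorted {mistakes} of {len(grains)} grains")
```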
A huge torrent of sensory information flows into our brains every second. This data pours through a huge array of different sloppy “sieves” that massage it into a very crude and constantly evolving simulation of reality. This simulation is further tweaked by constant random input from the brain itself – emotions, fleeting memories, urges and desires – and it’s the output of this simulation that drives our behavior. It’s an extremely chaotic system where the underlying rules and triggers are shifting from moment to moment in response to new data (or maybe just minute fluctuations in blood sugar or oxygen levels). There’s no way to boil it down to a fixed set of algorithms.
A lot of it is the hardware. I suck at chess, because I suck at visual pattern recognition. I no doubt could algorithmically analyze a board position, but I’d take ten times longer than a good chess player, and no doubt miss stuff.
I believe I’ve read that many chess playing programs try to quantify the relationship between pieces and thus board position - but this is different from just seeing it, like chess masters do. This is the time to drag out the classic study (> 30 years old now) that showed that chess masters were much better at memorizing real board positions than a control group, but no better at memorizing the positions of randomly placed pieces. Is this something you can teach yourself? I’m sure you get better with practice, but no amount of practice would let me do that. I think chess masters are self selected, from the set of people whose brains are good at this.
I still don’t see how the fact that brains make “single bit” errors means the process isn’t deterministic, though. Computers make errors too. I save a JPEG image, re-open it a week later, and it is subtly changed. Does that mean the process isn’t algorithmic?
Obviously one must possess the aptitude, but my point was that the aptitude is not enough. A Radio Shack TRS-80 can’t run Microsoft Word while a modern PC can. But you still can’t edit documents without having the software installed. You must have a system capable of running the software AND have the software.
The answer lies in chaos theory. Brains are chaotic systems, which means that they’re extremely sensitive to small variations in initial conditions. A tiny variation in sensory input will trigger a cascading chain of neurological events that eventually alter the global state of the system.
So if a bit gets flipped randomly in a computer system it remains a localized error. Some little corner of your jpeg gets corrupted, and that’s it.
But if a bit gets flipped randomly in your brain the perturbation builds. A random thought about a girl you knew in third grade triggers a long chain of related thoughts and ruminations that can lead you quite far away from your original train of thought.
Now in some strict sense your brain is deterministic. Its behavior is entirely determined by a particular set of physical and chemical interactions. But because of the extremely non-linear nature of the system it’s impossible to predict how the system will evolve over time.
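You can see the same effect in a textbook toy system. Here’s a minimal sketch using the logistic map (a standard chaotic example, not a model of a brain): the update rule is perfectly deterministic, but nudging the starting value by one part in a billion - the equivalent of one flipped bit - sends the two runs to completely different places within a few dozen steps:

```python
# Logistic map x -> r*x*(1-x) with r = 4.0, which is in the chaotic regime.
r = 4.0
x, y = 0.400000000, 0.400000001  # identical except for a one-in-a-billion nudge

for step in range(60):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    if step % 10 == 0:
        print(f"step {step:2d}: x={x:.9f}  y={y:.9f}  gap={abs(x - y):.2e}")
```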
Because there are any number of things that are too large or complex for us to learn; try to memorize the Internet. We’ve only got so many neurons, after all, and they operate slowly.
As for self awareness, our self awareness is quite limited; most of our mental processes are unconscious. And various studies have shown that a fair amount of what we are “aware” of about ourselves is illusionary.
Just so you know, the JPEG compression algorithm is designed to make (theoretically) subtle changes to the image; this is not an error on the part of the computer. It’s the price you pay for jamming a 100k image into a 10k file.
Exactly on point about algorithms. Our brains do NOT operate algorithmically. Take math. It is extraordinarily difficult for humans to simply add two large numbers together, while a computer many times simpler than a human brain can perform the operation in microseconds. Yet a human brain can do things, like recognizing human faces, that are very difficult for a computer to do.
Strictly speaking, many programs implement heuristics, not algorithms. Consider a program that branches to something in a jump table based on a random number created by a generator seeded by a time-of-day clock. Nor are all programs guaranteed to give optimal results; consider one that finds workable solutions to NP-hard problems in a reasonable amount of time.
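Something like this hypothetical little program, say (the handler names are just for illustration): it’s a perfectly well-defined procedure, but because the generator is seeded from the clock, two runs started a second apart can take different branches:

```python
import random
import time

# Seed the generator from the time-of-day clock, then branch through a
# small "jump table" of handlers.
random.seed(time.time())

def handler_a(): return "took branch A"
def handler_b(): return "took branch B"
def handler_c(): return "took branch C"

jump_table = [handler_a, handler_b, handler_c]
print(jump_table[random.randrange(len(jump_table))]())
```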
Face recognition is tough - but even we don’t do it perfectly. It’s somehow hardware related for us, since my face recognition hardware is significantly worse than average. I could never be a politician. We don’t know the heuristics the brain uses yet, but not that long ago voice recognition wasn’t that good either. Now we have systems at least as good as the one that screened Floyd in 2001, if not as good as HAL’s.
But the changes are not consistent. I can save a file, re-open it, and save it again, and it will not have made the same changes as it did before. The file will even degrade by itself just being stored on the hard drive. If we are arguing that a brain’s propensity to do things in subtly different ways depending on environmental factors is evidence of non-determinism, then why would this argument not apply to computers? I’m suggesting that this is not proof that the mind isn’t deterministic.
But is the fact that we are good at certain things and bad at others proof that there is no underlying algorithm? Maybe algorithm is the wrong word, because it seems to have some technical meaning that I’m not aware of. I mean some sort of if/then procedure that is followed.

I imagine that brains are bad at adding large numbers together simply because that had nothing to do with survival at the time that our brains evolved. If it were necessary for survival, I suspect we would be extremely good at it. Recognizing faces is difficult for a computer, but it can be done. It’s probably similarly complex for a brain, but is somewhat automatic so that we are not aware of the difficulty. I’m not buying that we’re good at face recognition and bad at math because of some qualitative fundamental difference in how our brains operate.

I think people tend to make the mistake of believing that because we don’t make a conscious calculation of a thing, that thing must be simple. But I imagine that, were we able to see the “circuitry” of that brain process, it would be quite complex. I don’t see any reason to believe that there is not some underlying “program” in the mind that performs the task. Most of us can’t add numbers like that because we don’t possess that “program” in our brain. There are a small number of people who are exceedingly good at making numerical calculations - as good as an electronic calculator.
Nah, I think you’re OK using it that way. (Although characterizing it as simply “if/then” might be going a bit overboard, even if it’s strictly true at the lowest level). Technically, an algorithm is simply something that is computable; that is, whatever can be executed by a Turing Machine (I’d hope that someone corrects me if they think that’s not quite accurate).
Massive parallelization, asynchronicity, etc. don’t affect that; they just make the process more complex, and therefore much harder to predict/control. Which is generally a bad thing as far as engineering and science are concerned.
If you keep on recompressing it, I suppose it will, but that’s because you’re repeating the same compression process on something that has already been altered by the previous pass. Even here, the process is still completely deterministic - a sufficiently clever algorithm could predict what will happen to the image after it’s been recompressed twenty times, without actually doing the compression.
If you start with 100 copies of the same bitmap and compress them all using the same JPEG compression algorithm, with the same settings, you’ll get 100 compressed images that are all exactly the same.
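If you want to check that claim yourself, here’s a rough sketch (assuming you have the Pillow imaging library installed; the image and quality setting are arbitrary): compress the same bitmap 100 times with identical settings and hash each result. On a given library version you should get exactly one distinct output:

```python
import hashlib
import io
from PIL import Image

original = Image.new("RGB", (64, 64), color=(200, 30, 30))  # stand-in bitmap

digests = set()
for _ in range(100):
    buf = io.BytesIO()
    original.save(buf, format="JPEG", quality=75)  # same settings every time
    digests.add(hashlib.sha256(buf.getvalue()).hexdigest())

print(f"{len(digests)} distinct output(s) from 100 compressions")  # expect 1
```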
I said complete self awareness, not unlimited. We each know ourselves as being unique from others. If I wiggle my cat’s tail in his face, he bites it. He is not aware that it’s his tail.
I’m speaking only to the physical, not emotional. We may not see ourselves as others see us.
However, you do have a point about “unlimited ability to learn”; I was unclear. I meant a continuous ability to learn. The vessel is never full.
Exactly my point. Just because something is complex and defies predictability to a certain degree, doesn’t mean it isn’t deterministic.
I opened some old photo files that I hadn’t looked at in years, and they had degraded. Dunno, could have been something wrong with my computer. I’m no expert. Doesn’t matter anyway - it’s not germane to my point.
Yeah, that’s what I was thinking. It’s not that it’s some mysterious, magical process - it’s just sufficiently complex that it defies our ability to model it.
I never said it was mysterious or magical. I just said it’s not algorithmic. Which means that chess-playing programs are not a good benchmark for determining how close we are to achieving real artificial intelligence.
You’re quite right. Any electronic thing has a certain reliability, which can be modeled. Given enough time, bits on your disk and bits in memory will fail - just like neurons do.
It is actually a pretty good analogy to our brains. We can work around failures up to a certain point - then we start forgetting things. If you care enough about a file on your disk, you can build in redundancy so that single-bit failures can be repaired - but eventually you will degrade. High reliability memories have a thing called Built-In Self Repair. When a computer is powered up, it does a memory test, and uses spare rows and columns of memory to replace failing bits. (All large memories use redundancy when manufactured, to improve yields, but the repairs are static, not dynamic like BISR.)
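For a sense of the idea (this is a toy model, not any real chip’s repair logic): a power-up self-test walks the rows, and any row found bad gets remapped to a spare, invisibly to everything that later reads and writes by logical row number:

```python
# Toy sketch of Built-In Self Repair: bad rows found at power-up are
# remapped to spare rows, so later accesses never touch the failing cells.
class RepairableMemory:
    def __init__(self, rows, spares, bad_rows):
        self.data = [[0] * 8 for _ in range(rows + spares)]
        self.spare_pool = list(range(rows, rows + spares))
        self.bad_rows = set(bad_rows)   # simulated failing rows
        self.remap = {}                 # logical row -> spare row

    def _physical(self, row):
        return self.remap.get(row, row)

    def self_test(self, rows):
        for row in range(rows):
            if row in self.bad_rows and self.spare_pool:
                self.remap[row] = self.spare_pool.pop(0)

    def write(self, row, word):
        self.data[self._physical(row)] = word

mem = RepairableMemory(rows=16, spares=2, bad_rows={3, 9})
mem.self_test(rows=16)
print("remapped rows:", mem.remap)  # e.g. {3: 16, 9: 17}
```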