The human brain doesn’t work anything at all like a computer. I don’t know where you got your 2025 quote from, but computers now aren’t really any smarter than computers were in the 1960s. A computer is just a glorified finite state machine. It goes through four basic states: (1) instruction fetch (2) instruction decode (3) instruction execute and (4) write-back. In the instruction fetch stage, the computer grabs a number out of memory and uses that as an “opcode”. There are different opcodes for all of the different things that the processor can do, like add, subtract, multiply, divide, and non-math things like branch to a different address to fetch opcodes from based on certain conditions. For the decode phase, the processor fetches all of the data it needs to do whatever the opcode does. For example, if the opcode is add A + B, the computer now fetches A and B. The execute stage is where the computer executes the opcode, so in the add instruction this is where the processor’s arithmetic logic unit (ALU) actually adds the two numbers together. And write-back is where it stores the answer somewhere.
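If you want to see just how dumb that loop really is, here is a toy sketch of it in Python. The opcodes and the (opcode, destination, source, source) instruction format are made up for illustration and aren’t any real instruction set; the point is only that the machine does nothing but fetch, decode, execute, and write back, forever.

# A toy sketch of the fetch / decode / execute / write-back loop.
# The opcodes and the (opcode, dest, src1, src2) instruction format are
# made up for illustration; this is not any real instruction set.

LOADI, ADD, SUB, JNZ, HALT = range(5)

def run(program):
    regs = [0] * 4                    # four general-purpose registers
    pc = 0                            # program counter

    while True:
        # 1. Fetch: grab the next instruction out of memory.
        instruction = program[pc]
        pc += 1

        # 2. Decode: split out the opcode and fetch the data it needs.
        op, dest, src1, src2 = instruction
        x, y = regs[src1], regs[src2]

        # 3. Execute: do whatever the opcode says to do.
        if op == LOADI:               # load the constant src1 into dest
            result = src1
        elif op == ADD:
            result = x + y
        elif op == SUB:
            result = x - y
        elif op == JNZ:               # jump to address src1 if dest is nonzero
            if regs[dest] != 0:
                pc = src1
            continue                  # a branch has nothing to write back
        elif op == HALT:
            return regs

        # 4. Write-back: store the answer somewhere (a register, here).
        regs[dest] = result

# Count down from 3 to 0: r0 = 3, r1 = 1, then subtract and loop.
program = [
    (LOADI, 0, 3, 0),                 # r0 = 3
    (LOADI, 1, 1, 0),                 # r1 = 1
    (SUB,   0, 0, 1),                 # r0 = r0 - r1
    (JNZ,   0, 2, 0),                 # if r0 != 0, go back to instruction 2
    (HALT,  0, 0, 0),
]
print(run(program))                   # [0, 1, 0, 0]

That little loop is, conceptually, the whole machine. Everything else is about making it go faster.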
Modern processors are fancy in that they have things called pipelines. A simple four-stage pipeline would have a section of hardware that does the fetch, another section doing the decode, a third section doing the execute, and a fourth section doing the write-back. As soon as the fetch section finishes with an instruction, it passes it on to the decode section and starts fetching a second opcode while the first one is decoded. Then the first instruction gets passed to the execute section while the second instruction decodes and a third instruction is fetched. And so on. Because the pipeline has four stages, it can push instructions through at roughly four times the rate of a processor that does each stage one at a time. Actual processors have much deeper pipelines, often run several of them simultaneously, and do all sorts of tricks like re-ordering instructions to prevent pipeline bubbles. A pipeline bubble is what you get with two instructions like C=A+B and then D=A+C. The second instruction needs the first one to pass completely through the pipeline before it can execute, because it needs to know what C is before it can use it. That makes the second instruction stall at the decode stage, creating a gap, or bubble, in the pipeline where nothing is being executed.
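Here is a rough back-of-the-envelope model of that bubble, again in Python. The timing rules are simplifying assumptions (in-order, one instruction per stage per cycle, no operand forwarding), not a model of any real processor, but they are enough to show the stall.

# A back-of-the-envelope model of the bubble: a four-stage, in-order
# pipeline with no operand forwarding. The timing rules are simplifying
# assumptions for illustration, not a model of any real processor.

def schedule(instructions):
    """instructions: list of (text, destination register, source registers).
    Returns the cycle on which each instruction reaches each stage."""
    written_back = {}   # register -> cycle its new value becomes available
    rows = []
    prev = None         # (fetch, decode, execute, writeback) of previous instruction
    for text, dest, sources in instructions:
        fetch = 1 if prev is None else prev[0] + 1
        # Decode has to wait until every source register has been written
        # back by whichever earlier instruction produces it (no forwarding).
        ready = max([written_back.get(r, 0) for r in sources] + [0])
        decode = max(fetch + 1, (prev[1] + 1) if prev else 0, ready + 1)
        execute = max(decode + 1, (prev[2] + 1) if prev else 0)
        writeback = max(execute + 1, (prev[3] + 1) if prev else 0)
        written_back[dest] = writeback
        prev = (fetch, decode, execute, writeback)
        rows.append((text, prev))
    return rows

program = [
    ("C = A + B", "C", ["A", "B"]),
    ("D = A + C", "D", ["A", "C"]),   # needs C, so it stalls in decode
]
for text, (f, d, e, w) in schedule(program):
    print(f"{text}: fetch@{f} decode@{d} execute@{e} write-back@{w}")
# C = A + B: fetch@1 decode@2 execute@3 write-back@4
# D = A + C: fetch@2 decode@5 execute@6 write-back@7

Without the dependency on C, the second instruction would decode on cycle 3 and be done on cycle 5; the two cycles it spends waiting in decode are the bubble.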
I could go on and on, but the point is while modern processors are very complex, at their core, they are still just crunching numbers in the same old way. They aren’t smart at all, and they aren’t getting smarter in any way. They are just getting better and faster at cranking numbers through their states.
Software is getting “smarter”, but I put that in quotes because software is still just the number cranker doing whatever the programmer told it to do. The smarts are in the programming, not in the machine. But again, the programming isn’t smart. It’s just clever. The program can emulate being smart because a programmer figured out a way to make it crank through numbers in such a way that it kinda looks like it’s smart. But software today isn’t smart at all. It has no self-awareness. It does no thinking at all. It’s no smarter than an old hand-cranked adding machine. It’s just faster and cranks through more numbers.
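As a concrete (and very toy) example of clever-but-not-smart, here is an ELIZA-style responder in Python. The rules below are made up for illustration, but the original ELIZA program from the 1960s fooled plenty of people with the same basic trick: match a pattern, substitute some words, spit out a canned reply. There is no understanding anywhere in it.

# A toy, ELIZA-style responder: a handful of canned pattern-and-reply
# rules. The rules here are made up for illustration; the original ELIZA
# from the 1960s worked on the same basic idea.

import re

RULES = [
    (r"\bI am (.*)", "Why do you say you are {0}?"),
    (r"\bI feel (.*)", "How long have you felt {0}?"),
    (r"\bmy (\w+)", "Tell me more about your {0}."),
    (r".*", "Please go on."),           # fallback when nothing else matches
]

def reply(text):
    for pattern, response in RULES:
        match = re.search(pattern, text, re.IGNORECASE)
        if match:
            return response.format(*match.groups())

print(reply("I am worried about my job"))   # Why do you say you are worried about my job?
print(reply("The weather is nice"))         # Please go on.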
AI research in many ways is still in its infancy. Neural networks are very interesting, but one problem you run into is that they don’t do much until they start getting complex, and once they get complex enough to do interesting things, the interconnections and interactions are so complex that we can’t understand what they are actually doing or how they do it. Researchers are making a lot of advances in AI, but the problem they are trying to solve is hugely complex.
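To make the point about interconnections concrete, here is a tiny hand-wired network in Python. The weights are picked by hand rather than learned, purely for illustration. No single unit in it computes XOR; the behavior only exists in how the three units are wired together. Scale that up to billions of learned connections and you can see why nobody can point at one piece of the network and say what it does.

# A tiny hand-wired network (weights chosen by hand, not learned), just to
# show behavior living in the interconnections: no single unit computes
# XOR, but three of them wired together do.

def neuron(inputs, weights, threshold):
    # A classic threshold unit: fire (1) if the weighted sum clears the bar.
    return 1 if sum(i * w for i, w in zip(inputs, weights)) > threshold else 0

def xor(x1, x2):
    h_or  = neuron([x1, x2], [1, 1], 0.5)       # fires if either input is on
    h_and = neuron([x1, x2], [1, 1], 1.5)       # fires only if both are on
    return neuron([h_or, h_and], [1, -1], 0.5)  # "or, but not and" = XOR

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor(a, b))
# 0 0 -> 0, 0 1 -> 1, 1 0 -> 1, 1 1 -> 0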
The human brain is the best pattern-matching machine in the known universe. If you pick up an apple, then out of everything you have ever experienced in your life, including plenty of other things that are red and round, your brain almost instantly identifies it as an apple, and you almost instantly know what it should smell like, taste like, and so on. Computers, no matter how clever their programming, fail miserably when compared to a human at this task. Google has some pretty spiffy programming that sorts through images looking for things it thinks are faces. It isn’t even trying to identify the face (as in figuring out whose face it is); all it is doing is deciding whether something is a face or not. And Google is pretty well convinced that the bush in front of my house is a face.
We don’t know how the brain does what it does. We do know that it breaks down information. One interesting case is Kim Peek, the savant Dustin Hoffman modeled his Rain Man performance on. One of the reasons Kim had his amazing recall abilities was that the part of his brain that was supposed to break things down didn’t work properly. So his brain worked more like a computer, just storing and retrieving data. He could tell you an obscure fact he read in a book years ago, but he took everything literally and couldn’t understand the concept of humor. Studying people like Kim Peek gives us important clues as to how our brains work: certain parts of their brains don’t function the way they do in the rest of us, and observing what changes as a result tells us a lot about what those parts are actually doing.
The way our brains break things down makes them horrible data storage devices. We store incomplete copies of things and easily mix up data from different things that just happen to be similar. We create false memories easily and can’t tell them apart from real ones.
I think by 2025 we’ll be lucky if we can mimic the intelligence of a very small-brained stupid animal. Figuring out actual smarts will be a huge task, and I really doubt that I’m going to see anything close to that in my lifetime.
It’s not raw number storage that makes an Albert Einstein or a Stephen Hawking. It’s the interconnections between those numbers that make them what they are. Computers aren’t developing more interconnections, so they aren’t even on the right path to that kind of smart. They are just adding more and more cranks to their number-cranking machines.