Also, while we’re being strict about it, brute force isn’t guaranteed to give us a humanlike brain, because some of the finer points of how neurons work are still up for grabs.
If any of the quantum theories of the mind are accurate then no amount of brute force – with our existing models – is likely to produce something that works similarly to a human mind.
My own personal hunch, though, is that brute force methods will bear fruit, since any kind of emergent intelligence would be a huge breakthrough and I doubt emergent intelligence requires organisation exactly like the brain’s. I also suspect that “AI Real” will happen so soon after “AI Lite” that most people will hear about them as one event (because software is so much easier to reverse engineer than squishy matter).
But trying to reverse engineer the simulated brain would not be the same as reverse engineering software. You would be reverse engineering the very same connections, weights, etc. found in the brain. One of the problems with neural networks that are evolved to solve problems is that it is extremely difficult to understand what the weights and connections represent, even for a very small network.
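To make that concrete, here’s a toy sketch (everything in it is invented for illustration): a tiny hand-built 2-2-1 network that computes XOR. Even here, with only nine weights, the function lives in the interactions between units, not in any individual number – and a hand-built net at least comes with a comment for each unit. An evolved or trained net gives you the raw numbers with no story attached.

```python
# Toy 2-2-1 network computing XOR. The weights are hand-chosen for
# illustration; an evolved network's weights would look like noise.
def step(x):
    # Hard-threshold activation.
    return 1 if x > 0 else 0

def xor_net(x1, x2):
    h1 = step(1.0 * x1 + 1.0 * x2 - 0.5)    # fires if "at least one input"
    h2 = step(1.0 * x1 + 1.0 * x2 - 1.5)    # fires if "both inputs"
    return step(1.0 * h1 - 2.0 * h2 - 0.5)  # "at least one, and not both"

print([xor_net(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # [0, 1, 1, 0]
```

Strip the comments and you’re left staring at 0.5, 1.5, -2.0 … which is roughly the position you’d be in with every weight of a simulated brain.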
Really the best thing that would come from it is brains with savestates and fully controlled input. Of course, it could still turn out completely incomprehensible, but the ability to ask “if we give it <x> input from <y> state <z> times, what does it do each time? Okay, now what about giving it <x> input from <a> state?” would be potentially invaluable, since it would give us a concrete input/output channel we could try to play “match the pattern” with.
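A minimal sketch of what savestates plus controlled input buy you. The “brain” here is a stand-in (its entire state is just a random number generator, which is an assumption made purely for illustration), but the experimental loop is the point: restore the same state, feed the same input, get a repeatable experiment.

```python
import random

class SimulatedBrain:
    """Toy stand-in for a simulated brain: its whole 'state' is an RNG here."""
    def __init__(self, seed):
        self.rng = random.Random(seed)

    def snapshot(self):
        # Save a complete copy of the current state.
        return self.rng.getstate()

    def restore(self, saved):
        # Rewind to an earlier savestate.
        self.rng.setstate(saved)

    def step(self, stimulus):
        # Deterministic given state + input: same savestate, same response.
        return (stimulus, self.rng.random())

brain = SimulatedBrain(seed=42)
save = brain.snapshot()

run1 = [brain.step("x") for _ in range(3)]
brain.restore(save)                      # back to state <y>
run2 = [brain.step("x") for _ in range(3)]

assert run1 == run2   # identical input from identical state -> identical behavior
```

That repeatability is exactly what you can never get from a living brain, where no two presentations of a stimulus ever start from the same state.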
But my point is we can say with virtual certainty that classical computation power won’t continue to grow exponentially until we can simulate a human brain from the level of particle physics as you suggest. We just flat-out don’t have the physical resources.
We might as well say if the population of human beings on Earth continues to increase at the current rate then in such-and-such number of years the biomass of humans on Earth will be greater than the mass of the Earth itself. Except that’s silly because we’d run up against obvious physical limits long before that happens.
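For what it’s worth, the back-of-envelope version of that extrapolation (the population, average body mass, and growth rate are all rough assumed figures):

```python
import math

population = 7e9          # rough world population (assumed)
avg_mass_kg = 62.0        # rough average human body mass (assumed)
growth_rate = 0.011       # ~1.1% per year (assumed)
earth_mass_kg = 5.97e24

# Solve: population * avg_mass * (1 + r)^t > earth_mass for t.
human_biomass = population * avg_mass_kg
years = math.log(earth_mass_kg / human_biomass) / math.log(1 + growth_rate)
print(round(years))       # on the order of a few thousand years
```

A few thousand years of uninterrupted exponential growth – which is precisely why nobody takes the naive extrapolation seriously: the physical limits bite long before the math does.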
But you’re the one who brought up the Standard Model of particle physics in the first place! That’s the whole point I’m arguing against. Basically you seemed to be saying “It doesn’t matter that our understanding of the detailed workings of the brain is still quite limited, because we have a detailed understanding of the underlying particle physics.” My point is there’s such a huge difference in scale that that’s flat-out never going to work.
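Rough numbers on that scale gap, with assumptions stacked absurdly in the optimist’s favor: treat the brain as 1.4 kg of water, charge one operation per atom per femtosecond timestep (molecular-dynamics resolution – far coarser than actual particle physics), and assume a petascale machine.

```python
# Back-of-envelope: cost of atom-level brain simulation.
brain_mass_kg = 1.4
molar_mass_water = 0.018   # kg/mol; brain treated as pure water (assumption)
avogadro = 6.022e23
atoms = brain_mass_kg / molar_mass_water * avogadro * 3  # 3 atoms per H2O

flops = 1e16               # petascale-class supercomputer (assumed)
timestep_s = 1e-15         # femtosecond steps (generous: real QFT is far finer)
ops_per_atom_per_step = 1  # absurdly generous lower bound

ops_per_simulated_second = atoms * (1 / timestep_s) * ops_per_atom_per_step
wall_seconds = ops_per_simulated_second / flops
print(f"{atoms:.1e} atoms; {wall_seconds:.1e} s of compute per simulated second")
```

Roughly 10^26 atoms and something like 10^25 machine-seconds per simulated second, before you even get near the Standard Model. That’s the scale gap in numbers.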
If you want to talk about “infinite” computing resources, then we’ve crossed over from serious speculation about the future to the realm of bad science fiction. And hey, I like science fiction (even “bad” science fiction). But it doesn’t belong on the cover of news magazines. The fact that people are actually being misled into thinking that this sort of thing could happen in their lifetime or their children’s lifetime just ticks me off.
I’m well-placed on this subject, as I’m a software engineer with a graduate degree in neuroimaging.
Reverse-engineering software is not easy, but we have a wealth of tools and techniques available that simply can’t be applied to the living brain.
It’s like the difference between trying to study an earthlike planet now, by measuring the perturbation of its parent star, versus being able to teleport there any time we like.
And if anything that analogy underplays what a difference it would make.
This is by no means obvious to me, and you haven’t put forth a convincing argument.
What physical limits are we bumping up against in this case? You are just speculating. What’s your argument?
I also described what assumptions we can make, such as ignoring QCD, high energy scales where partons are important, etc…
I think there is some pretty good science fiction that very compellingly describes how we might have such enormous computing resources. Again, as far as you have presented your argument so far, you seem to be merely speculating…
I’m not sure that we can simulate quantum mechanics perfectly, but in any case I’m referring to quantum mind theories, popularized by such theorists as Roger Penrose in The Emperor’s New Mind.
Basically they don’t have a theory for how quantum effects might affect thought processes: there’s just a hypothesis that phenomena such as superposition and entanglement play an important role.
I have a strong suspicion that this stuff is just the soul in drag. We see a clear evolutionary pathway from primitive bundles of nerve cells to the brain. When did the quantum entanglement stuff come into play? Is it present in fish, or only in conscious entities? How did we evolve to make use of it?
There seems to be tremendous resistance, certainly by Penrose at least, to the concept that our consciousness arises from very complex interconnections of relatively simple objects.
Penrose’s main objection is that (in his opinion) there seem to be entire classes of mathematical truth (which our brains can somehow recognize) that can’t be reduced to algorithms. And even if you presume the brain somehow processes information in a manner very different from a computer, if the brain functions purely according to classical physics then in principle the underlying physics itself should be reducible to algorithms – the very topic debated here. Penrose and Hameroff hypothesize that the quantum mechanics comes into play at the cellular level, within the neurons themselves.
I agree. I’m not a proponent of the quantum mind theory; I just mention it in passing to show that not everyone agrees that brute force approaches would eventually simulate something like human thought (using models of known physics anyway).
I do happen to be a subscriber to the “hard problem of the mind” however, so if you were to ask me whether brute force approaches will necessarily create consciousness then that’s a very interesting question for me…
But it would be off-topic, since we’re talking intelligence in this debate, and there is no reason to assume intelligence requires consciousness. Also we’ll probably waste 10 pages on whether there is such a thing as subjective experience. :rolleyes:
As far as I can tell, the non-woo-woo examples of “quantum mind theory” all allow such brute-force approaches. Quantum mechanics may be a necessary ingredient in simulating brain function, but luckily we know quantum mechanics – it’s not a big deal. Then there are some (extremely non-mainstream) opinions that quantum mechanics is incomplete without taking consciousness as an axiom – for example that consciousness causes wave function collapse. As mentioned earlier, these examples are really “just the soul in drag”…
My only experience with quantum theories of the mind is John Searle summarizing Penrose’s theory in The Mystery of Consciousness. That description implied that Orch-OR would require new physics. But Searle could have misunderstood.
As an aside, if it only requires known QM, then can it necessarily be modelled? I mean, obviously individual quantum phenomena can be modelled…but can all aspects of QM be modelled at the same time within one model?
I think that’s right, that Penrose’s theory requires some new physics. So that is one that’s non-woo-woo, though it’s not exactly well regarded either (mostly going off of Wiki for that opinion).
Under the (widely held) probabilistic interpretation of the wave function, wave function collapse can be modeled using random number generators. That is the only possible stumbling block I am aware of. But in known QM, consciousness does not play a role in wave function collapse within the brain. The assumption would be that if wave functions collapse in the brain, we can treat them the way we treat any other wave function collapse that we can measure in the laboratory: purely random state reduction following probability distributions determined by the wave function. And if you don’t want to talk about wave function collapse, then even better: all wave functions in the brain evolve according to the Schrödinger equation.
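The “random number generator” point in miniature, for a toy two-state system with assumed amplitudes: measurement is modeled as an ordinary pseudorandom draw from the Born-rule distribution, and the empirical frequencies come out matching |b|².

```python
import random

# Toy two-state system: |psi> = a|0> + b|1>, with |a|^2 + |b|^2 = 1.
a2, b2 = 0.36, 0.64   # |a|^2 and |b|^2 (amplitudes assumed for illustration)

def collapse(rng):
    # "Measurement" = a draw from the Born-rule distribution using a plain RNG.
    return 0 if rng.random() < a2 else 1

rng = random.Random(0)
samples = [collapse(rng) for _ in range(100_000)]
freq1 = samples.count(1) / len(samples)
print(freq1)   # close to 0.64, i.e. |b|^2
```

No consciousness required anywhere in the loop – just state reduction with the right statistics, which is all known QM asks for.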
Just another example of how much we still have to learn about how the brain works. Kurzweil just doesn’t understand the magnitude of the problem to be solved.
If you had paid attention you would have noticed already that Kurzweil is only good for “way out there” neat ideas on the subject; serious researchers investigating this issue are very critical of Kurzweil. For example, Jeff Hawkins has dismissed Kurzweil’s idea that we could upload our minds soon – that is not the point of understanding how the neocortex in the human brain processes and stores information, nor the goal of creating intelligent machines.
Just to play Kurzweil’s advocate for a moment, I don’t think he is that concerned about how complex the problem is. His point is that if “progress” (complexity, knowledge, etc) continues on the exponential curve he claims it has followed, uninterrupted, for as long as there has been life on earth, it doesn’t much matter. If the task is 10X as complex as his initial estimate, maybe the date will be off by 2 or 3 years. If it’s 100X as complex, maybe 5 years. And so on.
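The arithmetic behind that claim, assuming the relevant capability doubles once a year (the doubling time is an assumption; Kurzweil’s own curves vary): a k-fold underestimate of the problem’s complexity only costs log₂(k) doubling periods.

```python
import math

doubling_time_years = 1.0   # assumed doubling time for the relevant capability
delays = {factor: doubling_time_years * math.log2(factor)
          for factor in (10, 100, 1000)}
for factor, delay in delays.items():
    print(f"{factor}x harder -> only about {delay:.1f} extra years")
```

Being off by a factor of 10 costs ~3 years, a factor of 100 costs ~7, a factor of 1000 costs ~10 – which is exactly why, on Kurzweil’s premises, the complexity of the problem barely moves the date. Of course, that all hinges on the exponential actually continuing.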
Are we solving the problems of AI at an exponential rate? That is a very different problem than increasing computing power and the technology to scan brains, etc. Hard to say what the rate of true AI knowledge gain is, but based on what I see/read, we are solving the problems and gaining understanding at a rate that is substantially less than what Kurzweil has predicted.
I just read another article about a new experiment they are doing to try to figure out the worm brain. It’s only 300 neurons and 5,000 connections, and they have had it completely mapped for a long time. In this new experiment they are going to use a special laser to target individual neurons while the worm is alive and moving around; they hope this will help them start to piece together what is going on. Given the level of research effort surrounding brain biology, and the slow progress on the worm, how long will it be before they even figure out the worm?