I first heard about Kurzweil from Spectrum and dismissed him based on the criticism in that article. However, I eventually looked into his predictions, including his own evaluation of them on his website, and came to believe that it’s actually the author of the Spectrum article who doesn’t get it. The article too easily dismisses correct predictions as obvious, and rejects technologies that do exist but are not yet in mainstream use.
This is missing the point. Grant the premise (one I don’t necessarily agree with, btw) that exponential technological growth continues without running into physical limits, and in a few decades the available computational power will be so extreme that it will be trivial to simulate the human brain atom-for-atom by brute force. The extreme nature of exponential growth makes most software issues irrelevant.
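A rough back-of-envelope sketch of what that premise buys you (the doubling period and time horizon here are assumed purely for illustration, not a claim about actual hardware trends):

```python
# Back-of-envelope: what sustained exponential growth implies.
# ASSUMPTIONS for illustration only: compute doubles every 18 months,
# and we look 40 years out.
doubling_years = 1.5
horizon_years = 40

doublings = horizon_years / doubling_years   # about 26.7 doublings
growth_factor = 2 ** doublings               # about 1.1e8

print(f"{doublings:.1f} doublings -> {growth_factor:.2e}x today's compute")
# Roughly a hundred-million-fold increase; that scale is what drives
# the "brute force becomes trivial" argument above.
```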
Not really. Even if we had sufficient computational power we still wouldn’t know how to simulate a human brain in software. We have some theoretical understanding of how the human brain works, but there is a truly vast gap between this and the engineering knowledge required to duplicate the workings of an actual brain.
Did you miss that part where I said “atom-by-atom”? The only innovation needed is technological.
We don’t know how to measure the brain atom by atom.
Well, that’s the $64,000 question, I think. Once you have a computer that equals or exceeds the computational ability of a human brain, does it start thinking for itself?
(Which raises the question, what does “thinking for itself” even mean?)
Also, is it even necessary for a computer to simulate a human brain in the first place? Can a computer with sufficient processing power achieve sentience in a completely different manner than how the human brain operates?
Remember the premise: exponential technological growth. Under it, measuring the brain atom by atom, or computationally simulating the growth of the human brain from DNA using our understanding of genetic chemistry, is likely to be trivial.
Also (and I’m just guessing here), it seems illogical to copy something “atom-by-atom”. At the level of individual atoms, wouldn’t the effects of quantum uncertainty be so large that you would no longer have a true copy? You would be creating something similar, but not identical.
And, even if you did copy a human brain successfully, all you would have is…a human brain. We already have 7 billion of those, don’t we?
If this counts as changing my body, then it is nowhere near the impact of chemistry. Computers improve the quality and scope of our tools, but they are not game changers. You might as well say computers changed our food by making cheap microwaves practical.
I had a pen pal from Togo in 4th grade - no computer required. The real change to my views was living in Africa, not corresponding with someone in Africa.
Project management software mostly takes its methods from techniques developed for the space program, which, while not pre-computer, were not computer-dependent.
I have teleconference and web-based meetings sometimes and face-to-face meetings other times for a committee I am on, and it is quite clear that we have not changed - the face-to-face meetings are very different.
You might be forgetting that the revolution in Egypt mostly ran without the benefit of the Web. The authorities must have thought, like you, that revolutions can’t happen nowadays without Twitter. They were wrong.
Every innovation changes the way people live and interact. But to the level of a singularity? Not even close.
Maybe. But one of the AI texts we used when I took AI in school was from 1959, and we haven’t made much fundamental progress since then. I’m betting on brain simulation myself.
So we’ll have enough computing power to run the simulation, but someone’s still got to create the simulation. Which atom goes where to begin with? That’s nearly as hard a problem as just creating an AI from scratch.
Nor anything else – do we?
I am not seeing how building a brain atom by atom would accomplish anything. I mean, we can get a real brain today, already fully formed, with all the atoms intact…but if you cut it out it stops functioning. Building a new brain atom by atom would just get you a non-functional brain, whether you built it directly or simulated it. Making the brain work requires understanding how it functions, not how it’s composed.
Personally, I think that AI will come about when we can figure out a way to allow an AI to learn and grow. It won’t be a human intelligence at all, but something completely different, something that uses the strengths and weaknesses of its own medium (i.e. an electronic computer) instead of trying to simulate how our own intelligence works in a medium totally alien to it.
-XT
But even that (which is far beyond any capability we are ever likely to have, in my opinion) is only a small part of what makes a working human brain. A human mind cannot exist in isolation; it develops in a cultural context. To take one small example, if a child is not exposed to language in their first few years of life, they cannot learn how to speak at a later date. We know this from studies of feral and deaf children. Learning is a process that alters the actual structure of the brain. As an infant ages, the number of connections between their neurons decreases.
I don’t see why it would. Our minds are the end product of millions of years of evolution, and thousands of years of cultural development. What would induce a machine of greater-than-human complexity to start thinking? The only pressure we know of that can do this is evolution.
Possibly. If you were to take a community of complex learning entities and place them in a virtual environment, and leave them for a very long time, some interesting behaviour might well emerge. But it might not be anything we could recognise as sentience.
It’s worth remembering that the first quarter or third of a bell curve is a close fit to an exponential curve. But rather than continue to infinity, the curve eventually flattens out.
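For what it’s worth, the usual formal version of this observation is stated for the logistic S-curve, whose growth rate is what traces the bell shape; a sketch of the math, with $K$ the ceiling and $r$ the growth rate:

$$
f(t) = \frac{K}{1 + e^{-rt}}, \qquad
f(t) \approx K\,e^{rt} \ \text{for } t \ll 0, \qquad
f(t) \to K \ \text{as } t \to \infty,
$$

$$
f'(t) = r\,f(t)\left(1 - \frac{f(t)}{K}\right),
$$

so the early segment is indistinguishable from pure exponential growth, the rate $f'(t)$ rises and falls like a bell, and the curve itself saturates instead of running off to infinity.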
I would say that we’re little closer to creating artificial intelligence from first principles than we were in the 1960s. Maybe we’ll brute-force simulate a brain, or evolve AI by genetic algorithms without understanding why it works, or maybe quantum physics will turn out to be the key. But I don’t expect to see it in my lifetime.
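For the curious, a toy sketch of the genetic-algorithm approach mentioned above: evolve candidates against a fitness function without ever specifying how a solution should work. The problem (maximizing the number of 1-bits in a string) and every parameter here are invented purely for illustration:

```python
# Minimal genetic algorithm: selection + crossover + mutation.
# Toy fitness (count of 1-bits) stands in for any real objective.
import random

GENOME_LEN = 64
POP_SIZE = 100
MUTATION_RATE = 0.01
GENERATIONS = 200

def fitness(genome):
    return sum(genome)  # deliberately trivial objective

def mutate(genome):
    # Flip each bit independently with small probability.
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit
            for bit in genome]

def crossover(a, b):
    # Single-point crossover of two parent genomes.
    point = random.randrange(1, GENOME_LEN)
    return a[:point] + b[point:]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    # Keep the fitter half as parents; refill by crossover + mutation.
    population.sort(key=fitness, reverse=True)
    parents = population[:POP_SIZE // 2]
    children = [mutate(crossover(random.choice(parents),
                                 random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

print("best fitness:", max(fitness(g) for g in population))
```

The point of the sketch is exactly what the post says: nothing in the code describes what a good solution looks like internally, only how to score one.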
I think some of you here suffer from a lack of respect for the sheer scale of what exponential growth can accomplish. As I have said repeatedly, I doubt the premise that the growth will remain exponential. But if it did, all of the things you present as problems would be solved trivially. Mapping a brain atom-by-atom? Nanobots. Simulating that brain at 100x speed? Done. Running evolutionary algorithms to tackle AI problems that would normally be inconceivable due to combinatorics? Easy. And many of the solutions would be facilitated by the technology itself – you don’t need sentience for technology to vastly increase our ability to study the brain and to build the tools that further our understanding. It’s a bootstrapping process, and if you don’t get that, I think you are still missing the point entirely.
I think you’re overestimating (or misunderstanding) the power of simulation. IIRC, you’re a physicist, so please understand that I don’t mean to insult you or your understanding. But it does seem that way to me…to wit: “simulation” implies a model, which is inherently a simplification of the thing being modeled. There’s an asymptotic upper bound on these types of simulations (i.e., of physical processes); reality, in its full complement, cannot be simulated.
Of course, sentience doesn’t necessarily require such detailed simulation (if it requires simulation at all). We don’t know. Speaking as an ex-AI researcher, the issue really is the software (or more accurately, the conceptual foundations of the software), as Minsky and Sloman (among others) have been saying for quite some time.
Heinlein wrote about it even earlier, in an essay back in the 50s or 60s, IIRC. He just didn’t use the word “Singularity”. He spoke of exponential growth in knowledge, complete with a graph showing where progress goes straight up; the sort of thing you see in the typical article on the Singularity.
I disagree; we’ve made huge progress. The problem is that back then we grossly underestimated how much progress needed to be made. We thought Mount Everest was a speed bump, basically.
A true atom-by-atom simulation should work without understanding how it works; that’s the whole advantage of such a simulation. Turn it on and it’ll start functioning on its own, because that’s what the brain you are simulating is built to do. Not that I think such a thing is likely to be necessary; even for a pure human-simulation method of AI, I expect we can pare away a huge number of unnecessary details. It seems unlikely that every last hydrogen atom is involved in the process of thinking.
Yes, simulation requires a model. We have a model in physics that will fit the bill. It’s the Standard Model, and at the energy scale of the physics inside the human brain, its accuracy is “good enough”, to put it mildly. So I don’t understand or agree with your criticism of the use of a “model”. In this case, the model we have is so accurate that with regard to the quantum mechanics of the human brain it can be regarded for all intents and purposes as indistinguishable from “reality.”
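For scale (standard numbers, quoted from memory): the chemistry underlying neural activity plays out at electron-volt energies, while the relevant corner of the Standard Model, quantum electrodynamics, has been verified to extraordinary precision at and far beyond those energies, e.g. the electron’s magnetic moment agrees with theory to roughly one part in $10^{12}$:

$$
E_{\text{neural chemistry}} \sim 1\ \text{eV}
\qquad \ll \qquad
E_{\text{collider tests}} \sim 10^{12}\ \text{eV}.
$$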
I’m wondering if Nintendo will catch up with Sony and Microsoft by 2045 or whether they’ll still be relying on gimmicky controllers.