Computer Singularity by 2045?

I’m addressing some reasons why unlimited exponential technological growth is unrealistic.

In your human-brain-to-AI approach, you’ve created a circular problem. To develop a virtual brain from a human model, you first need a virtual world and a community of virtual beings in order to develop your virtual brain into a functioning virtual adult instead of a virtual vegetable. But in order to create that virtual world, you need an AI of beyond-human intelligence to create it, as it is beyond our capabilities.

Of course, you could implant your virtual brain into a mechanical body and use human culture as its context. But then you’ve tied your creation to human limitations, and any developmental processes will happen at a snail’s pace. No exponential growth, because we can’t provide the creation with the context it needs in order to grow. We can’t simply tweak its brain to work better, because we don’t understand how it works.

And you seem not to understand what it can’t accomplish. We don’t know if the biggest, most complex computer we could ever build will ever develop true self-awareness and complex thought, because we don’t know how WE did it. We don’t even really know what it is. And as many people have pointed out, developing strong AI is a software problem, and we’re experiencing nothing like the exponential growth of hardware in the software field.

In addition, human brains are very different from typical computers. They’re analog, not digital. They change based on inputs from the endocrine system. They’re adaptive. We still have only the foggiest notion of how memory is stored and how the brain processes information. We’ve figured out where some inputs and outputs are - we know that if you put a voltage on area ‘X’ of the brain the person will smell something or respond as if the eye is seeing something. But we don’t know how it does this. We don’t know what a ‘smell’ is in the brain, where it’s stored, or what goes on to process it.

And as we learn more, it starts to look more and more complicated.

We may get there one day. I recommend looking at the Blue Brain Project web site - they’re trying to build simulated brains by coming up with a microcircuit that exactly replicates the functioning of a neuron, then building stacks of them into ‘neocortical columns’.

The stack they’ve currently built contains 10,000 artificial ‘neurons’. The thing generates immense reams of data. It takes 8192 processors running in parallel to model this thing. 10,000 neurons can create trillions of connections.

A housefly has a brain of about 100,000 neurons. The human brain has about 10^11 neurons. The number of possible connections grows roughly as the square of the neuron count, so the amount of data that’s flying around in your brain is truly staggering.
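To put rough numbers on that, here is a back-of-the-envelope sketch. This counts only the *possible* pairwise connections among n neurons, n(n-1)/2, not the actual synapse counts, which biology keeps far lower:

```python
# Possible pairwise connections among n neurons: each neuron could in
# principle link to every other, giving n*(n-1)/2 potential pairings.
def possible_connections(n):
    return n * (n - 1) // 2

fly = possible_connections(100_000)    # housefly-scale brain
human = possible_connections(10**11)   # human-scale brain

print(f"fly:   {fly:.3e} possible pairings")    # ~5e9
print(f"human: {human:.3e} possible pairings")  # ~5e21
```

Even at fly scale the potential wiring space is in the billions; at human scale it is twelve orders of magnitude larger again.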

Neurons themselves appear to be little computers in their own right. Different types respond to different signals. Their responses differ based on intensity of the signal, time durations, etc. They have multiple inputs that do different things. We don’t really understand why or what they’re doing - the Blue Brain Project is designed to help us try to answer these questions. We don’t know yet how intractable these problems are, and we don’t know if just scaling up the size will help us understand.

Now, it’s possible that if we simply build up a good reconstruction of the brain digitally and start feeding inputs into it, it will evolve and become increasingly complex. But we have no idea if that will translate into anything like human intelligence. There are animals with big brains that don’t act very intelligent, and animals with smaller brains that exhibit very complex behavior.

There’s just too much to know yet to make any kind of reasonable prediction as to when the ‘singularity’ will arrive - if ever.

I am beyond my knowledge here, so even though that doesn’t seem correct to me, I cannot argue the point and will have to take your word for it.

I think we can separate the two things – technological growth on the one hand – and AI on the other. I think unlimited technological growth is unlikely. But it’s the premise I’m working with in this thread.

You are inserting “simulated human brain” into a contemporary technological context, totally ignoring the correspondingly massive exponential technological improvements that would come with it. Beyond putting a virtual brain in a mechanical body in order to acculturate it, those improvements would enable us, for example, to supplement it with near-infinite and near-perfect memory, to speed up the simulation of its cognitive processes, and to let it “read” every book in existence in a microsecond…

Um, no. First, creating such a virtual world would be far easier than simulating a brain atom-by-atom since we wouldn’t need to make it nearly so detailed. Would you be able to tell if the world was a simulation that only went down to the size of, say, a cell? And second, simulating a brain atom by atom without understanding it means that you are going to get a copy of the person whose brain you are simulating anyway. You wouldn’t have any way of knowing what to leave out to make it a “blank” brain even if you wanted one.

No, you’d get exponential growth because it could be run much faster than a human mind runs, or so the logic goes. And realistically, we’d be able to improve on a human mind at that point, since we’d have direct access to its functions instead of having to guess.

Having read a bunch of Kurzweil’s stuff, I think that whilst he makes a lot of good points, he seriously underestimates the massive social resistance to scary technology and fails to grasp that biological research is constrained from advancing at the same rate as technological research because of regulatory issues.

The thing is though, if the rate of technological progress keeps going at even a fraction of the current rate then, probably well within the next century, it is going to be absolutely incompatible with our current picture of society.

We already have brain implants that allow fine computer control, we’ve gone in a couple of decades from Pac-Man to CGI like this, people can now buy affordable home 3D printers (and you thought that music copying caused legal drama). This week a computer won Jeopardy.

Sooner or later the general public is going to wake up to the ludicrous untapped potential of the technology we already have and they are going to start getting very, very scared. That’s when things will get interesting.

In the proverbial sense.

OK. You are making it clear to me that you don’t have enough respect for exponential growth. The reason why I specifically mentioned, as an example, simulating a brain by brute force was because doing such a thing is not a software problem. It’s a computational problem, a memory storage problem, and a technology problem.

It’s a computational problem because it takes a lot of computational resources to accurately simulate a physical system. But it can be done. Note that physical systems are analog. The fact that typical computers are digital is irrelevant – they can be used to simulate analog systems to arbitrary accuracy given an arbitrary resource of CPU cycles. It’s a memory storage problem because storing the positions and momenta of the ~10^N atoms in a brain takes a lot of memory. It’s a technology problem because the structure of the brain needs to be mapped. It’s not a software problem.
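A minimal sketch of the “digital can approximate analog to arbitrary accuracy” point, using a leaky-integrator equation (a crude stand-in for, say, a neuron’s membrane voltage, not any real brain model): shrinking the time step buys accuracy at the cost of more CPU cycles.

```python
import math

# Analog system: dv/dt = -v/tau, with exact solution v(t) = v0 * exp(-t/tau).
# A digital computer approximates it by stepping time in finite increments;
# smaller steps mean more CPU cycles but arbitrarily small error.
def simulate(v0, tau, t_end, dt):
    v = v0
    for _ in range(round(t_end / dt)):
        v += dt * (-v / tau)   # forward-Euler update
    return v

exact = 1.0 * math.exp(-1.0)   # v0=1, tau=1, t_end=1
for dt in (0.1, 0.01, 0.001):
    err = abs(simulate(1.0, 1.0, 1.0, dt) - exact)
    print(f"dt={dt:<6} error={err:.2e}")
```

Each tenfold reduction in the step size cuts the error roughly tenfold here; that trade of cycles for accuracy is the whole point.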

[QUOTE=Der Trihs]
A true atom by atom simulation should work without understanding how it works, that’s the whole advantage of such a simulation. Turn it on and it’ll start functioning on its own because that’s what the brain you are simulating is built to do. Not that I think such a thing is likely to be necessary, even for a pure human-simulation method of AI I expect we can pare away a huge number of unnecessary details. It seems unlikely that every last hydrogen atom is involved in the process of thinking.
[/QUOTE]

How would you, er, turn it on? I mean, a simulation of a brain down to the last atom would simply be a static construct. How would you get it to start processing data, interacting, doing all the other stuff that a brain does? Even assuming you could build such a construct to even a reasonably comparable level of detail, I’m not seeing how that would work. Our brains have evolved to interact with the sensory inputs we have, and to process information in a certain way. In order to get the brain to do anything we’d have to understand all of the inputs and outputs to actually get it to operate…at least that’s how it seems to me.

If you put a brain in a box, even if you give it all of the nutrients to keep it ‘alive’, if you actually want to interact with it you need to figure out how to at a minimum give it a ‘mouth’ to talk out of and some ‘ears’ to hear with.

-XT

Please give some examples of fundamental progress in AI. They’ve done a great job in solving some of the specific problems they were looking at - like chess, solving equations, even playing Jeopardy now. We could build a system which did all these things very well (and vacuum your floors too) but which would be incapable of generalizing or even coming close to passing the Turing Test. What they’ve demonstrated is that intelligence is not just a collection of heuristics.

Those who have written simulators and simulation models know the secret is in building in the right set of primitives. For the brain it is neurons and the connections between them. Atom by atom simulation is not even close to being required - we don’t do that even for IC simulation at the finest level. And I agree that if you build it right it would start thinking when you turned it on.

It would turn itself on; brains are built to do that. Your brain does that every morning when you wake up.

That’s much simpler than simulating a brain since we already know where the input/outputs are, know what they are sensing, and are already well along in the task of constructing artificial senses. By the time we can seriously try to simulate a brain, adding in the senses will be a minor project, a matter of hooking together already existing software and hardware with what already exists.

If it was built on the model of the human brain it wouldn’t be able to process every book in existence. And even if it could, that doesn’t mean it would understand it all. To quote Homer Simpson, “every time I learn something new it pushes something old out of my brain”.

How would “near-infinite and near-perfect” memory be achieved when we don’t understand how human memory works?

I’m explaining why the creation of AI is not a bootstrapping process. I understand what you mean by “premise: exponential technological growth”, but to me that statement is equivalent to “premise: magic”.

The virtual world is the comparatively trivial part. The problem is creating a community of virtual beings to give the creation a social context. See my previous comments about learning.

Using the human brain as a basis for an AI, we wouldn’t know how to do this, as we wouldn’t understand its workings on anything more than the crudest level. We might know the connection of every neuron, but that wouldn’t mean we would understand how a concept such as “mauve” or “sombre” is organised, or how one relates to the other.

Another vote for emergent AI from supercomputer projects being the most likely route to sentience. And that there’s no telling when it might happen (and little reason to think it’s imminent).


I just wanted to add that upthread many have implied that a software simulation of the brain would be little more use than a real brain. This is a flawed analysis.

With a simulated brain, we could perform experiments. And we could restore the brain back to a previous state as many times as we wish.
If this seems slow, we can clone this brain and perform many experiments concurrently.
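The checkpoint-and-branch idea can be sketched in a few lines. Everything here is a hypothetical stand-in (a real brain simulation would snapshot terabytes of synaptic state, not a small dict), but the pattern is the same: save the state once, then launch every experiment from an identical baseline.

```python
import copy

# Hypothetical simulation state; a stand-in for the full neuronal state.
state = {"neurons": [0.0] * 5, "step": 0}

def run_experiment(sim, stimulus):
    # Placeholder dynamics: mutate the simulation in place.
    sim["neurons"] = [v + stimulus for v in sim["neurons"]]
    sim["step"] += 1
    return sim

checkpoint = copy.deepcopy(state)   # save the baseline once

# Two experiments, each branching from the *identical* starting state --
# something impossible with a biological brain.
result_a = run_experiment(copy.deepcopy(checkpoint), stimulus=1.0)
result_b = run_experiment(copy.deepcopy(checkpoint), stimulus=2.0)

print(checkpoint["step"], result_a["step"], result_b["step"])  # 0 1 1
```

Because the checkpoint is never mutated, the experiments can also run concurrently on separate machines without interfering.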

And if doing experiments falls foul of ethical concerns, then how about just collecting data?
The popular perception is that we can pop a person in an MRI and know what every neuron in their brain is doing. In reality, our picture is very crude, and still requires interpretation. And functional scanning techniques are necessarily lower resolution and more indirect.

With a software simulation, we could collect reams of data on every neuron. It would be a complete game-changer.

Dude, the first computer I worked on had 64K of RAM and cost several million dollars. We connected to it with a 110 baud acoustic coupler, driving an ASR-33 teletypewriter. I’ve owned an example of every generation of computers starting from the time of the Apple II. I know all about exponential growth.

What you don’t have an appreciation for is complexity. Understanding complex systems (and the ‘mind’ is most assuredly a complex system) and simulating them is not a matter of just throwing more horsepower at the problem.

So you say, but we don’t know enough about the brain to know that. For example, a child’s brain creates neurons and connections at a dizzying rate as the child experiences the world. We don’t have a clue as to the algorithms that drive that process. We do know that the process can be affected by myriad chemicals interacting with the brain, ‘ion channels’ that behave in complex ways, stem cells that morph into different types of neurons based on rules we don’t understand, etc. The brain is intricately connected to millions of nerves, its behavior is affected by the endocrine system, and so it goes. The ‘software’ of the brain may be encoded in the hardware, but it’s also contained in constantly changing electrical fields that exist all through the brain. Have a look at a functional MRI to see the complexity of state change that goes on in the brain with even the simplest of inputs. That’s software. We don’t understand it. We don’t know how to create it.

For example, there was a famous incident where a man (Phineas Gage) had a large rod blasted into his brain in an accident, and lost a significant portion of his brain matter - without much in the way of ill effects. Other people have lost smaller portions and suffered severe memory loss. Others have lost similar portions of their brain and retained all their memories but developed anger issues. And so on. We can’t even tell you what a ‘memory’ is in the brain, where it’s stored, how it’s constructed, or how we retrieve it.

I understand the argument you’re making - you don’t need to know how to write Microsoft word if you can take a computer and simply replicate it bit by bit - you’ll be replicating the software along with the hardware. And perhaps that will be true with the brain, for suitable resolutions of replication. But we really don’t know that. There’s much about the brain that’s still a complete mystery to us. It’s tissue, not silicon. Even the smallest elements of it respond in sometimes amazingly complex and mysterious ways.

A computer simulation is only as good as the information that goes into creating it. Simulations abstract away a lot of complexity and unknowns, and the quality of the simulation depends on how good the authors were at choosing what is important and what isn’t. The ‘Blue Brain’ project is a good attempt at simulating brain structures - but right now the purpose of it isn’t to build an actual brain, but to compare the simulated brain’s output to a real one to figure out what’s different and therefore to get a handle on what we do not know.

You cannot make estimates about the future when you lack basic facts, and I don’t care how much computing horsepower you have, you cannot perfectly simulate a physical system unless you understand everything necessary to simulate it. There’s even theorizing right now that the brain may use quantum entanglement as part of the process of ‘consciousness’. That gives you some idea of how little we know about the actual process. If quantum effects were really involved, then even reconstruction down to the cellular level isn’t sufficient.

So you make more than one simulated brain; I don’t see the problem here.

Not a problem since we wouldn’t be operating on that level of abstraction in the first place. We’d just run it faster on the hardware/base simulation level. We’d be working with simulated atoms or cells, not concepts.

You continue to ignore what I have repeatedly emphasized: brute force. By “brute force simulation”, I mean: throwing away nothing. Ignoring no complexity. Just simulating the known laws of physics.

Quantum mechanics can be simulated to arbitrary accuracy in a low-energy system like the brain. Not a problem with near-infinite computational resources.

By the way, that is not an example of the sort of exponential growth we are talking about. The beginning of an exponential curve is nearly flat. Currently we are on the rise. But eventually you reach the cliff, where computational power* is doubling every fifth of a second, then every nanosecond, and so on. This is the exponential growth I suspect you have not fully contemplated.

(Again: I do not think such exponential growth is sustainable – but it’s the premise here)

(* computational power relative to current computational power)

I haven’t contemplated it because that’s not the model for exponential growth of computing power. You’re describing super-exponential growth, where the rate of change itself is growing exponentially.

Moore’s law says that computing power doubles roughly every 18 months. Moore’s law does not state that the interval between doublings will get shorter with each generation.
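For concreteness, here is the constant-interval model in a few lines. The exponent is linear in time; nothing in it makes the interval between doublings shrink:

```python
# Moore's-law-style growth: power doubles every fixed interval (~18 months).
# Note the exponent is linear in elapsed time -- the doubling interval
# stays constant, it never shortens.
def relative_power(years, doubling_years=1.5):
    return 2 ** (years / doubling_years)

for years in (1.5, 15, 30):
    print(f"after {years:>4} years: x{relative_power(years):,.0f}")
```

That still yields a factor of about a million over thirty years, but the doubling cadence in the model is fixed, not accelerating.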

We don’t even think we’ll be able to maintain Moore’s law much longer, let alone increase its rate exponentially. Perhaps you’re thinking of what would theoretically happen after we reach the singularity.

You also seem to have no appreciation for how much we still have to learn about how the complex systems around us work. You toss off phrases like this:

You really seem to think that we know so much about the natural world that all we need is some more computing power and we can just simulate it perfectly. You don’t seem to understand just how far away we are from being able to do anything of the sort. And that has nothing to do with computing power - it has to do with the complexity of the world around us and the staggering number of hidden interactions and interdependencies that affect the behaviors we see, which we’re still learning about.

In addition, you seem to assume that the computing power required to solve complex problems goes up linearly, so that exponential growth in computer power translates into the ability to solve exponentially difficult problems. If so, you might want to read up on the P vs. NP problem.
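That last point can be made concrete with a toy calculation. If a brute-force problem of size n costs 2^n operations, then each hardware doubling buys only one more unit of n: exponential growth in compute yields merely linear growth in solvable problem size.

```python
import math

# If solving a problem of size n takes 2**n operations (typical of
# brute-force search), how much does n grow as hardware doubles?
def max_solvable_n(relative_power):
    # Largest n with 2**n <= relative_power (taking today's budget as 1 unit).
    return int(math.log2(relative_power)) if relative_power >= 1 else 0

# After k Moore's-law doublings, compute has grown 2**k-fold...
for doublings in (10, 20, 40):
    grown = max_solvable_n(2**doublings)
    print(f"{doublings} doublings of hardware -> solvable n grows by only {grown}")
```

Forty doublings (roughly sixty years at an 18-month cadence) extend the reachable problem size by just forty, which is why exponentially hard problems shrug off exponential hardware.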

I think what iamnotbatman is saying is that you can, in fact, make it a bootstrapping process from relatively simple programming fundamentals, if you have a magic computer with no effective limitations on its computing speed. Maybe this can be illustrated with a biological simulation.

One such bootstrapping process could maybe start from two basic programmable items: 1) Can you program a solar-system-like universe with physics like ours? 2) Can you program a few basic organisms into that environment, complete with mutation and resource limitations? Together, that provides for evolution by natural selection. If you’ve got those two steps, plus no effective limit on computing power, then all you have to do is run the simulation and wait. We can be fairly confident that with enough processor cycles (with enough billions of years of computer evolution), the magic computer will spit out an intelligent species. The initial software requirements seem doable on that score.
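A toy sketch of the “program selection and wait” idea. Everything here is an illustrative stand-in (organisms are bit-strings and fitness is a trivial bit count, nothing like intelligence), but it shows mutation plus selection pressure climbing toward an objective given enough cycles:

```python
import random

random.seed(0)

# Toy evolution: organisms are bit-strings, "fitness" is the number of 1s.
# A cartoon stand-in for "program physics + simple organisms, then wait".
GENOME_LEN, POP, GENERATIONS, MUT_RATE = 32, 50, 200, 0.02

def fitness(genome):
    return sum(genome)

def mutate(genome):
    # Flip each bit independently with probability MUT_RATE.
    return [bit ^ (random.random() < MUT_RATE) for bit in genome]

pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP)]
for _ in range(GENERATIONS):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:POP // 2]                  # selection pressure
    pop = survivors + [mutate(random.choice(survivors))
                       for _ in range(POP - len(survivors))]

print(max(fitness(g) for g in pop))  # climbs toward GENOME_LEN
```

The catch, of course, is the “enough cycles” clause: real evolution ran this loop over billions of years on planet-sized populations, which is exactly where the magic-computer premise does all the work.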

That is a splendid example of the fallacy of extrapolating the exponential part of the S-curve. You plan for continuous exponential growth, and then the bottom drops out as you hit the flat part, and the smart money moves to the curve that’s just beginning.
Sam’s comment on the time between doublings is correct also. That has already been adjusted once, and the interval is actually getting longer as people can’t afford to build new fabs for the next process node. I was at a Sematech meeting with the real experts, and a lot of companies want to slow things down and are not seeing the advantage of the next generation.

BTW, Gordon Bell, I think, extrapolated Moore’s Law backwards, and it fit quite nicely. It is possible that it will continue but jump onto a non-silicon solution. Or we might just hit the flat part of the curve. With faster communications networks, and more computing power than almost anyone needs available for nearly nothing, we can do more networked computation like SETI@Home. If every PC and smartphone in the country simulated a thousand or a million neurons, we could simulate the brain. Not in real time, but that is not required, since you’d want to predefine a set of inputs for our simulated brain to experience.