Computer Singularity by 2045?

No. The three-body problem is chaotic and can only be simulated to a finite degree of approximation. At some point your model will predict something that won’t happen in real life, such as whether an asteroid collides with a planet or not.

As long as computers only know if, then, else, I do not see how AI will ever be possible. Simulated to where I cannot tell the difference, maybe, but for real, and alive? No way.

So long as neurons only know firing/not firing, I do not see how organic intelligence will ever be possible.

In other words, judging the capabilities of something according to the limits of its simplest components is foolish.

OK, so bank on improvements in fMRI technology, then. Make a better MRI machine that you can fit into a comfortable hat, and that gets neuron-level resolution. You’re going to need that anyway, to make the brain map you use for your simulation.

As for exponential growth being enough, you’re thinking in terms of simulating 100 billion neurons. 100 billion doesn’t seem like such a large number… But 100 billion neurons have about 5 sextillion possible connections between them: that’s the number your simulation needs to be able to deal with. Even if we take a modern supercomputer with a terabyte of memory, assume that you only need one bit for each connection, and assume that Moore’s law continues at its original rate of doubling every 18 months, instead of the current rate of doubling every two years, it’d still take 45 years to reach that point. And that’s just for one brain, and it takes the equivalent of a high-end supercomputer to do it.
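If anyone wants to check that arithmetic, here’s a quick back-of-the-envelope sketch in Python, using the same assumptions stated above (one bit per connection, a one-terabyte starting point, one doubling per 18 months); the figures are rough, not authoritative:

```python
import math

NEURONS = 100e9                            # 100 billion neurons
connections = NEURONS * (NEURONS - 1) / 2  # ~5 sextillion possible pairings
bits_available = 1e12 * 8                  # one terabyte of memory, in bits

doublings = math.log2(connections / bits_available)  # Moore's-law doublings needed
years = doublings * 1.5                              # 18 months per doubling

print(f"{connections:.1e} connections, {doublings:.0f} doublings, ~{years:.0f} years")
```

It comes out to about 29 doublings and 44 years, in the same ballpark as the 45 quoted above.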

I sat next to his niece on an airplane once, about 10 years ago; she seemed to treat him like “the crazy old uncle that the family has to bear”.

Chronos: I agree, but you’re actually understating the problem because each connection is not the same as a wire connection between two digital bits. The connections themselves seem to have their own behaviors. There’s complex analog signaling going on.

I know what Moore’s law is. That is why I added relative to current computing power. All I was doing was describing the exponential-rise side of the standard exponential curve.

Again, as I have had to repeat over and over with you somehow not listening: I don’t think the rate will continue exponentially.

I know what we do and do not know about the natural world. I work with the Standard Model for a living. I can tell you unambiguously we know enough about the natural world to accurately simulate it at low energies, if we had enough computing power.

We are not talking about P vs NP here (what a non sequitur!), we are talking about N-body simulation problems where the growth of computational power required is combinatoric. Stirling’s approximation tells us that the required computing power relative to an exponential growth curve only rises as sqrt(N).
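For what it’s worth, here is one way to read that sqrt(N) claim off Stirling’s approximation (a scaling sketch, not a rigorous complexity argument):

```latex
N! \;\approx\; \sqrt{2\pi N}\left(\frac{N}{e}\right)^{N}
\qquad\Longrightarrow\qquad
\frac{N!}{(N/e)^{N}} \;\approx\; \sqrt{2\pi N} \;\propto\; \sqrt{N}
```

That is, the combinatoric term outgrows the exponential-type term (N/e)^N only by a prefactor that rises as sqrt(N).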

The physical dynamics of the three-body problem can be simulated perfectly well. Sensitivity to initial conditions merely implies that if we were to simulate two human brains, at some point the two would deviate in their decision-making. It does not at all imply we wouldn’t be able to simulate them in the first place.
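To make that concrete, here is a minimal Python sketch (the masses, initial conditions, and G = 1 units are all made up for illustration): the same three-body system is integrated twice, once with a single coordinate perturbed by one part in a billion. Both runs simulate the dynamics perfectly well; they simply diverge from each other, which is the chaos, not a failure of the simulation.

```python
import numpy as np

def accelerations(pos, masses, G=1.0):
    """Pairwise Newtonian gravity for a small N-body system."""
    acc = np.zeros_like(pos)
    for i in range(len(masses)):
        for j in range(len(masses)):
            if i != j:
                r = pos[j] - pos[i]
                acc[i] += G * masses[j] * r / np.linalg.norm(r) ** 3
    return acc

def integrate(pos, vel, masses, dt=1e-3, steps=20000):
    """Leapfrog (kick-drift-kick) integration; returns the trajectory of body 0."""
    traj = []
    acc = accelerations(pos, masses)
    for _ in range(steps):
        vel += 0.5 * dt * acc
        pos += dt * vel
        acc = accelerations(pos, masses)
        vel += 0.5 * dt * acc
        traj.append(pos[0].copy())
    return np.array(traj)

masses = np.array([1.0, 1.0, 1.0])
pos = np.array([[1.0, 0.0], [-0.5, 0.8], [-0.5, -0.8]])
vel = np.array([[0.0, 0.4], [-0.35, -0.2], [0.35, -0.2]])

traj_a = integrate(pos.copy(), vel.copy(), masses)

pos_b = pos.copy()
pos_b[0, 0] += 1e-9                       # perturb one coordinate by a billionth
traj_b = integrate(pos_b, vel.copy(), masses)

# The separation between the two runs grows with time: sensitivity to
# initial conditions, even though each individual run is a fine simulation.
for step in (0, 5000, 10000, 15000, 19999):
    print(step, np.linalg.norm(traj_a[step] - traj_b[step]))
```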

Frankly, I suspect the approach least likely to succeed for creating AI is simulating a human brain. It is analogous to trying to build a vehicle by simulating a horse or a bird, instead of building a wagon or an airplane.

If we want to come up with something that works similarly to the human brain, then it would probably be easier to come up with a pure-brain animal that isn’t much more than neural tissue, plus nerve tissue to handle I/O. These brainimals would be specialized, single-purpose devices, built to do things like drive a car. I suspect something no larger than a squirrel brain would be enough, since it wouldn’t have to do all the other stuff that a squirrel has to do.

This assumes we could duplicate the state of an adult (developed) mind. The up-thread discussion was about computational simulation of a brain “grown” from a DNA recipe and simulated biochemistry.

Sure, no doubt it would lead to a much better understanding of the brain, but it would still be a very small step towards creating a greater-than-human intelligence, which is the real goal. There is as much difference between measuring the state of the brain and understanding how it works as there is between reading a genome and understanding how an organism develops. The latter is several orders of magnitude more complex.

Even with many simulated brains at our disposal, any experiments we could perform would be very crude and slow. Once you make a change, you have to see how that plays out over a developmental period, and make some attempt to measure what effect it has had.

The problem is you’d have created a population of infants. How do you train these virtual entities?

Sure, I understand that. I have two problems with it as an idea:

  • It assumes magic (infinite computing resources).
  • It assumes further magic (some way of programming the initial state).

Look at it this way. Any simulation at the level of detail iamnotbatman is proposing will require far more physical resources than building the actual physical model itself. It only works as a thought experiment.

I’m not completely pessimistic about the possibility of developing AI, but it’s not something I expect to see significant progress on in my lifetime. I think it’s entirely possible we may eventually be able to construct a computer of greater complexity than a human brain. We have a theoretical approach to mimicking brain function, neural nets, which is vastly more efficient than simulating at the atomic level. However, the big problem is one of learning.

Training neural nets is incredibly difficult. To take a real-world example, the US Air Force tried to train one to identify tanks in reconnaissance photos. The initial results were very promising: it learned to distinguish between tank and no tank in its training set with almost complete accuracy. However, when they tried it on a new set of photos it was completely useless. They had made a basic mistake in selecting their training data: all their pictures of tanks were taken on a sunny day, while the tankless pictures were taken on an overcast day. Much as I like the concept of tanks that only come out to play when it’s sunny, it isn’t standard military doctrine. That’s the problem with training neural nets: the features they identify may not be the ones that are meaningful in a larger context.
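The tank story is easy to reproduce in miniature. Here’s a toy sketch in Python (synthetic fake “photos”, nothing to do with the actual Air Force data): a plain logistic-regression classifier is trained on a set where every tank photo is sunny and every tankless photo is overcast. It latches onto brightness, aces its training set, and collapses once the lighting no longer tracks the label.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_photos(n, tank, sunny):
    """Tiny fake 'photos': overall brightness plus noise; a tank adds a weak cue."""
    pixels = (0.8 if sunny else 0.3) + 0.05 * rng.standard_normal((n, 64))
    if tank:
        pixels[:, :8] += 0.1        # faint, genuinely tank-related signal
    return pixels

# Confounded training set: every tank photo sunny, every tankless photo overcast.
X_train = np.vstack([make_photos(200, tank=True, sunny=True),
                     make_photos(200, tank=False, sunny=False)])
y_train = np.array([1] * 200 + [0] * 200)

# Logistic regression via plain gradient descent.
w, b = np.zeros(64), 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(X_train @ w + b)))
    w -= 0.1 * (X_train.T @ (p - y_train) / len(y_train))
    b -= 0.1 * (p - y_train).mean()

def accuracy(X, y):
    return (((X @ w + b) > 0).astype(int) == y).mean()

print("train:", accuracy(X_train, y_train))

# Swap the confound: tanks on overcast days, empty fields on sunny ones.
X_test = np.vstack([make_photos(200, tank=True, sunny=False),
                    make_photos(200, tank=False, sunny=True)])
y_test = np.array([1] * 200 + [0] * 200)
print("test:", accuracy(X_test, y_test))
```

Train accuracy comes out near 100%, test accuracy near 0%: the sunny-tank failure in a dozen lines.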

Don’t underestimate the difficulty of developing AI. It took hundreds of millions of years for human-level intelligence to arise in a planet-wide experiment. Possibly one day we’ll know enough to take some significant shortcuts, but there is nothing inevitable about it.

I know you are a smart and sensible guy, but it’s really getting to me that you continue to ignore the fact that I have repeatedly and emphatically said that I don’t think infinite computing resources are likely; it’s just the premise I’m working with: that the exponential trend keeps up. Initially it seemed you were arguing that, given the above premise, AI would not be much more likely due to software problems that are separable from technological innovation. But now you seem to have backtracked, arguing that the exponential trend is unlikely to continue. I agree it’s probably not going to continue. The only point I am arguing is that if it does, we will have practically infinite computing resources, and technically an AI will be possible through brute-force simulation.

And with practically infinite computing resources, those hundreds of millions of years can be simulated in the blink of an eye.

Yes, I understand that is your premise, honestly I’m not ignoring it. My frustration is that when I argue for a problem with a particular bit of magic you’ve proposed, you solve it with further magic. You can solve any problem by introducing enough magic. My position is that it’s not a useful premise.

A question for you, please: do you agree with this statement? Running a simulated environment to the level of accuracy you have proposed would require more resources than building the actual physical environment.

If you agree with the above, you’ll see why unlimited exponential technological growth is an impossibility. There are physical limits on how quickly different parts of the system can communicate with each other. You can only simulate a physical model at faster than real-world speeds by simplifying it.

I believe both that vast computational power is only a small part of solving the AI problem (see my previous post), and that sustained exponentially increasing technological growth is an impossibility.

I believe it’s not just very unlikely, but actually impossible. That’s where we differ.

If your simulator is actually more complex than the system you are simulating, you can’t do this.

If you have infinite computing resources, why not go the whole hog and start with a simulated big bang? But that has no basis in physical reality.

If that’s the case then I think you’re right: no model is perfect, and modelling DNA and biochemistry in an attempt to grow a brain is so indirect I doubt it would work, but:

The point is, when it comes to the brain, we aren’t even at the genome point yet. We’re still at the “tinkering” point.
(That’s not to say we don’t understand a great deal, but we have lots of isolated observations, no overall theory, and huge unknowns).

I meant much more immediate experiments than perhaps what you have in mind here. There are significant low-level questions that could be solved very quickly if we had software reminiscent of a living brain.

And what was wrong with my other suggestion: that even without experiments we can just collect reams of data that are impossible to collect with real, live brains?

I do happen to believe that there will be a singularity of sorts once, say, we can fairly accurately simulate the brain of a frog. Ten minutes of running such a simulation could probably create more raw data on biological brains than we’ve ever gathered.

Modern processors have upwards of a billion transistors, and easily a hundred million logic gates. If we had to simulate, or even model, each possible connection we’d never be able to do it. There is also the concern of modeling crosstalk or shorts between two lines. We can’t do that if we try to look at possible shorts between any two lines. What we really do is look at physically possible connections. You can do the same thing for neurons: if we can scan the brain, we’d only worry about the actual connections, and establish new ones as needed.
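In storage terms the difference is enormous. A rough Python sketch (the ~7,000 synapses per neuron is a commonly quoted ballpark, and 64 bits per synapse record is an arbitrary assumption of mine):

```python
# All *possible* connections vs. only the *actual* ones.
NEURONS = 100e9                # ~10^11 neurons
SYNAPSES_PER_NEURON = 7e3      # ballpark figure for the human brain

dense_bits = NEURONS * (NEURONS - 1) / 2         # one bit per possible pair
actual_synapses = NEURONS * SYNAPSES_PER_NEURON  # adjacency list of real synapses
sparse_bits = actual_synapses * 64               # one 64-bit record per synapse

print(f"dense:  {dense_bits:.1e} bits")
print(f"sparse: {sparse_bits:.1e} bits ({dense_bits / sparse_bits:,.0f}x smaller)")
```

Storing only real connections cuts the requirement by about five orders of magnitude, which is exactly the trick chip-level modeling relies on.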

A lot of current technology would seem like “magic” to educated people who lived less than a century ago. And we haven’t even gotten to the fast-moving part of the exponential curve. My own frustration is that you seem to suffer from a lack of imagination, or somehow a lack of familiarity with the potential of the bootstrapping, problem-solving power of exponential technological growth. I suggested the possibility of “brute-force simulation” of human consciousness, and you counter with “how are you going to acculturate it?”, as though we wouldn’t have made any technological progress in any area other than pure computation. Better fMRI, nano-probes, simulation, faster-moving and self-reinforcing basic scientific progress aided by exponentially improved technological tools, and vast possibilities I can only imagine will all make significant progress on the relatively simple (and already nearly functional) problem of brain-eye-ear interfaces.

No, I do not agree. Machines that scan physical environments can automatically construct digital environments for us. The advantage of having a digital environment is that it can be run many times faster than a physical environment. Additionally, information can be read out from the digital environment (such as internal brain states) that is ordinarily inaccessible in the physical environment. Furthermore, in the digital environment, changes can be made, and situations can be re-run over and over again with different initial conditions. Even a multitude of copies can be run simultaneously and evolutionary algorithms employed. The possibilities are endless.
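On the “multitude of copies plus evolutionary algorithms” point, here is a minimal sketch of the loop, with a trivial stand-in fitness function (everything here, names and numbers alike, is hypothetical; a real version would score actual simulated runs):

```python
import random

def fitness(params):
    """Stand-in for 'run one simulated copy and score its behavior'."""
    return -sum((p - 0.7) ** 2 for p in params)

# A population of candidate parameter sets, i.e. simultaneous copies.
population = [[random.random() for _ in range(4)] for _ in range(20)]

for generation in range(50):
    population.sort(key=fitness, reverse=True)
    survivors = population[:5]                    # keep the best copies
    population = [                                # re-run with mutated variants
        [p + random.gauss(0, 0.05) for p in random.choice(survivors)]
        for _ in range(20)
    ]

print("best parameters:", max(population, key=fitness))
```

The structure is the whole point: because the copies are digital, the select-mutate-re-run cycle costs nothing but compute.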

So this is what you are getting at. First of all, of course some simplification must be made. For example, there is no need to include the effect of neutrinos in any of our calculations. There is probably no need to include the strong force. Probably all that is needed is basic QED. But these simplifications have no physical effect on what we are trying to simulate, in just the same way that, if you are simulating an Apollo moon mission, you don’t need to take into account the coupling of the top quark to the Higgs boson (you only need Newton’s laws in that case). So the fact that simplifications must be made is irrelevant. Simplifications can be made that have no measurable effect on anything we want to simulate on the time scales and energy scales we are interested in.

So yes, you can simulate a physical model faster than real-world speeds by simplifying it – without it affecting any of the measurable results.

I can say with nearly 100% certainty that the first simulation model of a brain will not run at real-world speeds. There is no reason for it to. You feed it with controlled stimuli at whatever speed it is running at. You instrument it, you debug it, and then you collect data about the parts slowing down your simulation and about which things you modeled in excruciating detail could be modeled at a higher, faster level, and you think about building special-purpose hardware to make the simulation run faster.
No simulation of a computer runs anywhere near the speed the actual computer will run at - and the simulations work just fine.

Yes we have. The fast-moving part of the exponential curve is right this moment. Of course, a century ago, the fast-moving part was also right that moment, and a century from now 2011 will be the slow part, and the fast part will be right that moment.

I get the impression you don’t actually understand what an exponential growth is.

From one physicist to another: really? I sure as hell hope I understand what an exponential is.

Overlay an exponential curve on a linear curve, for example. You will notice that there is a region where the linear curve, or even a geometric curve, overtakes the exponential curve. Yes, in that region the exponential curve is increasing more slowly than the linear one. In other words there is a region in which, despite the exponential curve having a constant doubling rate, its progress is “slow” by any absolute measure. An example of this is the fact that very little technological innovation was made between 10,000 BC and 5000 BC. It was a period of little growth because it was near the beginning of the exponential curve. As you know, the rate of change of an exponential is proportional to the value of the function itself. So when the value of the function is small, the absolute rate of change is small as well. Yes, there is a “slow-moving” part of an exponential. You will likewise notice there is a “fast-moving” part: where the exponential overtakes the linear and geometric curves. The doubling time is the same, but the absolute rate of change is enormously higher.

Yes: d/dx(e^x)|x=x1 < d/dx(e^x)|x=x2 if x1 < x2.
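A quick numerical illustration of both points (arbitrary units and made-up constants; only the crossover matters):

```python
import numpy as np

x = np.arange(30)
linear = 5.0 * x               # an arbitrary linear trend
exponential = np.exp(0.3 * x)  # constant doubling time of ~2.3 units

# Early on the exponential lags the linear curve in absolute terms;
# past the crossover it leaves the linear curve far behind.
for xi in (1, 5, 10, 15, 20, 25):
    print(xi, round(float(linear[xi]), 1), round(float(exponential[xi]), 1))
```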

Isn’t a lot of this just splitting hairs, though? The end of history will probably not happen in 30 years’ time, and it might not happen in the next century, but is there anyone in this thread willing to give the human race (at least in a form that we would still consider human) much longer than that?

Less than 15 years ago I used the internet for the first time, today it’s inescapable for me. 30 years is a long time when it comes to technology.

E. M. Forster showed what a real change introduced by something like the Internet would look like in The Machine Stops, written in 1909.

35 years ago I was a PLATO user. PLATO had email, it had instant messages, it had boards, it had computer-assisted learning (its purpose), it had touch-screen terminals (which we only have on phones and tablets, pretty much, though they are coming), it had multiplayer networked games, it had an online newspaper, and it even had a tiny bit of porn hidden from the high sheriffs.
We’re still the same old people we were back then.