AI Intelligence: rate of growth in a given generation of AI

I predict that we will instead end up with a sea of special-purpose AIs which are vastly superior to humans at their designed function, but which are not general intelligences and have no conceivable way to become general intelligences.

If we do end up making a general AI, I suspect it will be done by copying human brain architecture, that the AIs so created will be of human-level intelligence, will require sensory inputs similar to humans’, and that neither we nor they will be able to explain exactly how it works. I am prepared to be wrong about this, but it does seem to me that any intelligence may simply be unable to “read its own code”, much less recompile it without dying in the process.

How is it at all similar?

If extended mind theory is right (and I think there is a lot to be said for it myself), your regular Windows machine or iPhone is (often) part of your extended mind, but this has nothing to do with AI.

AIs are supposed to be autonomous intelligences, not mere human tools like notebooks or smartphones, and I would argue that one of the reasons the AI project has been stalled for so long is that most researchers have taken it for granted that the mind is all inside (the brain, or in an AI, the computer).

I do think AI is possible in principle, but it will be realized in robots, not mere computers. A rich interaction with the world is a key part of intelligence.

The singularity is another matter again, however. I believe it is based on a false notion of intelligence as a simple, smoothly ascending scale. It is not. Most likely even the most perfected AIs will not be significantly more intelligent than humans, and will have no more ability than humans do to design something beyond themselves.

I know it’s not super telling, but the standard AI textbook is only on its 3rd edition now, and it was on its 1st edition in 1995 when I took AI in college.

Apparently the field isn’t exactly changing to the point where the standard textbooks are being revised more often than once every 6 years or so.

And people are still using calculus books from the '50s, and half of undergraduate physics barely gets past the mid-to-late 1800s in terms of scientific progress. I don’t think entry-level survey textbooks (mad respect to Russell and Norvig of course) are a good marker of the rate of change in the advanced parts of a field. A lot of work has been done on deep learning with Boltzmann machines, transfer learning, and reinforcement learning since then. That book didn’t exactly have the most comprehensive analysis of kernel methods for K-means clustering or support vector machines, even if it covered both of those algorithms briefly.

The basic equations are the same, but a ton of work has been done on the various optimization problems that can be plugged in there. It’s kind of like saying math hasn’t changed much since we still use y=f(x); well, sure, but f(x) can be a hell of a lot of very interesting, new, and exciting things.
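To make the “f(x) can be new and exciting things” point concrete, here is a tiny sketch (my example, not anything from the textbook discussion above): the classifier interface is identical, but swapping in a kernel changes what the decision function can represent. The toy dataset and parameters are arbitrary.

```python
# Same interface, different f(x): a linear SVM vs. a kernelized one on toy data.
from sklearn.datasets import make_moons
from sklearn.svm import SVC

X, y = make_moons(n_samples=200, noise=0.2, random_state=0)  # made-up 2-D data

linear = SVC(kernel="linear").fit(X, y)  # the "old" f(x): a separating hyperplane
rbf = SVC(kernel="rbf").fit(X, y)        # kernel trick: same API, nonlinear f(x)

print("linear training accuracy:", linear.score(X, y))
print("rbf training accuracy:   ", rbf.score(X, y))
```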

Missed edit: To give a semi-AI example, inverse kinematics for robotics (the problem of saying “given a limb and a system of joints, I want the hand to point here”) is literally a problem of solving the equation J·Δθ = e for Δθ; a non-vector version of which you learn to solve in middle school. Seriously, that’s all. It’s also REALLY HARD and there are at least 4 well-known methods of doing this, and countless variations and optimizations on each of them. And they all produce noticeably different results. (Some have smoother motions, some jitter more or less when the object is out of reach, some deal with self-collision and some don’t.)
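For a concrete picture, here is a rough sketch of one of those methods, the Jacobian-transpose update, applied to a hypothetical two-link planar arm. The link lengths, step size, and iteration count are made-up illustrative values; the other methods (pseudo-inverse, damped least squares, cyclic coordinate descent) mostly swap out the update rule.

```python
# Minimal sketch of Jacobian-transpose inverse kinematics for a 2-link planar
# arm. All constants are illustrative; real solvers add damping, joint limits,
# and collision handling.
import numpy as np

L1, L2 = 1.0, 1.0  # assumed link lengths

def forward(theta):
    """End-effector (x, y) for joint angles theta = [t1, t2]."""
    t1, t2 = theta
    return np.array([L1 * np.cos(t1) + L2 * np.cos(t1 + t2),
                     L1 * np.sin(t1) + L2 * np.sin(t1 + t2)])

def jacobian(theta):
    """Partial derivatives of the end-effector position w.r.t. each joint angle."""
    t1, t2 = theta
    return np.array([[-L1 * np.sin(t1) - L2 * np.sin(t1 + t2), -L2 * np.sin(t1 + t2)],
                     [ L1 * np.cos(t1) + L2 * np.cos(t1 + t2),  L2 * np.cos(t1 + t2)]])

def solve_ik(target, theta=None, alpha=0.1, iters=500):
    """Repeatedly apply dtheta = alpha * J^T e, shrinking the positional error e."""
    theta = np.zeros(2) if theta is None else theta
    for _ in range(iters):
        e = target - forward(theta)                 # how far the hand is from the goal
        theta = theta + alpha * jacobian(theta).T @ e
    return theta

theta = solve_ik(np.array([1.2, 0.8]))
print(theta, forward(theta))                        # joint angles and resulting hand position
```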

Let me give you some specific, hard numbers as to why there is a widespread belief that an explosion is possible.

Right now, our brains operate at about 1 kilohertz. The majority of the delay is diffusion - it takes a comparatively long time for neurotransmitter molecules to diffuse across even tiny synaptic clefts via Brownian motion. This delay does not help our brains compute anything, and is unnecessary. The other big chunk of delay is that nerve fibers conduct signals at roughly 100 m/s, compared to about two-thirds the speed of light for photons in an optical fiber.

If we could build an entire human brain out of synthetic components, much faster speeds are possible. High speed emulation would require dedicated circuitry to take the place of every single synapse. I did a rough estimate the other day and got ~60,000 square meters of chip area, or about 5% of the world’s annual production of silicon wafers. This would be a computer orders of magnitude larger than the biggest computer ever built.

Smaller computers without sufficient memory to emulate the major functions of a human mind aren’t likely to be sentient. We just haven’t built a computer big enough to even realistically expect AI from it, or scanned an entire human brain to program such a computer. That’s the actual truth. It’s not a matter of missed promises or intelligence being so hard, it’s a matter of scale.

Anyways, due to reasons too long for this post, I estimate that a high speed emulation would be about 1,000 to 1,000,000 times faster than the brain tissue inside our skulls.
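A back-of-envelope version of where that range comes from (every figure below is an assumed order of magnitude, not a measurement):

```python
# Rough sketch of the claimed 1,000x-1,000,000x speedup: ratio of electronic to
# biological rates and signal speeds. All values are assumptions.
neuron_rate_hz = 1e3      # neurons settle/fire on roughly kHz timescales
chip_clock_hz = 1e9       # dedicated synthetic circuitry at ~GHz clocks

nerve_speed_mps = 100.0   # fast myelinated nerve fibers, order of magnitude
fiber_speed_mps = 2e8     # photons in optical fiber, ~2/3 the speed of light

print("clock-rate ratio:   %.0e" % (chip_clock_hz / neuron_rate_hz))     # ~1e6
print("signal-speed ratio: %.0e" % (fiber_speed_mps / nerve_speed_mps))  # ~2e6
```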

So if you took one of the chief project engineers for the brain emulation project, euthanized and scanned them in, and then emulated them at such an immense speedup, you would expect them to be capable of incredible feats.

Increased intelligence probably grants diminishing returns. A mind that thinks a thousand or a million times faster than humans won’t be 1,000 or 1,000,000 times better at making decisions or inventing new technology. However, it seems very likely that these minds will still be enormously better, so many times better as to make ordinary humans seem like dumb animals.

Another logical outgrowth is that if super-intelligent entities start working on advancing technology and making themselves smarter, the rate of technological growth will accelerate until it hits the limits of physical laws. Basically, it will look like this:

http://cdn2.hubspot.net/hub/233102/file-384803037-png/Blog_Images/s-curve_graph.png

Humanity is currently on the left side of that graph, approaching the vertical section.


There’s a small but important distinction here. There are few people who deny such an explosion is possible. There are more than a few people who believe such an explosion is not inevitable.

It’s no contradiction to think it’s possible without believing that it is likely or that the case for such is particularly strong.

When I took my AI class in 1975 the stuff predicted was:
Championship chess playing computers
Ability to determine routes between places
Voice recognition
Visual recognition
Ability to solve complex mathematical equations

all of which have happened. As for true hard AI, I’m not sure we’re that much further along than we were then, though we are better at faking it.

BTW, back then one of the texts we used was from 1959, so the textbook lag time hasn’t changed much either.

The problem is that my arguments about performance are based on physical laws and the speed of existing chips.

So in order to conclude it is not inevitable, you would have to conclude one of the following:

  1. Physical laws are wrong
  2. It would not be profitable to build a super intelligent being
  3. The brain uses “magic” and not predictable phenomena that can be easily modeled
  4. Thinking faster doesn’t make an entity smarter
  5. Failed predictions by a few “futurists” means this technology won’t ever be developed

And so on. Most reasons you come up with to argue against it turn out to be bogus.

Or the converse argument:

A superintelligence that humans can build with straightforward engineering is impossible because nature has already built about the fastest “brain-shaped object” physical laws will allow. This is absurd for a huge number of reasons.

Could you expound on this?

Again, I don’t deny the possibility, but I do deny the inevitability here.

While both of these points (though I object to the word “magic” and would insert “the nature of intelligence may determine a maximum effective ‘speed’”) could be true, you offer little in the way of proof.

Again (what’s this, 3 times now?) most folks don’t deny the possibility, merely the inevitability. While you explicitly state you are addressing concerns of those who question the inevitability, you’re really instead arguing against those who deny even the possibility, i.e. an opponent of straw.

That’s actually the really interesting thing. Some people thought all those things would be achieved via a general artificial intelligence. Instead, we’re doing just fine with specialized AI and seem to be moving even further in that direction.

Well, if you’ve already conceded that an AGI that mimics the human brain, even if it’s just a part-by-part emulation of a once-living human, is possible, then it is just a matter of whether

  1. The cost to do so is affordable for human civilization
  2. The projected benefits exceed the cost

For the first part, well, this is a gigantic upfront investment. A computer that eats up 5% of the world’s production of silicon would probably cost something on the order of Intel’s annual revenue for several years just to buy/design the parts (brain emulation chips are custom ASICs suited to this task only; they would have radically different block layouts from conventional microprocessors). 250 billion dollars or so. Then you’d need an array of about 5,000 ATLUMs (a machine for slicing brain tissue). Apparently these originally cost about $200,000 each.

A mere billion dollars or so (5,000 × $200,000), not even a line item on the budget.

Then you would need a huge army of robotic equipment to actually perform the standardized experiments to obtain precise data for neural connection coefficients. No idea what this would cost.

My guess was a trillion dollars, but it might be less. In any case, this is within the realm of possibility.
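Adding up the line items quoted so far (a rough tally of the figures above; the robotic experiment army is the big unpriced unknown):

```python
# Rough tally of the cost figures quoted in the posts above; the robotic
# data-gathering work is left out because no estimate was given for it.
asic_cost = 250e9                # custom brain-emulation ASICs, ~Intel-revenue scale
atlum_unit_cost = 200e3          # quoted original price of one ATLUM
atlum_count = 5000
slicing_cost = atlum_count * atlum_unit_cost   # ~$1 billion

known_total = asic_cost + slicing_cost
print(f"known line items: ~${known_total / 1e9:.0f} billion")
# The unpriced robotics work is what would push the total toward a trillion.
```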

The second part:

  1. If you had a person who thought 1,000 or 1,000,000 times faster, what new capabilities would you have? Well, for one thing, big engineering projects depend on a critical path that is at least partially dependent on the thinking speed of the engineers assigned to that path. When wars are fought, no single entity can micromanage the actions of every single agent on the battlefield.

And then there’s the big one : a scan of your brain is de facto immortality. I mean, it’s philosophically a little tricky (since you died and your brain was pulped), but it’s the ultimate solution to, well, a problem faced by all human beings alive.

So this is why I jump immediately to “is it possible” and skip “if it were possible, would humans build it”. I think the answer to the second question is obvious. The only reasons we won’t see an AGI sooner or later are

  1. Humans are too stupid and short-sighted to make the investment to build one, ever
  2. It isn’t technically possible

There are two ways we might solve the “strong AI” problem technologically (and a whole bunch of the promising solutions are combinations of the two).

The first is the “classical” AI mechanism: we’ll learn to break the brain down into all of its components, processes and algorithms; understand each one; and build digital equivalents of each of them. Reassemble like some sort of intellectual Legos, and voilà, we’ve got an AI. The key word here is “understand.” If we can figure out how the brain works to a level sufficient to understand its processes, then this basically becomes an engineering problem as others have described: first, eliminate the biological limitations: slow transmission, massive redundancy, high “part failure”, inefficient coding, etc. Then, like any other engineering solution, iterate and improve. As Chronos points out, we’re really good at engineering improvement (particularly starting from scratch with a new field), and an AI modelled on us will be (almost by definition) just as good to begin with, and will get the benefits of each iteration of improvements immediately, rather than having to wait for it to happen randomly. In this case, the smart money is on the high rate of growth scenario, because all technology thus far has been on the high rate of growth path.

The second approach to AI is the Evolutionary one. Build a system which can iterate and change itself, and subject it to a rapidly accelerated version of evolution by natural (or unnatural, if we prefer) selection. Basically, we re-grow intelligence again, in a close digital approximation of how it developed the first time, choosing selection criteria intended to push toward intelligence. We still get advantages of technology: digital parts can be much faster and more reliable than biological ones, promising paths can be “saved” and re-started, we can iterate over multiple paths in parallel, etc. And computer science has become very good at low-level optimization – being able to improve code based on mathematically provable equivalencies, without the need to understand the code as a whole. You can feed an arbitrary piece of code into an optimizer and usually come out with a better piece of code that’s provably identical in function to the first. So by our evolutionary process, we can probably build an “intelligent” brain that’s much more efficient than our own, but comparable in function (or potentially extremely alien, it’s hardly a given that “our sort” of intelligence is the only kind there is, particularly since it evolved to service biological needs that a computer doesn’t have). In this case, though, those “in-place” optimizations have limits (otherwise you could optimize every application down to 1 byte by repetition), and they’re generally nothing close to exponential. And since the new intelligence presumably isn’t a lot smarter than we are, it doesn’t have any real advantage in iteration over us. Since we haven’t achieved understanding of what we built in this scenario, it’s not clear what route could be used to achieve exponential growth, except maybe in speed as computer hardware gets better.
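As a toy illustration of that evolutionary loop (entirely a sketch of my own, with a trivially easy fitness function standing in for whatever selection criteria would actually push toward intelligence), the mutate-select-repeat skeleton looks something like this:

```python
# Toy evolutionary loop: mutate a population of candidate "genomes", keep the
# fittest, repeat. Fitness here is just closeness to a hidden target vector.
import random

TARGET = [0.3, -1.2, 2.5, 0.0]  # hypothetical ideal genome

def fitness(genome):
    return -sum((g - t) ** 2 for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.1):
    return [g + random.gauss(0, rate) for g in genome]

population = [[random.uniform(-3, 3) for _ in TARGET] for _ in range(50)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)   # rank by fitness
    parents = population[:10]                    # selection: keep the best 20%
    population = parents + [mutate(random.choice(parents)) for _ in range(40)]

print(max(population, key=fitness))              # drifts toward TARGET
```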

Of course, achieving digital evolved intelligence would give us two different “implementations” of intelligence to look at, one of which was implemented using a base level we completely understand, so the second form of AI could give us a leg up on achieving the first form, but that’s far from certain.

Or a combination.

As we continue to build up solutions to specific problems, those building blocks can be used with an evolutionary process to build the next level up.

From an energy perspective, the brain is pretty efficient. It might take a long time to master the type of technology that allows us the kind of computation our brain performs for the same amount of energy.
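To put a rough number on that efficiency (all figures below are my own assumed orders of magnitude, not established measurements), compare energy per operation:

```python
# Order-of-magnitude energy comparison; every value here is an assumption.
brain_watts = 20.0          # the brain's power draw, roughly
brain_ops_per_s = 1e15      # assumed: ~1e14 synapses, ~10 events/s each
chip_watts = 400.0          # a power-hungry modern accelerator, roughly
chip_ops_per_s = 1e14       # rough arithmetic throughput of such a chip

print("brain: %.0e ops/J" % (brain_ops_per_s / brain_watts))
print("chip:  %.0e ops/J" % (chip_ops_per_s / chip_watts))
```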

The idea that all we need to do to build an AI is slice up someone’s brain and simulate it neuron by neuron is pretty naive. I mean, yeah, we can slice up someone’s brain. We could map all the neurons. But how does that help us emulate the brain? I mean, the neurons don’t just sit there. They’re all in motion.

If we get the behavior of each neuron even a little bit wrong, we don’t get a simulated human brain, we get a really detailed static map of a human brain.
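To give a sense of what “the neurons don’t just sit there” means in practice, here is the crudest standard dynamical model of a single neuron, a leaky integrate-and-fire unit (constants are illustrative textbook-style values; a faithful emulation would need far richer models than this for every neuron and synapse):

```python
# Leaky integrate-and-fire neuron: the membrane potential decays toward rest,
# input current pushes it up, and crossing threshold produces a spike and reset.
dt, tau = 0.1e-3, 20e-3                              # time step and membrane time constant (s)
v_rest, v_thresh, v_reset = -70e-3, -54e-3, -70e-3   # potentials (V)

def simulate(input_current, r_m=10e6, t_end=0.1):
    """Spike times (s) for a constant input current (A) over t_end seconds."""
    v, spikes = v_rest, []
    for step in range(int(t_end / dt)):
        dv = (-(v - v_rest) + r_m * input_current) / tau   # leaky integration
        v += dv * dt
        if v >= v_thresh:                   # threshold crossing = a spike
            spikes.append(step * dt)
            v = v_reset                     # reset after firing
    return spikes

print(len(simulate(2e-9)), "spikes in 100 ms")  # a 2 nA drive fires a few times
```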

Also, note the postulated pathway. We build this gigantic network of processors to simulate the brain, and it can simulate the brain thousands of times faster than the actual human brain. Well, why can’t we simulate the brain at 1/1000th the speed, with 1/1000th of the processing power?

Of course if we can simulate a brain even at 1/1000th of the speed of a real human brain, then we just need to speed it up 1,000 times to match human speeds, and another X times to get superhuman speeds. But we don’t even have the slow version.

This is not an engineering problem, where all we need to do is smoosh together a giant pile of money and computers and we’ll get results. It’s a question of not understanding what the problems even are.

This is true; a naive mapping is likely to just give us a really good model of an epileptic seizure.

We won’t know for sure if it is possible until humans build it, will we?
Now, I absolutely buy into the notion that there is nothing supernatural or spooky going on in the brain. But there are problems you haven’t covered:
What if existing computer architectures and the way digital logic works are not good for implementing AI? We know how to build synchronous designs. We’ve been working on asynchronous design methods for at least 40 years and have gotten nowhere. If you need to be asynchronous (like our brains are) we may be in trouble for quite a long time.
People assume that when an AI turns on it will be super brilliant. Chances are that, because it will be pushing the limits of the technology, it will be very slow.
People assume that once you turn an AI on it will immediately start to design its better successor. But what if it isn’t smart in that way? Sure it can do some part of the design process, but we have automated tools for that already. (And you could not design a chip without them.) Advances are going to require creativity in architecture, process technology, and maybe lots of other things. Baking that into an AI is an even harder problem.

My bet is that the first intelligent entity we design is going to run on a simulation of a brain. Brain modeling is being funded already. We can start small by modeling the brains of worms and work our way up. There are immediate benefits to doing this in terms of health - understanding mental illness and Alzheimer’s, perhaps.
Like I said, we’ve done very well with AI-like applications, but have gotten almost nowhere on real understanding of the problem.
AI today seems like trying to achieve powered flight by building mechanical birds that flap very fast.

AI development has been distinctly non-linear. It made progress in leaps and bounds in the early years (the ’60s and early ’70s), with examples like decent chess programs, rudimentary language translation, and a robot that could understand natural language and respond to commands to sort blocks by inferring the order in which different types of stacked blocks needed to be manipulated to achieve the desired result. This, unfortunately, led to overly optimistic predictions about near-term potential. AI has been particularly weak in areas requiring contextual understanding, like natural language processing and translation.

But I think all this illustrates is that our technological prognostications tend to overestimate what is possible in the short term while myopically underestimating what will be achieved in the long term. It’s been a while since I read Kurzweil’s “The Age of Spiritual Machines” but I think by and large he’s still pretty much on track. His prediction of wearable computers is almost here with things like highly capable smartphones (not really “wearable,” but come on!) and Google Glass. His longer term predictions are for computers to equal and then exceed human intelligence, and for the eventual melding of human and machine intelligence, not just logically but physically.

The big mistake was in thinking that the development of new heuristics and search strategies had anything to do with intelligence. Pat Winston was thrilled that Penrose lost his bet. And chess programs are far, far better today. But it was a dead end, and had nothing to do with real AI.

Do you think a manned landing on Mars is possible within 10 years from right now if the available budget were 1 trillion dollars? Also, just for the sake of making it easy, you can lose up to 10 crews before you have to give up.

This is analogous to how, if you build a top-tier model of the brain and it does result in seizures, you have the option of adjusting some coefficients and restarting.

That and the brain sort of has this pesky body thing it’s wired up to. It’s kind of hard to tell how much the workings of our body influence our brain. Certainly people experience mental shifts because the thyroid or testicles put some chemical in the bloodstream, or experience a lack of clarity because of oxygen deprivation or lack of nutrients. The entire thing is just very systemic, and there’s a treadmill you just can’t predictably stop. So now you simulate an entire body, and maybe now you need to simulate the environment around the body too.

Sure, we’ll probably get pretty good comprehensive biological simulations going eventually, but it’s difficult to tell off the bat how relevant a brain operating in a vacuum with simulated percepts is going to be compared to a brain that actually has subtle variations based on the fact that your spinal fluid shifts when you move.

You can always point to counterexamples, in a way: “oh, Stephen Hawking clearly has a wonderful mind, despite having almost no use of his body. But he can still see and hear? Well, what about blind people, huh? And some people don’t feel pain, so obviously pain receptors aren’t needed.” And so on, but it’s difficult to predict exactly what set of bodily systems is necessary to simulate a brain in a way truly comparable to one in meatspace.

I still like: