There are two ways we might solve the “strong AI” problem technologically (and a whole bunch of the promising proposals are combinations of the two).
The first is the “classical” AI mechanism: we learn to break the brain down into all of its components, processes, and algorithms; understand each one; and build digital equivalents of each. Reassemble them like some sort of intellectual Legos, and voilà, we’ve got an AI. The key word here is “understand.” If we can figure out how the brain works at a level sufficient to understand its processes, then this basically becomes an engineering problem, as others have described. First, eliminate the biological limitations: slow signal transmission, massive redundancy, high “part failure” rates, inefficient coding, and so on. Then, like any other engineering effort, iterate and improve. As Chronos points out, we’re really good at engineering improvement (particularly when starting from scratch in a new field), and an AI modelled on us will be, almost by definition, just as good to begin with, and will get the benefits of each round of improvements immediately rather than having to wait for them to happen randomly. In this case, the smart money is on the high-rate-of-growth scenario, because all technology thus far has been on the high-rate-of-growth path.
The second approach to AI is the evolutionary one. Build a system that can iterate and change itself, and subject it to a rapidly accelerated version of evolution by natural (or unnatural, if we prefer) selection. Basically, we re-grow intelligence, in a close digital approximation of how it developed the first time, choosing selection criteria intended to push toward intelligence. We still get the advantages of technology: digital parts can be much faster and more reliable than biological ones, promising paths can be “saved” and restarted, we can iterate over multiple paths in parallel, and so on.

And computer science has become very good at low-level optimization: improving code based on mathematically provable equivalencies, without needing to understand the code as a whole. You can feed an arbitrary piece of code into an optimizer and usually get back a better piece of code that’s provably identical in function to the first. So by our evolutionary process, we can probably build an “intelligent” brain that’s much more efficient than our own but comparable in function (or potentially something quite alien; it’s hardly a given that “our sort” of intelligence is the only kind there is, particularly since ours evolved to serve biological needs that a computer doesn’t have).

In this case, though, those in-place optimizations have limits (otherwise you could optimize every application down to one byte by applying them repeatedly), and they’re generally nowhere close to exponential. And since the new intelligence presumably isn’t much smarter than we are, it has no real advantage over us when it comes to iterating further. Because in this scenario we never achieved an understanding of what we built, it’s not clear what route could lead to exponential growth, except perhaps in raw speed as computer hardware gets better.
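To make the evolutionary loop concrete, here’s a toy sketch in Python in the spirit of Dawkins’s “weasel” program: score a population, keep the fitter half, and breed it back up with crossover and mutation. Matching a fixed target string is obviously just a stand-in for whatever selection criteria we’d actually choose to push toward intelligence; the shape of the loop, not the fitness function, is the point.

    import random

    TARGET = "methinks it is like a weasel"   # stand-in selection criterion
    CHARS = "abcdefghijklmnopqrstuvwxyz "
    POP_SIZE, MUTATION_RATE = 200, 0.02

    def fitness(candidate):
        # Score by matching characters; a real system would score behaviour.
        return sum(a == b for a, b in zip(candidate, TARGET))

    def mutate(candidate):
        return "".join(random.choice(CHARS) if random.random() < MUTATION_RATE else c
                       for c in candidate)

    def crossover(a, b):
        cut = random.randrange(len(a))
        return a[:cut] + b[cut:]

    population = ["".join(random.choice(CHARS) for _ in TARGET) for _ in range(POP_SIZE)]
    generation = 0
    while max(map(fitness, population)) < len(TARGET):
        population.sort(key=fitness, reverse=True)    # selection: fitter half survives
        parents = population[:POP_SIZE // 2]
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(POP_SIZE - len(parents))]
        population = parents + children
        generation += 1
    print(f"converged after {generation} generations")

The provably-equivalent optimization mentioned above can be illustrated the same way with classic constant folding: a purely local rewrite that’s guaranteed to preserve meaning without any understanding of the surrounding program. A minimal version over Python’s own syntax tree (using only the standard ast module; ast.unparse needs Python 3.9+, and the expression being folded is made up):

    import ast, operator

    OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
           ast.Mult: operator.mul, ast.Pow: operator.pow}

    class Folder(ast.NodeTransformer):
        def visit_BinOp(self, node):
            self.generic_visit(node)              # fold the leaves first
            if (isinstance(node.left, ast.Constant)
                    and isinstance(node.right, ast.Constant)
                    and type(node.op) in OPS):
                # "constant op constant" is provably the same as its value
                value = OPS[type(node.op)](node.left.value, node.right.value)
                return ast.copy_location(ast.Constant(value), node)
            return node

    tree = Folder().visit(ast.parse("y = (2 * 3 + 4) * x"))
    print(ast.unparse(ast.fix_missing_locations(tree)))   # prints: y = 10 * x

Notice that the second sketch runs into exactly the wall described above: once no foldable subexpressions remain, running it again changes nothing, which is why this class of optimization can’t deliver exponential improvement on its own.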
Of course, achieving a digitally evolved intelligence would give us two different “implementations” of intelligence to examine, one of which was built on a base level we completely understand. So the second form of AI could give us a leg up on achieving the first form, but that’s far from certain.