Searle’s argument fails on logical grounds: it’s implementation-agnostic, so it proves too much. If it were true, then we’re not conscious either, which is a pretty hard philosophical position to defend.
More believable arguments have been made (Roger Penrose, for example) that purely digital intelligence is impossible: basically that the brain must be a quantum computer – although again, his *The Emperor’s New Mind* is a book written outside his field of expertise (physics), and it has some pretty basic computer science misunderstandings in it. But if we’re including quantum computers as near-future technology (and it’s not clear that usable ones are), then these arguments don’t matter anyway.
There is an unproven assertion in computer science that certain types of computer systems are universal – that is, capable of computing anything that can be computed by any digital computer at all (given the right program, which is always the rub). The number of instructions needed for universality is very small: no more than three or four, depending on what you count as an instruction. All desktop computers today, for example, are easily universal, assuming you can add arbitrary amounts of storage to them. Church’s Thesis (the assertion that such universal computers can compute everything that is computable at all) isn’t proven, but it’s almost universally believed.
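To make the “very few instructions” point concrete, here’s a toy sketch (my own illustration, nothing from any particular source) of a SUBLEQ machine: a computer whose only instruction is “subtract and branch if the result is less than or equal to zero,” yet which is universal in the sense above, given enough memory.

```python
# Toy SUBLEQ interpreter: one instruction, yet enough for universality.
# This is an illustrative sketch; the example program and cell layout
# are made up for the demonstration.

def run_subleq(mem, pc=0, max_steps=10_000):
    """Each instruction is three cells a, b, c:
    mem[b] -= mem[a]; if the result is <= 0, jump to c, else fall through.
    A negative jump target halts the machine."""
    for _ in range(max_steps):
        a, b, c = mem[pc], mem[pc + 1], mem[pc + 2]
        mem[b] -= mem[a]
        pc = c if mem[b] <= 0 else pc + 3
        if pc < 0:
            return mem
    raise RuntimeError("step limit exceeded")

# Example program: add the value in cell 9 into cell 10 (2 + 3 = 5),
# using cell 11 as scratch, then halt by jumping to -1.
prog = [9, 11, 3,     # scratch -= mem[9]   (scratch becomes -2)
        11, 10, 6,    # mem[10] -= scratch  (i.e. mem[10] += 2)
        11, 11, -1,   # scratch -= scratch  (0 <= 0, so jump to -1: halt)
        2, 3, 0]      # data: mem[9] = 2, mem[10] = 3, mem[11] = 0

assert run_subleq(prog)[10] == 5
```

Programming such a machine directly is miserable, of course – which is exactly the “given the right program” rub – but in principle it can compute anything your desktop can.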
Artificial life systems (virtual environments where virtual creatures are “evolved” by competing with each other for virtual resources) almost immediately show astoundingly lifelike (though not intelligent) behavior – these sorts of things aren’t hard to build at all; they’re maybe student-project level.
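To give a flavor of how little machinery that takes, here’s a bare-bones sketch of the idea (entirely illustrative: the “creatures,” the food-sharing rule, and all the parameters are invented for the example). A population of virtual foragers competes for a fixed food supply, the better foragers leave more offspring, and offspring mutate slightly.

```python
# Minimal simulated-evolution sketch: competition for food, selection,
# and mutation. Everything here is a made-up toy, not any real system.
import random

POP_SIZE = 50
GENERATIONS = 100
FOOD_PER_ROUND = 100
TARGET = 0.7          # the (hidden) foraging strategy the environment rewards

def fitness(gene):
    """A creature's share of food depends on how close its single
    'strategy' gene is to the environment's optimum."""
    return max(0.0, 1.0 - abs(gene - TARGET))

def evolve():
    # start with random strategies
    population = [random.random() for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        # competition: the round's food is split in proportion to fitness
        scores = [fitness(g) for g in population]
        total = sum(scores) or 1.0
        shares = [FOOD_PER_ROUND * s / total for s in scores]
        # reproduction: creatures that ate more leave more offspring,
        # and each offspring mutates a little
        parents = random.choices(population,
                                 weights=[s + 1e-9 for s in shares],
                                 k=POP_SIZE)
        population = [p + random.gauss(0, 0.02) for p in parents]
    return population

if __name__ == "__main__":
    final = evolve()
    print("mean strategy after evolution:", sum(final) / len(final))
```

Run it and the population drifts toward the hidden optimum within a few dozen generations – adaptation emerging from a trivial loop, which is the sense in which these systems look “lifelike” almost for free.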
It’s hard to tell how much size (number of processors/transistors/amount of memory) matters: biological brains are full of redundancy to cope with the messy, unreliable nature of cells; a digital brain built from reliable components might be able to perform the same tasks with many, many fewer “pieces.”
So I’ll make a bold (and equally unproven) claim: I suspect that the biggest of today’s computers are capable of human-class intelligence, if we only knew the program. As to how we’ll get there: I disagree that we’ll model something we already know. I suspect we’ll just take the artificial life path, pushing simulated evolution to greater and greater depth and realism, and one day we’ll suddenly have intelligence. We probably won’t be able to explain the details of that intelligence any more than we can explain our own, but it might be easier to study.
The other problem is recognizing intelligence. Human “intelligent” behavior centers on reproduction, acquiring food, acquiring shelter, lengthening life, and a host of other biological imperatives that have basically squat to do with the needs of computers or robots. Given that their intelligence would be aimed at an entirely different set of basic needs, expecting computers to have intelligence just like humans’ might be unrealistic (hence the arguments against the Turing Test). But getting something with equal ability to reason and create? I don’t see why that isn’t possible sooner or later – I just don’t know how to predict when.