Comparisons about computers being good at maths and not at other stuff really miss the point hugely. They compare at far too low a level. Existing computers are tiny in comparison with even very small brains. They also operate in a fundamentally different manner. What we get computers to do is arguably limited, not by the computer, but by us. We craft the programs that run.
The obvious thing to do is to take the known structure and operational mechanisms of a brain and get the computer to simulate that. This is pretty much the approach being taken at the moment. You can use a very large number of conventional computer systems to do this, or you can take our existing knowledge of crafting computers, and design something more directly targeted at such a simulation. (It doesn’t need graphics acceleration for a start.) What is more important is designing communication systems that tie together what will be millions of separate computer elements.
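To give a flavour of what "simulate the known mechanisms" means at the very bottom of the stack, here is a minimal sketch of a leaky integrate-and-fire neuron, one of the simplest models used in brain simulations. All parameter values are illustrative, not taken from any particular project; a real simulator would run millions of these coupled together, which is why the communication fabric matters so much.

```python
def simulate_lif(current=1.5, steps=1000, dt=0.1,
                 tau=10.0, v_rest=0.0, v_thresh=1.0):
    """Euler-integrate a single leaky integrate-and-fire neuron.

    All arguments are illustrative placeholders. Returns the number
    of spikes fired over the run.
    """
    v = v_rest
    spikes = 0
    for _ in range(steps):
        # Membrane potential decays toward rest, driven by input current.
        v += dt * (-(v - v_rest) + current) / tau
        if v >= v_thresh:   # threshold crossed: emit a spike and reset
            spikes += 1
            v = v_rest
    return spikes
```

With a driving current above threshold the neuron fires regularly; below threshold it never fires at all. The point of the sketch is how cheap one element is, and therefore how completely the difficulty shifts to wiring up and coordinating millions of them.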
Once you craft this, you have the problem that you have no programs to run on your super brain simulator. We really have no clue how things work in the brain in any detail above the level of very trivial, small sets of neurons. There seems to be a bit of a cargo cult mentality in some areas, where there is some hope that a sufficiently large system might begin to exhibit useful higher level properties essentially as an emergent phenomenon.
Anyway, all this is to say that a simulator capable of modelling all the components of an animal (including human) brain, at a speed comparable to the operation of the natural brain, is entirely reasonable. We will see it. And sooner than many expect. But we still have no clue how to program it.
But this doesn’t answer the OP’s more fundamental question. Could the thing eventually think? The simple answer is that we have no clue.
Arguments exist on all fronts. The core question is quite simply - do we have a soul? Is there a ghost in the machine, or is it just machine inside our heads? And you swiftly fall back to atheist versus theist arguments. And they are often highly sophisticated arguments, not just “God exists”, “no he doesn’t”, “yes He does”… (sounds of scuffle, muffled swearing, punches being thrown…)
Then you get the question - how would you know? The canonical answer is the Turing Test. Which isn’t all that satisfactory in many ways, but no-one has really come up with anything better. (Most people also don’t know what the proper Turing Test is, and there have been some very shoddy examples of its supposed application.)
Roger Penrose’s book The Emperor’s New Mind was a mildly controversial attempt to prove that strong AI was rubbish and machines could never think as we do, taking a tour of physics to try to bolster the thesis.
There is a significant element of the question of free will in any argument about machine intelligence, and a core uncomfortable issue: if machines can think as we do, and are deterministic, it suggests that free will does not exist. This worries more than a few pundits. Given that theologians have wrestled with this for hundreds of years without satisfactory resolution, the question of machine intelligence is not a welcome newcomer.
In the end, the GQ answer to the question is - we really don’t know, and the question is not otherwise answerable in a GQ forum.