If they had something like a silicon-based CPU, they might not be able to do much better than us at miniaturizing the technology further. As I understand it, we're already near the physical limits: shrink transistors much more and they're so close together that signals start to bleed over from one into another.
But, technically, the more CPUs you have, and the better you can split a problem across them, the more data you can process. The catch is that you need a lot of power and a lot of heat dissipation, and the serial parts of a problem put a hard ceiling on how much the extra CPUs actually help (Amdahl's law).
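For a rough feel of that ceiling, here's a minimal sketch of Amdahl's law; the 1% serial fraction is just a made-up example:

```python
def amdahl_speedup(serial_fraction: float, n_cpus: int) -> float:
    """Upper bound on speedup when a fixed fraction of the work is serial."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_cpus)

# Even with a million CPUs, a 1% serial fraction caps you near 100x.
for n in (10, 1_000, 1_000_000):
    print(n, round(amdahl_speedup(0.01, n), 1))
```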
If you can travel near or past the speed of light, then it's probably safe to say you can generate so much energy - and, as a side effect, cool so much material with that energy - that you could build and power any size of silicon computer you could ever dream of. Really, it becomes more a question of how much value there is in increasing processing power past a certain point. If you can solve any arbitrary NP-hard problem whose answer is usable in a universe with as many quarks as ours, and do it in a few seconds, do you really need a bigger computer?
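To put a number on that: brute-forcing an NP-hard problem means trying every candidate, and the search space outgrows the particle count of the observable universe (commonly estimated around 10^80) at surprisingly small problem sizes. A quick sketch:

```python
# Brute force over n yes/no choices examines 2**n candidates.
# Find where that exceeds ~10**80, a common estimate for the
# number of particles in the observable universe.
PARTICLES = 10 ** 80

n = 1
while 2 ** n <= PARTICLES:
    n += 1
print(n)  # 266: ~266 yes/no choices already outnumber every particle
```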
At an upper bound, I think we can safely say that it's impossible to build a computer, using any technology, that can accurately simulate our own universe. After all, you'd have to represent every quark and photon as a set of variables - vector, spin, type, etc. You need some physical entity to store each of those variables, meaning you need multiple physical objects in our universe to represent each single physical object in the universe being simulated - and there aren't enough to go around.
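Here's that counting argument as a toy calculation; both bit counts are invented purely for illustration:

```python
# Pigeonhole sketch: if describing one particle takes more bits than
# one particle can physically store, the simulator must be bigger than
# the thing it simulates. Both constants below are made-up examples.
BITS_TO_DESCRIBE_ONE_PARTICLE = 256  # vector, spin, type, ...
BITS_STORED_PER_PARTICLE = 1         # generous storage assumption

ratio = BITS_TO_DESCRIBE_ONE_PARTICLE / BITS_STORED_PER_PARTICLE
print(ratio)  # 256 physical particles needed per simulated particle
```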
And, likewise, that means there's an upper bound on any calculation you could do. At some point, you're competing with your own survival to collect the material for your supercomputer and to power it. There's a limited supply of both in the universe, and whatever you give to the computer isn't available to you.
Once you decide on a particular cubic footage and power level for your computers, such that you feel comfortable building multiple of them, you can estimate the theoretical upper limit on computation: count how many measurable locations exist in that space, and divide your power budget by the energy it takes to shift the most easily moved object that fits in that space by about half the width of your cube.
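That recipe is informal, but there's a standard back-of-envelope version of the same idea: the Landauer limit, which says each irreversible bit operation costs at least kT·ln 2 of energy. A minimal sketch, assuming a 1 kW power budget and room-temperature operation (both numbers are mine, not from the reasoning above):

```python
import math

K_BOLTZMANN = 1.380649e-23  # J/K
T = 300.0                   # kelvin; assumed room-temperature operation
POWER = 1_000.0             # watts; an assumed 1 kW budget

# Landauer limit: minimum energy to erase (irreversibly flip) one bit.
energy_per_bit = K_BOLTZMANN * T * math.log(2)
max_bit_ops_per_second = POWER / energy_per_bit
print(f"{max_bit_ops_per_second:.2e}")  # roughly 3.5e23 bit ops per second
```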
I’d guess that Monero is the hardest to break.