That is to say, (lossless) compression works because of a mismatch between the expected distribution of inputs and the uniform distribution on bitstrings: the 1 KB file consisting of alternating ones and zeros is presumably much more likely to be given as input than some arbitrary 1 KB file, so it pays to represent the former with a shorter string and the latter with a longer one.
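To make that concrete, here is a minimal Python sketch (the choice of zlib and of a 1 KB alternating pattern is just for illustration): the regular file shrinks to a handful of bytes, while the random file, being about as close to uniformly distributed as you can get, typically comes out a few bytes larger than it went in.

```python
import os
import zlib

# 1 KB of a highly regular pattern (alternating bits) vs. 1 KB of random bytes.
regular = bytes([0b01010101]) * 1024
random_bytes = os.urandom(1024)

for name, data in [("regular", regular), ("random", random_bytes)]:
    compressed = zlib.compress(data, level=9)
    print(f"{name}: {len(data)} bytes -> {len(compressed)} bytes")
```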
However, the better the first pass of compression, the more completely it removes precisely this mismatch, and thus the more it destroys the opportunities for meaningful lossless compression afterwards. Factor in the fact that if some files get smaller, then some must get larger (there simply aren't enough short bitstrings for every file to map to a distinct shorter one), as well as the detail that you need to record somewhere how many iterations of compression were applied if you intend to be able to recover the original, and you can see how this can fail to be the path to ultra-compression.
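Here is the same sketch run iteratively: compress a very regular input once, then keep feeding each output back into the compressor. The exact figures depend on the data and the settings, but the shape of the result is that the first pass does essentially all the work and later passes gain little or just add their own overhead.

```python
import zlib

# A very compressible starting point: 100,000 bytes of a repeating pattern.
data = b"0101010101" * 10_000
print(f"original: {len(data)} bytes")

# Re-compress the compressor's own output a few times and watch the sizes.
for i in range(1, 6):
    data = zlib.compress(data, level=9)
    print(f"after pass {i}: {len(data)} bytes")
```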
As for lossy compression: besides the problems above, a lossy compressor is designed to meaningfully handle only input of a particular kind (images or sound or the like, not arbitrary bitstrings tossed at it), and even if you did come up with a meaningful notion of iterated compression for your particular domain, you'd have to be careful about the information loss that accumulates with each iteration.
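For a sense of what that accumulation looks like, here is a sketch of JPEG generation loss. It assumes Pillow is installed and uses a synthetic gradient image purely for illustration; it re-encodes the image at a fixed quality over and over and reports how far the pixels have drifted from the untouched original after each generation.

```python
import io
from PIL import Image, ImageChops

# Synthetic grayscale gradient, the kind of smooth content JPEG handles well.
SIZE = 256
original = Image.new("L", (SIZE, SIZE))
original.putdata([(x + y) % 256 for y in range(SIZE) for x in range(SIZE)])

current = original
for generation in range(1, 11):
    buf = io.BytesIO()
    current.save(buf, format="JPEG", quality=75)  # lossy encode at fixed quality
    buf.seek(0)
    current = Image.open(buf)
    current.load()  # force the decode before the buffer is replaced

    # Mean absolute pixel difference between this generation and the original.
    diff = ImageChops.difference(original, current)
    mean_err = sum(diff.getdata()) / (SIZE * SIZE)
    print(f"generation {generation}: {len(buf.getvalue())} bytes, "
          f"mean pixel error vs original = {mean_err:.2f}")
```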
Perhaps you know all this and were getting at something subtler than just “Compress it repeatedly to make it smaller and smaller”, but I just wanted to point out the problems with the naive approach anyway, so no one gets the wrong impression.