My personal experience suggests otherwise. I’ve saved a significant amount of space whenever I compress a drive; there are just a lot of uncompressed files on the file system. I’ve never actually seen a file take up more space, because compressed files don’t have to align to the file system block boundaries. You can fit a lot of small files into the same 4 KB that every file takes up at minimum when uncompressed.
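To get a feel for what that per-file block rounding costs, here’s a rough Python sketch that walks a tree and totals the slack. (BLOCK_SIZE = 4096 is an assumption; the real allocation unit, and whether tiny files even consume a block, depends on the file system.)

```python
import os

BLOCK_SIZE = 4096  # assumption: typical 4 KB allocation unit; check your file system

def slack_report(root):
    """Walk a directory tree and estimate space lost to block alignment."""
    logical = allocated = count = 0
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                size = os.path.getsize(path)
            except OSError:
                continue  # skip unreadable entries
            # Round each file up to a whole number of blocks.
            blocks = max(1, -(-size // BLOCK_SIZE))  # ceiling division
            logical += size
            allocated += blocks * BLOCK_SIZE
            count += 1
    print(f"{count} files: {logical} bytes of data, "
          f"{allocated} bytes allocated, "
          f"{allocated - logical} bytes of slack")

slack_report(".")
```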
I don’t currently compress by default on my SSD, mostly because I value longevity* over storage space there. (With a smaller SSD, I’d probably have some folders compressed by default.) But on HDDs, I compress by default now. It even speeds things up, since less data has to be read off the disk, much like how the Internet sends compressed data back and forth for the same reason.
Sure, it doesn’t save more than a few bytes on things that are already compressed, but the cost is negligible on spinning drives and modern CPUs.
*Sure, reading files on an SSD doesn’t really cause significant wear. But if a file gets edited often, compression means the entire file must be rewritten every time. And there does seem to be some real-world data suggesting this can reduce drive longevity, though it is much less of a problem with modern wear-leveling and disk-caching techniques.
Compressors like the lz4 mentioned above, and probably zstd, will quickly abort compression on an incompressible block, such as a block belonging to an MP3. That means the CPU won’t waste many cycles attempting to compress incompressible data.
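You can see that behavior from Python with, e.g., the third-party zstandard bindings (an assumption on my part; any zstd or lz4 binding shows the same pattern): random bytes, which look like already-compressed media, come back essentially unshrunk and cost very little time.

```python
import os
import time
import zstandard  # third-party binding, assumed installed: pip install zstandard

def try_compress(label, data):
    cctx = zstandard.ZstdCompressor(level=3)
    start = time.perf_counter()
    out = cctx.compress(data)
    ms = (time.perf_counter() - start) * 1000
    print(f"{label}: {len(data):>9} -> {len(out):>9} bytes in {ms:6.1f} ms")

# Highly compressible: repetitive text shrinks dramatically.
try_compress("repetitive text", b"the quick brown fox " * 50_000)
# Incompressible: random bytes stand in for an MP3/JPEG payload;
# zstd gives up quickly and emits them nearly verbatim.
try_compress("random bytes   ", os.urandom(1_000_000))
```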
That sort of inline compression is different from “tail packing,” where a file system puts several small files, or the tails of several larger files, into what would otherwise be empty slack space at the end of another file’s last block.
As with all things file system, it is very difficult to make bright-line rules. In some cases compression may be a big win in space and speed, and in others it may hurt. The best, and possibly only, answer is to test and benchmark particular use cases on particular hardware.
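Even something as simple as timing sequential reads of the same data stored compressed and uncompressed gets you most of the way. A minimal sketch, assuming a copy of the data in each state (the cache-dropping step is Linux-specific, and the mount option is just one example from btrfs):

```python
import sys
import time

def time_read(path, chunk=1 << 20):
    """Sequentially read a file and report throughput.

    Run it against the same data stored compressed and uncompressed
    (e.g. two btrfs directories, one mounted with compress=zstd).
    Drop the page cache between runs or the numbers are meaningless;
    on Linux: echo 3 | sudo tee /proc/sys/vm/drop_caches
    """
    total = 0
    start = time.perf_counter()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            total += len(block)
    secs = time.perf_counter() - start
    print(f"{path}: {total / 1e6:.0f} MB in {secs:.2f} s "
          f"({total / 1e6 / secs:.0f} MB/s)")

for p in sys.argv[1:]:
    time_read(p)
```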
I recall a research paper from back in the day (the mid-’80s, when disk space was expensive) that implemented exactly this concept of subdividing blocks on disk: if the tail end of a file left a block half or a quarter empty, the space was flagged and the remaining half or quarter would be used by other files.