I always understood that to be the difference between the actual file size and whatever slop there is in the file system.
So for example, you may have a file that’s a certain size, say… 3.2 megabytes. But due to the way the hard drive/SSD stores files, it may actually take up 3.25 or 3.3 megabytes on disk. Part of this is simply due to the way that drives and file systems work - they store data in “clusters”, which are fixed-size chunks in which drive space is allocated. The default for Windows NTFS is 4k, but sometimes larger cluster sizes are useful, since a bigger chunk is allocated/read at once. So database servers and similar workloads often use 64k cluster sizes.
What this means is that your file is broken up into cluster-sized chunks and stored, and any left-over space in the last cluster just sits empty. So with a 4k cluster size, you get a whole bunch of full 4k chunks and usually one partially filled one at the end. But if you write a 1k file, you’re writing 1k into a single cluster and leaving 3k of empty space. You can imagine how that works for a 64k cluster size - lots left over. On the other hand, compared to 4k clusters, we’re talking 1/16th the number of clusters to seek and read.
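If it helps, here’s that rounding written out as a quick sketch (my own example, not anything NTFS-specific - the cluster sizes are just the ones mentioned above, and it ignores wrinkles like NTFS compression or very small files that get stored resident in the MFT):

```python
import math

def size_on_disk(file_size: int, cluster_size: int = 4096) -> int:
    """Bytes actually allocated: the file size rounded up to whole clusters."""
    return math.ceil(file_size / cluster_size) * cluster_size

print(size_on_disk(1024))         # 4096  -> a 1k file wastes 3k with 4k clusters
print(size_on_disk(1024, 65536))  # 65536 -> the same file wastes 63k with 64k clusters
```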
All this makes a lot of difference on spinning disks, and a lot less on solid state drives, where seek time is negligible, so reading a larger number of smaller clusters costs next to nothing.
To use a personal example, I have an MP3 on disk that is 7.63 MB (8,002,373 bytes) file size, and 7.63 MB (8,003,584 bytes) size on disk - only 1,211 bytes difference. That’s the last-cluster slop I was talking about above: my cluster size is 8k, so I have 977 clusters allocated to that file (8,003,584 bytes @ 8192 per cluster), but the last cluster only has 6,981 bytes written to it, leaving 1,211 bytes of slack - allocated to the file but unused.
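If you want to sanity-check that arithmetic, here’s the same rounding with my MP3’s numbers plugged in (just a sketch, same assumptions as above):

```python
file_size = 8_002_373                 # reported file size in bytes
cluster = 8192                        # my volume's 8k cluster size
clusters = -(-file_size // cluster)   # ceiling division -> 977 clusters
on_disk = clusters * cluster          # 8,003,584 bytes, the "size on disk"
last_used = file_size - (clusters - 1) * cluster  # 6,981 bytes in the last cluster
slack = on_disk - file_size           # 1,211 bytes of slack
print(clusters, on_disk, last_used, slack)
```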
This adds up over the course of a bunch of files, as you can imagine.
One thing you can do is look here and follow the steps to find out your cluster size:
How to: Find out NTFS partition cluster size/Block size > Blog-D without Nonsense (dannyda.com)
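If you’d rather not click through, running `fsutil fsinfo ntfsinfo C:` from an elevated prompt should show the “Bytes Per Cluster” figure. And if you want it programmatically, here’s a rough sketch using the Win32 GetDiskFreeSpaceW call via Python’s ctypes (Windows only, and my own example rather than anything from the linked article):

```python
import ctypes

def cluster_size(root: str = "C:\\") -> int:
    """Return the allocation unit (cluster) size in bytes for the given volume."""
    sectors_per_cluster = ctypes.c_ulong(0)
    bytes_per_sector = ctypes.c_ulong(0)
    free_clusters = ctypes.c_ulong(0)
    total_clusters = ctypes.c_ulong(0)
    ok = ctypes.windll.kernel32.GetDiskFreeSpaceW(
        ctypes.c_wchar_p(root),
        ctypes.byref(sectors_per_cluster),
        ctypes.byref(bytes_per_sector),
        ctypes.byref(free_clusters),
        ctypes.byref(total_clusters),
    )
    if not ok:
        raise ctypes.WinError()
    return sectors_per_cluster.value * bytes_per_sector.value

print(cluster_size("C:\\"))  # e.g. 4096 on a default NTFS volume, 8192 in my case
```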