Close.
Disk management 101 -
As mentioned, data is written in blocks. The FAT (or its equivalent in other file systems) keeps a list of all blocks, whether they are used or free. There is also a link for each block indicating either “next block” or “end of chain”.
All free space blocks are kept in a chain (each block points to the next)
When you write a file, it gets the first free block and starts writing.
You add a “directory entry” in the directory, or list of files. The directory entry includes the name and points to the first block.
It detaches that block from the free list and writes data to it. If the file goes past one block, it detaches another free block from the list and writes to that, and so on until the whole file is written.
As mentioned, the file size may not be an even multiple of 4K, so there may be a chunk of old data beyond the “end of file” marker in the last block.
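The allocation dance above can be sketched in a few lines of Python (a toy model, not any real FAT’s on-disk layout; the block size, markers, and the Disk class are my own inventions for illustration):

```python
# Toy model of FAT-style allocation: a free list threaded through the
# same "next block" links that file chains use.
BLOCK_SIZE = 4096
END = -1  # "end of chain" marker

class Disk:
    def __init__(self, n_blocks):
        self.fat = [END] * n_blocks          # next-block links
        self.data = [b""] * n_blocks         # block contents
        # Chain every block into the free list: 0 -> 1 -> 2 -> ...
        for i in range(n_blocks - 1):
            self.fat[i] = i + 1
        self.free_head = 0
        self.directory = {}                  # name -> (first block, size)

    def write_file(self, name, payload):
        first = prev = END
        for off in range(0, len(payload), BLOCK_SIZE):
            blk = self.free_head             # detach a block from the free list
            self.free_head = self.fat[blk]
            self.fat[blk] = END
            self.data[blk] = payload[off:off + BLOCK_SIZE]
            if first == END:
                first = blk                  # directory entry will point here
            else:
                self.fat[prev] = blk         # link previous block to this one
            prev = blk
        self.directory[name] = (first, len(payload))

d = Disk(16)
d.write_file("notes.txt", b"x" * 5000)       # 5000 bytes needs two 4K blocks
first, size = d.directory["notes.txt"]
print(first, d.fat[first], size)             # prints: 0 1 5000
```

Note that the last block holds only 904 bytes of the file; the rest of it is whatever was there before, which is exactly the “old data past end-of-file” problem above.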
When you erase a file, you mark the first character of the directory entry as “this file is deleted”. The directory entry is then available for re-use and is ignored when listing files.
The data chain of blocks for that file is tacked onto the end of the free space queue.
So elementary file recovery means guessing the first character of the name, which tells you the start of the file chain (and the size). You recover that many 4K blocks from the free space queue: detach them from free space and add them back to the recovered file’s directory entry. Since the chain stayed intact, it’s simple to recover the whole lot.
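Here’s the delete/undelete trick sketched the same way (again a toy: real FAT marked deleted entries with the byte 0xE5 as the first name character, and the block numbers here are made up for the example):

```python
# Delete tacks the file's chain onto the end of the free list and clobbers
# the first character of its directory entry; undelete reverses both steps.
END = -1

fat = {10: 11, 11: 12, 12: END,   # file chain: 10 -> 11 -> 12
       3: 7, 7: END}              # free list:  3 -> 7
free_head = 3
directory = {"REPORT.TXT": (10, 9000)}

def delete(name):
    first, size = directory[name]
    # Mark the entry deleted by clobbering the first character of the name.
    directory["?" + name[1:]] = (first, size)
    del directory[name]
    # Tack the file's chain onto the end of the free space queue.
    tail = free_head
    while fat[tail] != END:
        tail = fat[tail]
    fat[tail] = first

def undelete(deleted_name, guessed_first_char):
    global free_head
    first, size = directory[deleted_name]
    # Detach the file's chain from the free list again.
    prev, blk = None, free_head
    while blk != first:
        prev, blk = blk, fat[blk]
    if prev is None:
        free_head = END
    else:
        fat[prev] = END
    # Restore the directory entry under the guessed name.
    directory[guessed_first_char + deleted_name[1:]] = (first, size)
    del directory[deleted_name]

delete("REPORT.TXT")
undelete("?EPORT.TXT", "R")
print(sorted(directory))          # the file is back under its guessed name
```

This only works because nothing touched the chain between delete and undelete; one reused block and the walk above lands in someone else’s data.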
You can see where this fails. If the directory entry or any of the file’s blocks in free space gets reused, the chain is broken, and you may only recover some of the file. If the file is unreadable gobbledygook, like a photo, it’s harder to recover than text, since you can easily read fragments of text.
“Cleaner” programs overwrite free space with random bits to remove readable files and fragments. They will overwrite the free space queue, the empty space at the tails of all files, and deleted files’ directory entries (so you can’t tell what the filename was). Of course, if you run such a program regularly, it may spend a lot of time rewriting tails of end blocks that were already erased the last time you ran the program.
Theoretically, one overwrite should be enough, but one theory says the write head can wander, so you may not completely overwrite the track; there may be residue of the previous write on one edge or the other of the recorded track. Similarly, a write may not flip all the magnetic particles 100%, so a detectable residue of the previous data may remain. Of course, this gets us into CIA/NSA territory. But a good cleaner will overwrite all available blank space 1, 3, 7, or even 35 times with patterns and random bits so that there is no chance the empty space contains the residue of readable data.
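A free-space scrub along those lines might look like this (illustrative only: the pass patterns are arbitrary examples, not any particular tool’s scheme, and real cleaners write to the raw device rather than a Python list):

```python
# Multi-pass scrub of free blocks: fixed patterns first, then random bits.
import os

BLOCK_SIZE = 4096
PATTERNS = [b"\x00", b"\xff", b"\xaa"]   # example fixed-pattern passes

def scrub_free_blocks(disk, free_blocks, passes=3):
    """disk: mutable list of per-block bytes; free_blocks: indices to wipe."""
    for n in range(passes):
        for blk in free_blocks:
            if n < len(PATTERNS):
                disk[blk] = PATTERNS[n] * BLOCK_SIZE
            else:
                disk[blk] = os.urandom(BLOCK_SIZE)  # random-bit passes

# Four blocks of leftover data; blocks 1 and 3 are on the free list.
disk = [b"old secret data".ljust(BLOCK_SIZE, b"\x00") for _ in range(4)]
scrub_free_blocks(disk, free_blocks=[1, 3], passes=3)
print(disk[1][:4], disk[0][:4])   # block 1 is wiped, block 0 is untouched
```

Blocks still in use are left alone, which is why a cleaner also has to handle file tails separately: only the slack past end-of-file in a live block gets wiped, not the block itself.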
Take a real-life example: the computer that 9/11 conspirator Moussaoui used for email was, IIRC, a copy-shop public PC that got rebuilt regularly. If there was ever a case where every available forensic resource was probably used, this was it. Yet at the trial they did not mention finding any relevant data. After multiple reformats, they probably didn’t.
As mentioned, there’s also VSS, which makes shadow copies of the disk in the free space. If this service is on, you want to turn it off and delete the shadow copies before running the cleaner program. Plus, if the plug was yanked mid-session, there’s the swap file (pagefile.sys), where memory is freed up on the computer by swapping less-used memory to disk. Depending on disk use, that may or may not contain relevant data.
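On Windows, those pre-cleaning steps might look something like this (commands from memory, run from an elevated prompt; exact syntax and what’s permitted varies a bit between Windows versions, so check vssadmin’s built-in help on your machine first):

```shell
rem List, then delete, the existing VSS shadow copies:
vssadmin list shadows
vssadmin delete shadows /all

rem Stop the Volume Shadow Copy service for this session:
net stop VSS

rem Tell Windows to zero the pagefile at every shutdown
rem (registry change; takes effect after the next reboot):
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management" /v ClearPageFileAtShutdown /t REG_DWORD /d 1 /f
```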
And there may be various log files, sent mail, etc. that also track what you were up to (the infamous IE cache, so you can tell if someone googled “Chloroform”). Empty that cache and delete all cookies before doing a disk clean to help hide your tracks.