Why doesn't CUT/PASTE fix disk fragmentation?

Fragmentation is when files become noncontiguous, duh. So let’s say I have a drive letter called O:\ and the only thing on O:\ is a 5GB file called backup.bkf. For whatever reason today I decided to defrag C:\ with the graphical utility WinXP has, the one that’s mostly blue and finishes with even more blue. When C:\ finished I moved on to O:\, which only has the one file. There is no swap file on it, no hidden system folders for SysRestore, nothing other than that one backup.bkf. Anyway, defrag shows that drive as red with dozens and dozens of vertical lines in the display.

I would have thought that doing a CUT/PASTE to C:\ temporarily and then a CUT/PASTE back to O:\ would fix it faster than doing a defrag, but it was just the same. So perhaps PASTE doesn’t paste contiguously? Any thoughts?

Why would it? If there is no contiguous run of free sectors bigger than the file you are pasting, it’s going to be written all over the disk. Besides, cut and paste probably doesn’t even move the file - it just changes the directory entry (WAG; I’m a Mac guy, not a Windows guy).

Things like “Cut and Paste” and even reading and saving files work at an entirely different level from the physical location of files on disk. When your text editor asks the operating system to save the contents of your text file to disk, it neither knows nor cares where on the disk the file gets placed.
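As an illustration (in Python, and assuming a Linux box with the standard filefrag tool from e2fsprogs installed - my own example setup, not anything from the OP’s WinXP machine): the program only asks the OS to write bytes; whether those bytes land in one extent or fifty is entirely the file system’s decision, and you can only find out after the fact.

```python
import os
import subprocess

# The application's view: just ask the OS to write some bytes. There are
# no sector numbers here - placement is entirely the file system's business.
with open("/tmp/example.bin", "wb") as f:
    f.write(b"\0" * (64 * 1024 * 1024))   # 64 MB of data
    f.flush()
    os.fsync(f.fileno())                  # make sure it's actually on disk

# Only after the fact can we ask the file system how it laid the file out.
# filefrag reports how many extents (fragments) were used.
print(subprocess.run(["filefrag", "/tmp/example.bin"],
                     capture_output=True, text=True).stdout)
```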

My guess for the reason your “Cut and Paste” approach to defragmentation failed is that the operating system re-used the original (already fragmented) free sectors that your file had occupied when it wrote the file back. If you want to defragment your disk, use the defrag utility.

Normally it would just change the directory entry, but I’m using 2 different drive letters. . . actually 2 different physical disks. But I still would have figured that when writing a file it would use contiguous space, just, ya know, b/c it seems like the right thing to do! Maybe it’s just me. :confused:

That’s what I figured; it just seemed very inefficient.

I understand you pasted to a different physical drive. Perhaps Windows doesn’t look to see if there are 5 gigs of contiguous space free. Perhaps it just writes as fast as possible to the spinning disk as the head moves across, not in a linear fashion like a record player needle.

It seems not. I was under the impression that all file systems actually did look for contiguous space. I just assumed the head would write linearly and not jump all over the platter in seemingly random places.

It was efficient in the sense that it didn’t defrag all of O:\ first so you could write your file back in. It just took the space it knew was free and left well enough alone. True, this will make reading the file take marginally longer as the head seeks around for the pieces, but that’s nowhere near the several minutes/hours it would take to defrag the disk.

You don’t say what file system is on the drive, so here are some general principles…

Any space allocation system (file system space, memory) is a compromise between speed and best results, and what you have seen is a result of speed being the overriding factor.

The file system maintains a Free Sector List (FSL). On a clean, empty drive, the FSL is a single entry covering one big block. As allocations are made (i.e. as files are created), that big FSL chunk gets smaller and smaller. When deletions are made, the newly available chunks of space are added back as new entries in the FSL. The FSL is usually sorted in some way, but (for performance reasons) adjacent free entries will not necessarily be consolidated.

New files get space from the FSL. The allocation strategy can be complex or simple (first fit, best fit, or just start at the beginning). If a file can fit in a single chunk from the FSL, it gets that chunk (and any leftover is chopped off and added back to the FSL). If the file does not fit into any free chunk, it gets fragmented over several free chunks, and the leftover bits are added back to the FSL. At this stage, the allocation strategy could spend some time sorting and consolidating the FSL to create a single big allocation, but that costs processing time and may be too slow an operation, or too risky to attempt (particularly for a journalled file system, or one that guarantees write ordering).
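To make that concrete, here is a toy first-fit allocator in Python (purely illustrative - it is not how FAT or NTFS store their free-space metadata): the FSL is a list of (start, length) free chunks, and when no single chunk is big enough the file gets split (i.e. fragmented) across several of them.

```python
# Toy free-space allocator: the "FSL" is a list of (start, length) free chunks.
# This illustrates first-fit allocation, not any real file system's on-disk format.

def allocate(fsl, size):
    """Return a list of (start, length) extents for a file of `size` sectors,
    taking space from the free list with a first-fit strategy."""
    extents = []
    # First, try to find a single free chunk big enough (no fragmentation).
    for i, (start, length) in enumerate(fsl):
        if length >= size:
            extents.append((start, size))
            leftover = length - size
            if leftover:
                fsl[i] = (start + size, leftover)    # chop off the remainder
            else:
                del fsl[i]
            return extents
    # Otherwise, fragment the file over several free chunks, front to back.
    remaining = size
    while remaining and fsl:
        start, length = fsl.pop(0)
        used = min(length, remaining)
        extents.append((start, used))
        remaining -= used
        if used < length:                            # put the leftover back
            fsl.insert(0, (start + used, length - used))
    if remaining:
        raise IOError("disk full")
    return extents

def free(fsl, extents):
    """Deleting a file just adds its extents back to the free list (sorted),
    without merging neighbours - which is what lets the FSL get fragmented."""
    fsl.extend(extents)
    fsl.sort()

# A drive whose free space is already chopped up by earlier small files:
fsl = [(0, 3), (10, 4), (20, 2), (30, 8)]
print(allocate(fsl, 12))   # -> [(0, 3), (10, 4), (20, 2), (30, 3)]  four fragments
```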

It should be obvious that after a few hundred file write-and-delete cycles, the FSL can get so fragmented that the free chunks are all small and any sizeable file written ends up fragmented. With memory, there are usually consolidation processes that kick in to ensure that bigger chunks of free memory stay available. For a file system, that process is called defragmenting; it may be a continuous disk management process, but often is not.
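The free-list half of that consolidation is conceptually just "sort the chunks, then merge the adjacent ones". Continuing the toy model above (still purely illustrative; a real defragmenter also has to physically move file data so that the free chunks end up adjacent in the first place):

```python
def consolidate(fsl):
    """Merge adjacent free chunks into bigger ones - the free-list half of
    what a defragmenter does. The slow half, moving file data around so
    that free chunks become adjacent, is not modelled here."""
    fsl.sort()
    merged = []
    for start, length in fsl:
        if merged and merged[-1][0] + merged[-1][1] == start:
            prev_start, prev_len = merged[-1]
            merged[-1] = (prev_start, prev_len + length)   # extend previous chunk
        else:
            merged.append((start, length))
    return merged

# Three small free chunks that happen to sit next to each other:
print(consolidate([(20, 2), (0, 10), (10, 10)]))   # -> [(0, 22)]
```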

My guess is that your drive was used for lots of small files at some stage. The FSL was fragmented, the allocation strategy used by the file system did not consolidate it, and no single contiguous block could be found to fit the file in, so it fell back to a poor “start at the beginning” allocation scheme. Empty the drive, defrag it, and copy the file back.

Si

It should be noted that modern filesystems don’t need to be defragmented. The filesystems in use under Linux and the open-source BSDs (possibly including MacOS X, for all I know) don’t need to be.

(Missed the edit window.)

OK, they’ll effectively never need to be defragmented and it wouldn’t help you anyway. A good cite on the subject:

Would it be quicker to a) defrag as is, or b) move the file to a temporary location (from O: to C: in the OP), defrag the now-empty drive (which presumably just consolidates the Free Sector List, without the expense of moving around great chunks of file), then copy the file back to the original drive?

Move and defrag - both safer and faster.

Si