Disk defragmentation - c'mon, be honest with me now

I’m wondering whether disk defragmentation does anything of appreciable value these days for a basic PC

Back in the day - the early-to-mid ’90s, when I first got a computer - sure, I remember my hard disk grinding a bit when I was due for a defrag. I think I even remember system slowdown, but that was over a decade ago and I wasn’t exactly keeping records.

But hard disk read speeds have increased fairly steadily, as have RAM speed and capacity, and page file implementations have improved (at least for Windows, as far as I know).

I wonder if hard disk read/write speed is any sort of bottleneck anymore, and whether defragging helps out.

These days, even running basic benchmarks, I never see a significant difference before and after disk defragmentation. I never hear my hard drive running (they’ve gotten nice and quiet). And I always get frustrated that a defrag (I’ve used the Windows default and Diskeeper) takes nearly a metric hour to run.

Does defragmenting a hard drive actually benefit it anymore? I especially wonder since programs that purport to defrag better than Windows’ default are sold for actual money.

You’re assuming everyone has the latest and greatest hardware and software.

My home machines run Windows XP Pro on hardware that’s more than six years old. I have no reason to upgrade when they work just fine.

My work machines run Windows 2000 on hardware that’s probably five to six years old. I have no say when it comes to hardware and software upgrades, although the announced timeline means we will upgrade to XP about the time Vista is decommissioned by Microsoft. (Some of our critical machines at work still run MS-DOS because there are no upgrades available for the original DOS programs. And don’t suggest we join the 21st century, either. I work for a federal agency that is not in the business of war and paranoia, so we really don’t have the money.)

I have enough experience to know that disk defrag tools have significant benefits for me.

Eh. I’m running Win XP Pro on a system that’s gone 4 years without a hardware upgrade.

I can think back six years, and even then - defrag didn’t seem to do much.

I haven’t seen a real benefit since my Win98 days, back in the mid 90s.

…and I did run Win 2000 back when it was new - lousy, lousy OS for a home user. I used it anyway, just to be difficult :smiley: But even then - and if your work mandates it, same deal - Win2k never seemed to me to gain much from defragging.

I’ve never found defrag to do much at all, even in the old days. In fact, last time I defragged (a few months ago) it caused my hard drive to die.

It all depends on the filesystem, really. FAT and FAT32 partitions really need to be defragged every 3-6 months or so, or eventually they’ll corrupt themselves. Plus, I remember getting back as much as half a gig of “new” free space simply by defragging the FAT32 partition that I used for games, which saw a lot of installing/uninstalling action. FAT was evil like that.

NTFS partitions (that’s essentially Win2k/XP partitions, and probably Vista too, although I stay well clear of it) are a lot better and can work all right even with significant fragmentation, but eventually you’re still bound to see some kind of slowdown - or rather, your fancy superduper hard drive will be less duper.

There’s no helping it; it’s like those gamebooks from the 80s: read half of page 45, then go to page 248, read three lines, go back to page 3… No matter how you look at it, the shuffling time alone makes them harder and slower to read than regular books. The same goes for heavily fragmented drives.
Your disk may heat up faster, you may experience choppiness or rubberbanding (that’s when your app slows doooooown then catches up superfast), and so on. Maybe not enough that you’d notice with regular work stuff like databases, word processors and the like, but moviemakers, code monkeys, gamers and anyone else running hardcore graphical apps and other stuff that causes lots of data swapping know the value of a clean drive.

Yeah, it’s largely a hangover/old wives’ tale from the days of FAT. All file systems get fragmented, but with better designs it is much less of a problem.

It depends on what you’re doing with the disk. If you’re frequently reading/writing small files, even an NTFS filesystem can slow down from heavy fragmentation. One notable situation is software development using an auto-compiling development environment. With even a moderate-sized project, compilation can slow noticeably after several weeks of use.
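
If you want to put a number on that instead of eyeballing it, something as crude as the sketch below works. It’s my own toy benchmark, nothing from any defrag tool; the path is just an example, and the OS file cache will make any warm rerun look artificially fast, so compare cold runs before and after a defrag.

```python
# Rough do-it-yourself benchmark: time reading every file under a project
# tree. Thousands-of-small-files workloads like a build directory are where
# fragmentation shows up, if it shows up at all.
import os
import time

ROOT = r"C:\projects\my_build_tree"  # hypothetical path, point it at your own tree

def read_all_files(root):
    total_bytes = 0
    start = time.perf_counter()
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            try:
                with open(os.path.join(dirpath, name), "rb") as f:
                    total_bytes += len(f.read())
            except OSError:
                pass  # skip locked or unreadable files
    return total_bytes, time.perf_counter() - start

if __name__ == "__main__":
    nbytes, secs = read_all_files(ROOT)
    print(f"Read {nbytes / 1e6:.1f} MB in {secs:.2f} s "
          f"({nbytes / 1e6 / max(secs, 1e-9):.1f} MB/s)")
```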

An hour to defrag? Need to do it more often. Mine never takes over 10 minutes. Just the stock XP-Pro one.

Of course I keep massive storage that never changes on a different drive or three or 5… Hard drives are cheap. My little HD LED blinks less often with an organized drive.

YMMV

This is purely anecdotal, but I scanned and defragged the hard drive on my friend’s computer (HP, less than 3 years old) a couple of weeks ago and the performance noticeably improved. Your mileage obviously varies.

Badly-designed filesystems need to be defragged, and FAT is the worst design still in widespread use. To see why, look at this page.

To summarize the explanation given there, FAT uses a best-fit algorithm which causes big problems if a file ever grows (or shrinks), whereas well-designed filesystems use a worst-fit algorithm (well, kind of) that allows for plenty of growth. In fact, you want fragmentation with a worst-fit algorithm because you want files to be positioned near other files on the disk, in the ‘holes’ left over from previous allocations.
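
To make that contrast concrete, here’s a toy free-extent allocator (my own sketch in Python, nothing like real FAT or NTFS code): with the smallest-hole-that-fits policy the file ends up with nowhere to grow in place, while picking the biggest hole leaves free blocks right behind it.

```python
# Toy free-extent allocator -- just the best-fit vs. worst-fit contrast from
# the explanation above. Free space is a list of (start, length) extents in blocks.

def pick_extent(free_extents, size, policy):
    """Return the index of the free extent to allocate from, or None."""
    candidates = [i for i, (_, length) in enumerate(free_extents) if length >= size]
    if not candidates:
        return None
    if policy == "best":    # smallest hole that fits (the FAT-style policy above)
        return min(candidates, key=lambda i: free_extents[i][1])
    return max(candidates, key=lambda i: free_extents[i][1])  # worst fit: biggest hole

def allocate(free_extents, size, policy):
    """Carve `size` blocks out of a free extent; return the starting block."""
    i = pick_extent(free_extents, size, policy)
    if i is None:
        return None
    start, length = free_extents[i]
    if length == size:
        del free_extents[i]
    else:
        free_extents[i] = (start + size, length - size)
    return start

# Free space: a tight 10-block hole at block 0 and a roomy 100-block hole at block 50.
for policy in ("best", "worst"):
    free = [(0, 10), (50, 100)]
    start = allocate(free, 10, policy)   # write a 10-block file
    # The file now wants to grow by 5 blocks. It stays contiguous only if the
    # blocks immediately after it are still free.
    can_grow = any(s == start + 10 and length >= 5 for s, length in free)
    print(f"{policy} fit: file at block {start}, grows in place: {can_grow}")
```

Running it prints that the best-fit file (at block 0) cannot grow in place, while the worst-fit file (at block 50) can.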

(So, why does FAT suck so hard? It was designed for floppy disks, and it was assumed that most people would only have a few files per floppy and use new floppies to save new files. Remember that a hard drive was optional in the original IBM PC.)

Floppy disks were optional in the original IBM PC. The original model could be used with a tape cassette recorder for mass storage. A hard disk was not available from IBM until the introduction of the PC/XT.

All file systems have pathological cases that can produce fragmentation. It’s often a question of just picking the right file system for the usage patterns that you expect, or in some cases, writing your own.

That goes to emphasize my point: The early filesystem for the IBM PC just wasn’t made for this modern world.

Even if fragmentation is taken advantage of, there’s always a use-case that produces an inefficient data layout on disk.

My point is that FAT in all its forms is horribly obsolete, was badly-designed for a hard drive filesystem even when it was new, and certainly isn’t the standard by which any other filesystem in modern use should be judged. More to the point, the misdesign of FAT is what produced the modern Windows obsession with defragging, which is completely out of place the moment you replace FAT with a filesystem that doesn’t actively suck and blow.

You are assuming that “all” file systems are designed with contiguous storage in mind. One exception is the AS/400, which by design scatters the pages of each file/object across all of the available disks. This allows data to be read in parallel by lots of disk arms, as well as reducing contention on popular resources.
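
A cartoon version of that layout (my own made-up numbers, nothing to do with the real AS/400 storage manager): scatter pages round-robin and a large read fans out across every arm instead of queueing on one drive.

```python
# Cartoon of page scattering across disks. Pages go round-robin over every
# disk, so a big read is split among all the arms instead of streaming from one.

NUM_DISKS = 4
PAGE_SIZE = 4096  # bytes

def place_pages(file_size_bytes):
    """Map each page number of a file to a (disk, slot) pair, round-robin."""
    num_pages = -(-file_size_bytes // PAGE_SIZE)  # ceiling division
    return {page: (page % NUM_DISKS, page // NUM_DISKS) for page in range(num_pages)}

layout = place_pages(10 * PAGE_SIZE)  # a 10-page file
for page, (disk, slot) in layout.items():
    print(f"page {page:2d} -> disk {disk}, slot {slot}")
# Reading the whole file now issues 2-3 page reads per disk, which the four
# drives can service at the same time, and no single drive becomes the hot spot.
```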

Kobal2: “rubberbanding”! Excellent term. And my ignorance fought. Welcome to the board, by the way.

Novell Netware had a proprietary disk storage scheme and absolutely no defrag utility for their server software. I ran a 3.11 server for ten years with no apparent change in performance.

Two reasons why it was a good system, though they might not translate well into a single-user environment. First, it was a multi-user server, so the assumption was made that read & write requests would be arriving rapidly for files that would have no commonality. It dispensed with sequential reads, but queued all requests from all users for a very short time. When a maximum time had elapsed (milliseconds, configurable) or the queue filled up, the requests were sorted so that the head arm (multiple heads on one arm in a typical single drive) would make a single sweep from the inside out or vice versa, reading all data without changing direction, and without regard to the user that made the request.
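
A bare-bones sketch of that “batch, sort, sweep” idea in Python (toy code of mine, obviously not NetWare’s): collect a batch of requests, then order them by cylinder so one pass of the head serves them all.

```python
# Elevator-style scheduling in miniature: a batch of queued requests -- from
# any number of users -- is sorted by cylinder so a single sweep of the head
# services all of them, regardless of arrival order.

def plan_sweep(queued_cylinders, direction="inside_out"):
    """Order one batch of cylinder requests for a single sweep of the head."""
    return sorted(queued_cylinders, reverse=(direction == "outside_in"))

# Requests from several users, in arrival order:
batch = [98, 183, 37, 122, 14, 124, 65, 67]
print(plan_sweep(batch))                 # [14, 37, 65, 67, 98, 122, 124, 183]
print(plan_sweep(batch, "outside_in"))   # [183, 124, 122, 98, 67, 65, 37, 14]
# Serving them in arrival order instead would drag the head back and forth
# across the platter eight times rather than making one clean pass.
```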

It also cached the entire directory data in RAM so the head didn’t have to return to the directory storage area for each request, as it does for PC floppy access. The directory was written back to the drive as a low-priority task. There was always the danger of a power outage preventing an update, but a UPS fixed that.
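
That directory trick is basically a write-back cache. A toy sketch of the idea (my own, not NetWare’s real structures; `load_from_disk` and `write_to_disk` are hypothetical stand-ins for whatever the server actually did):

```python
# Toy write-back directory cache. Lookups are answered from RAM; changes are
# only marked dirty and written back later by a low-priority background task.

class DirectoryCache:
    def __init__(self, load_from_disk, write_to_disk):
        self._entries = load_from_disk()   # read the whole directory once at mount
        self._dirty = False
        self._write = write_to_disk

    def lookup(self, name):
        return self._entries.get(name)     # served from RAM, no head movement

    def update(self, name, metadata):
        self._entries[name] = metadata
        self._dirty = True                 # don't touch the disk yet

    def flush_if_dirty(self):
        """Run by a low-priority task (and at shutdown); a UPS covers power loss."""
        if self._dirty:
            self._write(self._entries)
            self._dirty = False

# Example with a throwaway in-memory "disk":
fake_disk = {"AUTOEXEC.BAT": {"size": 512}}
cache = DirectoryCache(lambda: dict(fake_disk), fake_disk.update)
cache.update("REPORT.TXT", {"size": 2048})
print(cache.lookup("REPORT.TXT"))   # found in RAM even before the flush
cache.flush_if_dirty()              # the background task would do this
```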

This was just one of several concepts of which Novell was more advanced than Microsoft.

Thanks for posting what I was going to, Musicat! But there’s no need to use the past tense, Novell and NetWare are still going strong - well, strongish, anyway.

Musicat: Novell was just following standard practice of the era, which is still by and large standard practice to the best of my knowledge. (The only thing I think they’re missing is journaling.)

Finally, assuming ‘single-user’ means ‘single-tasking’ is wrong and always has been. A single user often has multiple tasks running in the background (possibly hidden as part of the OS design). FAT was built assuming single-tasking, which made sense given the severe limitations of MS-DOS (if not so much the hardware itself).

It’s become more difficult to do I/O scheduling as the disk geometry has become effectively hidden behind the abstraction presented by the disk controller. There will always be a need to make a choice between throughput, fairness, or some combination of the two.

Says you. I used it at home for six years. It’s really not much different at all from XP.

I can’t contribute anything very technical, but I can say that when I started running Diskeeper on my laptop, stability improved a lot. It used to hang on shutdown all the time; now it never does. I didn’t notice any subjective increase in speed, though, even though Diskeeper claimed about a 30% improvement.