Tool for bad sector repair on hard drives

I’d say SpinRite (it was the go-to in DOS days), but it hasn’t been updated since 2004. Wackypedia seems to think it would run just fine today.

I wouldn’t bother.

If your drive has any bad sectors, it’s probably at most a few weeks away from total failure. You should start planning how you’re going to transfer that data to a new drive instead of trying to “fix” this one which is almost certainly on its way out.

EDIT: oops I got tricked by the zombie thread, sorry.

I agree with venerable old SpinRite; however, bad sectors do not mean imminent collapse. All metal drives have some bad sectors straight out of the factory.
Maybe even SSDs do too.

In both cases, on-board firmware in the drive already has those factory-bad sectors blacklisted, so the OS (even low-level utilities) will never see them.

The sudden appearance of new bad sectors IS a harbinger of eventual disk failure. Often the cause is a loss of accuracy in track location, meaning that sectors on that physical cylinder start dropping out during read or write operations.
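A practical way to watch for newly grown bad sectors is the drive's SMART attributes. Here's a minimal sketch that pulls the relevant counters out of `smartctl -A` text output (the attribute names are the standard ATA SMART ones; the sample values are invented, and the sketch assumes the raw value is a plain integer, which isn't true for every attribute on every drive):

```python
# Sketch: extract the SMART attributes that signal grown (new) bad sectors.
# In practice you'd feed this the output of `smartctl -A /dev/sda`
# (device name is an example; requires smartmontools).

WATCHED = {
    "Reallocated_Sector_Ct",   # sectors already remapped by the firmware
    "Current_Pending_Sector",  # sectors waiting to be remapped
    "Offline_Uncorrectable",   # sectors that failed offline read tests
}

def grown_defects(smartctl_output: str) -> dict:
    """Return the raw values of the bad-sector-related SMART attributes."""
    counts = {}
    for line in smartctl_output.splitlines():
        fields = line.split()
        # Attribute rows look like:
        # ID# NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
        if len(fields) >= 10 and fields[1] in WATCHED:
            counts[fields[1]] = int(fields[9])
    return counts

# Illustrative sample (values invented); real input comes from smartctl.
SAMPLE = """\
  5 Reallocated_Sector_Ct   0x0033   100   100   010    Pre-fail  Always       -       8
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       3
198 Offline_Uncorrectable   0x0010   100   100   000    Old_age   Offline      -       0
"""

print(grown_defects(SAMPLE))
```

A count of zero out of the factory is normal (the factory defect list is hidden from the host, as noted above); what matters is these numbers climbing over time.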

And hard drive platters haven’t been metal since last millennium.

It might run, but I wouldn’t expect it to be able to cope with today’s hard drive capacities.

From that same Wikipedia article: “As of 2015, laptop hard drive platters are made from glass while aluminum platters are often found in desktop computers”

And as an amateur smasher of hard drives, I can attest to the accuracy of the Wikipedia statement: many small laptop drive platters smash into smithereens, while big desktop drives still have metal platters that dent under a heavy hammer blow but don’t shatter.

The reason it is taking so long is that the damaged sectors (and even semi-damaged ones that don’t show as damaged) take a long, long time to check and mark as bad. On the time factor alone, the drive looks really bad and is headed for recycling.

Young’un :smiley:

Try rebuilding the drum on an LGP-30. Note that the drum was the main memory on that system, not “mass storage”.

The 6061. Nice drive, if a bit physically large for its capacity (even when introduced). The Diablo 44 was the DG 4234. I think the in-house DG equivalent was the 6045.

That (still) isn’t my experience at all. But I use mainly enterprise class drives. The manufacturers are just trying to avoid getting a bunch of drives back that turn out to be “no problem found”. That is mainly a consumer education issue.

With SSDs, all of your data can vanish in an instant if there’s a catastrophic firmware bug. That was a lot more common when modern flash-based SSDs were a relatively new product, but even today some big-name manufacturers have bugs like that every now and then. And at least one manufacturer intentionally configures their firmware so that once the available replacement blocks are used up, the drive will continue to work until the next power cycle. But after that power cycle, the drive won’t go online and you can’t even read any data from it.

SpinRite was mostly pointless even when it was first released, other than telling the user “your drive has bad sectors”. Five years before SpinRite was first released, the SyQuest SQ306 drive was among the first to have an embedded “wedge” servo, instead of either dead reckoning or a dedicated servo surface. If your controller let you actually do a track format and you didn’t specifically know about and do special handling for the wedge area, you’d render the cartridge unusable by overwriting the wedge.

So SpinRite is just trying to read defective sectors over and over again, and then rewrite them in the same place. It doesn’t get to write the preamble / sector header / post-gap, as those are part of the factory format. If the original bad sector was caused by an off-track write, a write splice error, or similar, those parts of the factory format are gone for good.
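The read-it-over-and-over-then-rewrite-in-place approach described above can be sketched in a few lines. This is a toy illustration run against an ordinary file (on a real drive you’d open the raw block device, typically with O_DIRECT to bypass the page cache); the function name and retry count are invented:

```python
import os

SECTOR = 512  # classic sector size; modern drives often use 4096 internally

def retry_and_rewrite(path: str, lba: int, retries: int = 100):
    """Re-read one sector until a read succeeds, then write it back in place.

    Only the sector's data area is touched; the preamble / sector header /
    post-gap laid down by the factory format are not rewritten (and can't be).
    Returns the recovered bytes, or None if no read ever succeeded.
    """
    fd = os.open(path, os.O_RDWR)
    try:
        for _ in range(retries):
            try:
                data = os.pread(fd, SECTOR, lba * SECTOR)
            except OSError:
                continue  # read error on this pass; try again
            if len(data) == SECTOR:
                # Rewrite in the same place, hoping the drive's write pass
                # (or its reallocation logic) cleans up the marginal sector.
                os.pwrite(fd, data, lba * SECTOR)
                return data
        return None  # never got a clean read
    finally:
        os.close(fd)
```

Which is exactly why it can’t help when the factory-written servo or header information is what’s damaged: the data-area rewrite never touches it.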

There were some low-level tricks you could play on MFM/RLL ST506 controllers, because the controller was responsible for interpreting the bit stream from the drive. ESDI moved some of that onto the drive, and SASI/SCSI moved pretty much all of it onto the drive (or at least hidden on a controller behind a host adapter). There were still some commands like “Read Long” that would provide some potentially relevant data from unreadable sectors, but by the time modern SCSI drives became available, this was unlikely to yield much of anything.

One can always run ‘badblocks’ from a Linux OS.
Nothing like suddenly getting a ‘Bad Superblock’ message if an OS doesn’t boot. And working out how to replace it…

Many years ago I used to run something called HDD Regenerator*. It was a pretty amazing tool. It restored several disks to working condition, sometimes losing only a few sectors of data.

I wouldn’t really trust those drives again but it allowed me to recover data better than any other similar tool.

With SMART drives and all that, I haven’t touched it in quite some time.

If you have a pre-SMART drive you can try it. Not sure how it’d work on a SMART drive, even with SMART turned off.

  • The price seems a lot higher than I remember it.