Can I test a hard drive for bad sectors, etc., without formatting it?

My guess is that the answer is “no.”

Here’s my situation. I have a QNAP 2-bay NAS, and it currently has a 2 TB and a 1 TB hard drive installed. I’ve decided that I want to increase the capacity, so I’m going to replace the 1 TB with a 4 TB.

(I don’t have the NAS set up in any sort of RAID configuration; it’s just two separate drives.)

Anyway, ideally I’d like to be able to check the big drive before I open up the NAS and install it. I figured I’d throw it into a basic plug-and-play enclosure, then run ScanDisk or the like under Windows 7.

Windows recognizes the drive, but offers no way to scan it. I assume that I’ll need to format it before scanning. The only formatting option that Win 7 gives me is NTFS, but the NAS uses ext4. Is there any point in formatting as NTFS and then scanning, or would I be better off doing all of this on my Ubuntu setup, or should I just chuck it in the NAS?

Any advice appreciated.

Would there even be sectors on the drive without formatting it? Doesn’t formatting create the sectors? I’m no expert, but I thought that’s what was happening… aren’t the sectors on an NTFS drive different from those on a FAT32 drive, for example?

Yes. Steve Gibson’s SpinRite does exactly that.

From their FAQ:

SpinRite can run on any PC compatible system with a 32 or 64-bit Intel or AMD processor and a color screen. The previous SpinRite v5.0 is available to v6.0 owners who need to run SpinRite on older 16-bit 8086/80286 systems and/or monochrome screens.

SpinRite is self-contained, including its own bootable FreeDOS operating system. It can be used on any operating system and any file system. This means it can run on drives formatted with Windows XP’s/Vista’s/Windows 7’s NTFS and all other older FAT formats (in addition to all Linux, Novell, and all other file systems). **It can be used to pre-qualify and certify unformatted hard drives before their first use.** Drives on non-PC platforms, such as Apple Macintosh or TiVo, may be temporarily relocated to a PC motherboard for data recovery, maintenance and repair by SpinRite.

SpinRite provides complete interaction with IDE-interface PATA (parallel ATA) and SATA (Serial ATA) drives, and it can also be used with any other type of drive — SCSI, USB, 1394/Firewire — that can be made visible to DOS through the addition of controller BIOS or add-on DOS drivers. To obtain the best performance, IDE drives can be temporarily removed from their external USB or Firewire cases and attached directly to the PC motherboard.
Note: See the SATA knowledgebase article for specific information about SpinRite v6.0’s operation with SATA drives and controllers.

Most disk drive manufacturers have utilities that you can download that will allow you to do more detailed testing of a drive.

Modern disk drives are a bit tricky with respect to bad sectors. In the rush to make drives smaller and smaller with larger and larger capacities, the drive manufacturers started cheating a bit. Because the smaller size and higher platter density made errors on the disk surface much more likely, the drive makers started just allowing for this. Internally, the drive keeps track of damaged sectors and swaps them out for “spare” sectors allocated elsewhere on the drive. As far as Windows and any other higher level operating system is concerned, they access head x, track y, sector z, and aren’t even aware that the drive may actually swap that out for another sector.

There are utilities for most drives that basically do a low level format and re-create the drive’s internal bad sector list and swap-out tables.
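To picture the swap-out mechanism described above: the drive’s firmware keeps a small remap table, and any logical sector it marks bad is silently pointed at a spare allocated elsewhere on the platters. A toy Python sketch of the idea (all names here are made up for illustration; real drives do this entirely inside the firmware):

```python
# Toy model of a drive's internal bad-sector remapping.
# The OS always addresses logical sectors; the drive silently
# redirects any sector it has marked bad to a spare.

class Drive:
    def __init__(self, sectors, spares):
        self.remap = {}                                    # logical sector -> spare
        self.spares = list(range(sectors, sectors + spares))

    def mark_bad(self, sector):
        """Firmware detected a failing sector: swap in a spare."""
        if sector not in self.remap and self.spares:
            self.remap[sector] = self.spares.pop(0)

    def physical(self, sector):
        """Where a read/write of this logical sector really lands."""
        return self.remap.get(sector, sector)

drive = Drive(sectors=1000, spares=8)
drive.mark_bad(42)
print(drive.physical(42))   # remapped to the first spare: 1000
print(drive.physical(43))   # healthy sector, unchanged: 43
```

The OS keeps asking for “sector 42” and never learns that the data now lives somewhere else, which is exactly why a filesystem-level scan can’t see how many spares have already been used up.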

So all of this makes me wonder, why are you trying to scan for bad sectors? Do you think the drive might have a problem?

Thanks. I might check that out.

I don’t have any specific reason to think so. I’ve been lucky with hard drives, from a variety of manufacturers.

It’s mainly that, if you read online reviews on sites like Newegg and Amazon, a noticeable percentage of hard drives (from most manufacturers) fail early in their life. I guess I just thought it might be good to check a large drive like this in order to head off the hassle of having it fail later, when it has a whole bunch of stuff on it.

If people don’t think it’s worth testing, I’m happy just to throw it straight in the NAS.

Actually, maybe not. This isn’t important enough for me to spend $89 on a piece of software.

The NAS is mainly used for movies and music and other non-critical stuff. All important documents on the NAS are backed up in at least two other places, including online storage. If a drive dies, the only real cost (apart from the drive) is the time required to re-rip the music and videos.

Sounds like a job for the Ultimate Boot CD to me. Pick the HDD section, then… I guess Diagnosis?, then an appropriate utility for your disk, and let it run. (I’ve heard mixed reports about the manufacturer-specific utilities, so maybe run a generic one as well.)
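For what it’s worth, on the Ubuntu box no filesystem is needed at all: a tool like `badblocks -sv /dev/sdX` reads the raw device end to end. The core of that kind of read-only surface scan is simple enough to sketch in Python (the `/dev/sdX` path is an assumption — substitute your actual device, run it against the whole disk rather than a partition, and expect to need root):

```python
import os

def surface_scan(path, chunk=1024 * 1024):
    """Read a device (or any file) end to end; return offsets that
    could not be read. Read-only: never writes to the target."""
    bad = []
    fd = os.open(path, os.O_RDONLY)
    try:
        offset = 0
        while True:
            try:
                data = os.pread(fd, chunk, offset)
            except OSError:
                bad.append(offset)      # unreadable region: log it, skip ahead
                offset += chunk
                continue
            if not data:                # end of device/file
                break
            offset += len(data)
    finally:
        os.close(fd)
    return bad

# e.g. surface_scan("/dev/sdX")  -- hypothetical device path, needs root
```

Note this only tells you whether every sector is currently readable; a destructive write-then-verify pass (what `badblocks -w` does) is a stronger test, but it erases the drive.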

I’ve known this for a long time, but I’ve also wondered for a long time: doesn’t this make a mess of disk scheduling and optimization algorithms?

It does wreck your plan to use each sector on the hard drive as a page of a one-billion-page, 4-kilobytes-per-page encyclopedia. But that’s not a feasible way of using hard drive sectors anyway.
Sorry, but the remapping of sectors is an insignificant performance hit. You’ve already got the file system doing its own admin (storing multiple copies of the filesystem’s own metadata all over the partition) and wrestling with the conundrum of how to make a file grow while keeping the data in an efficient pattern… the file gets fragmented…

Meanwhile, only the worst sectors are remapped. Along the way, read errors are repaired by writing the sector again; if the sector can’t be successfully rewritten, it gets remapped. The sector error detection/correction/remap system is not a problem, it’s a very important feature.
Windows has a partition check… you can right-click the drive letter in “My Computer,” go to Tools, and run a check. It will let you know if there are problems.
With regard to “format,” it’s been a long time since hard drives allowed a true low-level format; back in the ’80s it was turned into a test-only operation. The format instruction merely conducts a test, returning results as if it were formatting, to keep programs (like SpinRite) happy.

I’m aware of that, but in order to allocate a drive letter, the drive itself has to be formatted first. I could format the drive as NTFS and then run the check. My question was whether or not there’s a way around this.

The infant mortality of HDDs is almost never due to “bad sectors”; it’s a failure due to being dropped (shipping damage), a failure in the electronics, or contamination of the platters. Just scanning a drive once won’t show this up any more than just using it would.

Those old fashioned optimization algorithms are already pretty well mucked up by the large data caches in modern drives. If you write to track 1, 20, and 10 (in that order), in the old days you were guaranteed a large wait as the head did a track seek from 1 to 20, and another wait (though not as large) as it moved back to track 10. These days, you write all 3 requests into the cache, and the drive acknowledges the writes long before the head even starts to move. Then the drive flushes out the cache as time permits. It may write the tracks in the order 1, 10, and 20 so that it minimizes the movements of the head. Then if you go back and read track 1 later, the drive won’t actually move the head and read the platter at all. It just gives you the data that it already has cached for that track. If the drive is idle, it may read a bunch of data that it thinks you might need. Caching algorithms have a huge impact on overall system performance, so there’s been a lot of work in recent years on maximizing those algorithms.

Things like swapping out one sector for another on the fly (and hiding it from the OS) get lost in the wash, because reading and writing isn’t directly linked to the platters these days. Likewise, doing things like trying to optimize your disk writes to minimize head movements doesn’t buy you anything these days because you are only writing to cache memory. More head movements do slow things down a bit, but the actual impact on the overall performance of the drive isn’t as simple as it was in the old days.
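The head-movement optimization being described is essentially the classic elevator algorithm: the cache collects pending requests and services them in one sweep across the platter rather than in arrival order. A minimal sketch, assuming the only cost that matters is track-to-track distance:

```python
def elevator_order(pending, head=0):
    """Service pending track requests in a single sweep from the
    current head position, instead of in arrival order."""
    ahead = sorted(t for t in pending if t >= head)
    behind = sorted((t for t in pending if t < head), reverse=True)
    return ahead + behind

# Requests arrive as tracks 1, 20, 10 with the head at track 0:
print(elevator_order([1, 20, 10]))   # [1, 10, 20] -- one sweep outward
```

With the head starting at track 15, the same requests come out as `[20, 10, 1]`: sweep out to 20 first, then come back. The point in the post above stands, though — with big write caches, all of this happens behind the acknowledgment, so the OS never sees the reordering.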

Fair enough. I guess I’ll just throw it into the NAS and hope for the best.