These hard drive descriptions mean...what?

I am looking at some internal OEM hard drives in the 4TB-10TB range. Refurbished Western Digital drives are advertised at very good prices on eBay by a seller in China, with 20-40 day shipping times. I will be using them in one or more NAS devices.

These drives have grade descriptions like:

- enterprise
- data center
- performance
- surveillance

…which seem to mean something, but I don't know what. Enterprise, I think, means a more rugged construction, greater MTBF, more appropriate for server use where drives may be accessed continuously or at least frequently. But what do the other terms mean? Are they all sales hype, or do they have some real meaning? Why would I choose a "data center" grade over a "surveillance" grade?

Simple answer: avoid performance and surveillance.

Enterprise and data center drives have the error recovery and vibration-resistance features you need for a NAS; performance drives do not. Depending on the kind of NAS, the vibration resistance can be very important, but the NAS-friendly error recovery (WD calls it TLER; the underlying ATA mechanism is SCT Error Recovery Control) matters just as much.
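If you want to check whether a given drive exposes that knob, here is a minimal sketch, assuming Linux with smartmontools installed and a hypothetical device path /dev/sda (adjust for your system):

```python
# Read and (optionally) cap a drive's SCT Error Recovery Control (ERC)
# timeouts via smartctl. NAS/enterprise drives let you limit how long
# the drive retries a bad sector before reporting an error; many
# desktop drives will simply reject the set command.
import subprocess

def read_erc(device: str) -> str:
    """Return the drive's current SCT ERC read/write timeouts."""
    result = subprocess.run(
        ["smartctl", "-l", "scterc", device],
        capture_output=True, text=True, check=False,
    )
    return result.stdout

def set_erc(device: str, deciseconds: int = 70) -> None:
    """Cap read/write recovery at `deciseconds` tenths of a second
    (70 = 7.0 s, a common choice for drives behind RAID)."""
    subprocess.run(
        ["smartctl", "-l", f"scterc,{deciseconds},{deciseconds}", device],
        check=True,
    )

if __name__ == "__main__":
    print(read_erc("/dev/sda"))  # hypothetical device path
```

Note that this setting usually resets on power cycle, so RAID users typically reapply it at boot.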

The vibration part may not seem important, but watch this video.

https://youtu.be/tDacjrSCeq4

Surveillance drives use shingled magnetic recording (SMR) and are optimized for continuous writes; they will perform poorly in a NAS, as will some Enterprise Capacity drives. Avoid SMR drives if possible.

Why is vibration resistance important for a NAS, which is expected to be quite stationary (not like a portable laptop)? (I haven’t watched the video yet.)

What’s SMR?

Why will continuous writes perform poorly for a NAS? Poorly in what way?

If enterprise drives are designed to be more rugged, wouldn’t those be best for NAS use?

It is all market positioning, pricing, and features. With SMR drives, the track the write head lays down is wider than the track the read head needs, so the tracks overlap like roof shingles, and the drive has to re-write several tracks to change one track's worth of data.
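A toy model of that write amplification, just to make it concrete (the band size is made up, and real SMR firmware hides some of this behind media caches):

```python
# Toy model of SMR write amplification: shingled tracks are grouped
# into bands, and rewriting one track disturbs every track shingled
# on top of it later in the band, so those must be rewritten too.
TRACKS_PER_BAND = 64  # assumed band size, purely illustrative

def tracks_rewritten_cmr(track_in_band: int) -> int:
    """Conventional (CMR) drive: rewrite just the one track, in place."""
    return 1

def tracks_rewritten_smr(track_in_band: int) -> int:
    """Shingled (SMR) drive: rewrite the target track plus every
    later track in the band that overlaps it."""
    return TRACKS_PER_BAND - track_in_band

for track in (0, 16, 32, 63):
    print(f"modify track {track:2d}: CMR rewrites "
          f"{tracks_rewritten_cmr(track):2d} track(s), "
          f"SMR rewrites {tracks_rewritten_smr(track):2d} track(s)")
```

Modifying the first track of a band costs the whole band, which is why sustained random writes crawl on SMR.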

Personally, I would just take the hit and order the appropriate drive for your needs. You don't know whether these cheap drives have lots of hours on them or have been "cooked" in a data center that got too hot.

While you may have to deal with the pin 3 reset problem (the SATA 3.3V power-disable pin), you can shuck WD Easystore drives to save a couple of bucks. But you need to know the exact models to make sure they will work for NAS.

If the data is important at all it is easy to be penny wise and pound foolish.

Hard drives are pretty much commodities these days, and if a drive is massively cheaper there is almost always a reason.

After watching that interesting video, I can say that my data center is much more modest than that. My data center is several units of 5-8 drives each, not bolted together like in that example. I doubt if vibration is a major problem. Heck, I have 8 drives scattered on my one desk for 2 desktop PCs.

The effect still happens, but if this is for work you really should have NAS-rated drives.

With that many disks at this size, the rebuild time will run to days, and performance drives will quit responding to requests while they try to recover data internally, for long enough that your RAID array may mark them as failed.
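Some rough arithmetic on the rebuild time, assuming the rebuild runs at the drive's sequential speed (real rebuilds share the disks with live traffic and go slower):

```python
# Back-of-the-envelope rebuild time for one drive's worth of data.
def rebuild_hours(capacity_tb: float, mb_per_sec: float) -> float:
    megabytes = capacity_tb * 1e6      # 1 TB = 1e6 MB
    return megabytes / mb_per_sec / 3600

for speed in (180, 100, 40):  # MB/s: idle best case -> busy array
    hours = rebuild_hours(10, speed)
    print(f"10 TB at {speed:3d} MB/s: {hours:5.1f} h ({hours / 24:.1f} days)")
```

Even the idle best case is most of a day, and a busy array can easily stretch that to several days.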

As for the vibration: when you have drives in a RAID array, they all want to write data at about the same time. You don't need to yell at them (as in the video); they cause the vibrations themselves, and your performance will be poor. Desktop drives aren't used that way, so while profit margins differ across market segments, manufacturers save the few cents in that more price-sensitive market rather than worry about vibration.

I used to think that, but with NAS boxes like Drobo, it may not be all that important anymore. I have 4 Drobos, two for over 6 years, with only one drive failure, and that was with a non-discount drive. The Drobos are designed to handle drive failures, even multiple ones at the same time.

So there may be a trade-off. Cheap drives, replace them often. Expensive drives, not so much.

Besides, by the time a drive fails, it’s obsolete anyway, and replacing it with a larger size and newer unit isn’t a major cost hit. Although you are right about the rebuilding time for larger drives; that can take days.

The drive doesn't need to fail outright here. Consumer drives don't have the option of just reporting an error; the drive has to go into its own recovery mode. RAID systems typically just want the error reported, and they view it as a bad thing when a drive goes away for a long time.

With 4 10TB drives in RAID 5, assuming a 3-year life span and ~5 days to order and install a replacement drive, your risk of losing data per year is around 1 in 23.7.
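For what it's worth, here's a sketch of one common way to run that kind of estimate. The inputs are assumptions, the result is very sensitive to them, and this simple two-failure model comes out nearer 1 in 55 for those inputs; unrecoverable read errors during the rebuild, which it doesn't model, push the real risk higher:

```python
# Rough RAID 5 annual data-loss estimate: loss happens when a second
# drive fails while the array is degraded (RAID 5 survives exactly one
# failure). All inputs below are assumptions, not measured rates.
import math

DRIVES = 4
DRIVE_LIFE_YEARS = 3.0   # assumed: each drive fails about once per 3 years
WINDOW_DAYS = 5.0        # assumed: time to get a spare and finish rebuilding

per_drive_rate = 1.0 / DRIVE_LIFE_YEARS            # failures per drive-year
first_failures_per_year = DRIVES * per_drive_rate  # how often you go degraded
# Chance that one of the remaining drives also dies inside the window:
p_second = 1 - math.exp(-(DRIVES - 1) * per_drive_rate * WINDOW_DAYS / 365)
loss_per_year = first_failures_per_year * p_second
print(f"~1 in {1 / loss_per_year:.0f} chance of data loss per year")
```

Shrinking the replacement window (say, by keeping a spare drive on the shelf) improves the odds dramatically, which is the real takeaway.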

So really it just matters how important the data is to you.

I wouldn’t be so foolish as to trust my data to just one storage unit. It’s pretty scary when you have a 40TB NAS unit and think what could happen – all eggs in one basket.

So I have additional, off-site storage, mostly a bunch of bare drives, not to mention multiple Drobos storing mostly the same data. If one went totally south without any warning, although recreating everything would be a major hassle, it could be done. IOW, I’m that paranoid.