Would You Trust a 2 TB Hard Drive?

I’m not saying drive failures don’t happen. But failed drives are a self-selecting thing. (You only notice the drives that fail, don’t you?)

Another interesting exercise is to count up all the mass storage devices* you have: include DVRs, MP3 players, game consoles, laptops, computers, digital cameras, phones, and thumb drives.

  • = It used to be just hard-disk-based things, but now that flash is cheap, look at how many things have 4 GB or more of any kind of storage.

While you are right that RAID-5 uses distributed parity, Pete correctly called out the RAID-1 vs. RAID-5 disk usage difference. RAID-1 is simple mirroring, generally requiring an even number of disks with half of them used for redundancy. If Sage Rat is using three disks and is only losing 1/3 of his space, his configuration is RAID-5 (or 3, or 4, or some other exotic, rarely used configuration).
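The capacity math behind that inference can be sketched in a few lines (a rough illustration; the function name and the simple two-level model are my own, not tied to any particular controller):

```python
def usable_fraction(level: str, disks: int) -> float:
    """Usable fraction of raw capacity for two common RAID levels."""
    if level == "raid1":
        # Mirroring: half the disks hold copies of the other half.
        return 0.5
    if level == "raid5":
        # Distributed parity: one disk's worth of space goes to parity.
        return (disks - 1) / disks
    raise ValueError(f"unknown level: {level}")

# Three disks losing 1/3 of their space matches RAID-5, not RAID-1:
print(usable_fraction("raid5", 3))  # keeps 2/3 of raw capacity
print(usable_fraction("raid1", 2))  # keeps only half
```

With three disks, RAID-5 keeps 2/3 of the raw space, which is exactly the "losing 1/3" described above; mirroring can never do better than half.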

I come from the school of thought that four 500 GB drives reduce my risk of data loss over one 2 TB drive. I have a few 1 TB externals, though, and they seem to be working fine.

These days, disks are cheap, so a RAID configuration is recommended if possible.

My company is in the business of video storage, and in my lab I have two large storage configs. One is 384 TB and the other is 256 TB. It’s pretty awesome. :wink:

Disks may be cheap, but controller infrastructure for large NAS units isn’t always so cheap. For many applications it could be far more expensive to have, say, 20 500 GB drives vs. five 2 TB drives with an effective backup plan.

Could be a newer flavor of RAID called SAFE50 - 50% of each physical disk is used as a standard RAID 1 mirror, and the other half of each disk is spanned JBOD*-style into a separate non-mirrored volume.

If you have two 1 TB drives, you’d wind up with a 500 GB RAID 1 volume and a 1 TB spanned volume. Sounds nifty until you realize that the odds of losing data on that spanned volume are double those of a non-spanned disk, since it holds data on two separate disks, and there’s no telling where your data is physically living until it’s no longer living.

  • Just a Bunch Of Disks. This is a no-longer-popular method of taking a random collection of drives and having them act as one single drive with a capacity equal to the sum of all the drives’ space. It used to be popular in applications that use huge amounts of data, such as video editing, since JBOD made it easy to create a single volume for the application to talk to. As you might guess, it’s essentially obsolete now that we can buy unthinkably large drives for unthinkably low prices compared to even five years ago, never mind 20+ years ago when the scheme was created.
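The SAFE50 capacities and the "double the odds" claim about the spanned half both work out with a little arithmetic (a sketch with made-up failure numbers; the function names are mine):

```python
def safe50_volumes(disk_tb: float, disks: int = 2):
    """SAFE50 as described above: half of each disk is a RAID 1 mirror,
    the other halves are spanned JBOD-style into one volume."""
    mirrored = disk_tb / 2            # mirror capacity = one half-disk
    spanned = (disk_tb / 2) * disks   # spanned capacity = all halves summed
    return mirrored, spanned

def spanned_loss_prob(p: float, disks: int = 2) -> float:
    """Chance the spanned volume loses data, if each disk independently
    fails with probability p: one dead disk takes the whole span down."""
    return 1 - (1 - p) ** disks       # ~ disks * p for small p

mirrored, spanned = safe50_volumes(1.0)  # two 1 TB drives
print(mirrored, spanned)                 # 0.5 TB mirror, 1.0 TB span
print(spanned_loss_prob(0.03))           # roughly double a single disk's 3%
```

For small per-disk failure probabilities, `1 - (1 - p)^2 ≈ 2p`, which is the "double the odds" point made above.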

Sure. But I wasn’t thinking of enterprise level solutions when I posted that. I was thinking of what my personal preferences are for my own disk setup(s) at home.

In the enterprise, it’s almost a certainty that some sort of RAID protection scheme is in place to protect against drive failures, reducing the exposure when using multi-TB drives. I have no particular concern in that kind of scenario. The data is protected.

At home, however, if I’m not running a RAID configuration, I’m more likely to use more spindles of lower capacity. That said, I’m not particularly adamant about that.

I really wish Drobo’s products weren’t so expensive. RAID redundancy, any size disks you want, pull out the smallest and replace it with the current new hotness to increase your space. I just don’t wanna add $400 to my home infrastructure costs.

(pssst. Home - Drobo )

Never trust any HD over 30.

rpm? Mbps? cm? yes?

I’m looking for a 1 TB drive to use for expansion storage on a DirecTV DVR. I’m not really getting a good feeling about the external drives I can find at the local big box stores, so I’m looking at an external enclosure and a bare hard drive. Any opinions on whether this ‘enterprise grade’ drive would be a decent bet? Newegg has it for about $170, which is a big hit, but keeping peace in the house by not missing shows would be worth it. Does ‘enterprise’ mean much?

Well. It mentions 1.2 MILLION hours Mean Time Between Failure.

That’s 137 years. (!)

Anybody here had a drive last 137 years of 24x7x365 usage yet?!? :smiley:

Drives are more reliable than they’ve ever been. The only problem with a bigger drive is that if you cram it full of data, you’re going to lose more at once. But proper backup planning negates that risk anyway. I have all of my hard-to-replace data copied across at least 2 hard drives.

I can’t remember the last time I’ve had a drive fail, actually. My Raptor has been going strong for 5 years now, and it actually came with a 5 year warranty, so you can see how much they trust it. I’ve had a 250 GB Western Digital going for 3 or 4 years and a 640 GB that’s maybe 2 years old. I had a pair of 80s before this, but they got thrown aside to make more room in my hard drive bay rather than dying. Even the 30 GB IBM “Deathstar” drives I had before that didn’t die, IIRC. I think I have to go back to some 4 GB drive from a decade-plus ago to find my last failure.

Not sure if you are being facetious here or not - but I’ll bite anyway.

MTBF is one of the most commonly misunderstood terms in hardware reliability. Most people do what you did above - the drive is rated for X hours MTBF, and there are 8,760 hours per year - so 1.2 million hours / 8,760 ≈ 137 years.

MTBF is only a part of the equation. The other major factors are service life, and the fact that MTBF is supposed to be relative to a large sample set. If a drive has an MTBF of 1.2 million hours but a service life of 5 years, it would appear that something doesn’t quite add up. But what it really means is that if you have a large number of these drives running 24x7, you’ll on average accumulate a total of 1.2 million running hours before one fails. But each drive individually has an expected service life of 5 years.
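The two readings of the same MTBF number can be put side by side (the fleet size of 1,000 drives is an assumed example; the 1.2 million hour figure is the one quoted above):

```python
mtbf_hours = 1_200_000
hours_per_year = 8760  # 24 x 365

# The naive (misleading) reading: divide MTBF by hours per year
# and conclude one drive "lasts" this long.
naive_years = mtbf_hours / hours_per_year
print(round(naive_years))  # ~137 "years" -- not a real drive lifetime

# The fleet reading: with many drives running 24x7, this is the
# expected number of failures per year across the whole population.
fleet_size = 1000
failures_per_year = fleet_size * hours_per_year / mtbf_hours
print(round(failures_per_year, 1))  # ~7.3 failures/year in 1,000 drives
```

So the same 1.2 million hour rating predicts about seven failures a year in a 1,000-drive fleet, while each individual drive is still only expected to serve for its rated 5 years or so.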

This page does a pretty good job of explaining it.

Also - to illustrate a point, take a look at the third paragraph here. Basically points out how one could come to a MTBF of 800 years for a human - so obviously there is something more than a simple # of hours calculation.

Oh, and to keep this on topic - yeah, I’d have no problem trusting a 2 TB drive. I work in the storage/backup industry, and there’s very little difference in the mechanics (the part that fails) between one storage capacity drive and another. It’s all about arrangements of magnetic bits on a platter.

No, it’s not that I didn’t understand what it meant. I fully understand what it means. I also understand infant mortality and what the ‘mean’ in MTBF means. I also understand that these drives can die for a bunch of reasons not indicated by MTBF.

I also know that the common failure modes in the past (stiction caused by a redistribution of lubricant in the bearings) have been corrected in ways that mitigate those failures (fluid dynamic bearings), and that the capacities of a drive today will seem insignificant long before the vast quantities of drives have aged out.

I have a DVR with an 80 GB drive. It’s 10 years old and continues to keep chugging along on the TV in our bedroom. When it fails, I doubt I’ll be able to find a PATA 80 GB drive to replace it with.

So, all things being equal, a number of drives will die in the first week, a number will go on damn near forever, but how many will die right at 5 years? How many will be replaced before the equipment is hopelessly out of date in 10 years?

It’s at this point I’ll shoot holes in my own argument by pointing out that a percentage of Apple Time Capsules (with ‘enterprise grade hard disks’) are dying at around 17 months. And I’ll reiterate that redundancy of data is critical to any backup regime.

http://feeds.gawker.com/~r/gizmodo/full/~3/eJFH0aybJMo/are-apple-time-capsules-short-lived

You get a bigger drive one of two ways:

Increase areal density or increase platters.

More platters mean more heads, and more platters and heads mean more parts to fail.

I don’t think anyone is putting 2 TB on one platter yet.
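The "more parts, more failures" point is just compound probability (a sketch; the 0.5% per-head annual failure figure is an invented illustration, not a real spec):

```python
def any_part_fails(p_per_part: float, parts: int) -> float:
    """Probability that at least one of `parts` independent parts fails,
    given each part fails with probability p_per_part."""
    return 1 - (1 - p_per_part) ** parts

# Hypothetical 0.5% annual failure chance per head:
p = 0.005
for heads in (2, 4, 8):  # roughly 1-, 2-, and 4-platter drives
    print(heads, any_part_fails(p, heads))
```

The risk doesn’t quite scale linearly, but for small per-part probabilities an 8-head drive is close to four times as likely to have some head fail as a 2-head drive, which is the trade-off against packing more bits per platter.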

In addition to moving parts failing, drives can fail due to firmware issues, hardware issues, environmental issues. They’ve gotten better over the 40 years they’ve been made, but they aren’t perfect. And EVERY manufacturer takes their turn with quality problems.

Always back up or RAID or both your drives.

Another IT truism: RAID is NOT backup.

Copy your bits somewhere else. All it takes is a controller fault, two dead drives, allocation table corruption, or user error, and you’re toast.

Preach it!

RAID is the biggest con. It has almost no useful benefit, and almost guaranteed failure and frustration. Do not succumb to the temptation of nerdy machines; RAID is a scam.

RAID is a quick way to keep a system operational in case of drive failure. Lots of useful benefit on a server or in a SAN. Some useful benefit if you really can’t stand waiting to redownload your WoW install when your drive fails and would rather have the drive rebuild overnight.

It is, however, not backup for the reasons stated above.

My 500 GB drives were quite noticeably faster when they were in RAID 0.

Ummm. No. RAID is exactly what it says, and it certainly provides the functionality advertised.

It’s not a backup, but no one ever thought it was (I hope). RAID, properly selected and set up, provides redundancy (well, except for RAID 0, but that’s a given), which increases data availability but does nothing to prevent data loss. It also provides higher throughput (speed) in some configurations.

The cost/benefit of a RAID 1 setup in a home machine is pretty simple: is having access to my machine while I wait for a replacement drive worth the cost (remember, RAID is not a backup) of a second drive? Are the odds of losing a drive (without losing the entire system) enough to warrant the cost?
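That cost/benefit question can be made concrete as a back-of-envelope expected-value calculation (every number here is an assumed example - plug in your own failure odds, downtime, and prices):

```python
# Hypothetical home-machine numbers:
annual_fail_prob = 0.03      # chance a given drive dies this year
downtime_days = 3            # days waiting for a replacement without a mirror
downtime_cost_per_day = 20   # what a day without the machine is worth to you, $
mirror_drive_cost = 60       # yearly amortized cost of the second drive, $

# Expected downtime cost avoided per year by having the RAID 1 mirror:
expected_loss_without = annual_fail_prob * downtime_days * downtime_cost_per_day

# With these numbers the mirror costs more than the downtime it prevents;
# different assumptions (pricier downtime, flakier drives) flip the answer.
print(expected_loss_without, mirror_drive_cost)
```

The point of the exercise is that RAID 1 buys availability, not safety, so the only number on the benefit side is the downtime you avoid.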