Are recent large-capacity HDDs less reliable than smaller ones?

Comparing the fairly recent 8-12TB range of mechanical HDDs to the older 2-3TB drives of a few years ago, particularly for NAS setups.
Is there any real difference in reliability?

Would the much higher data density (physically smaller bits) make them more prone to data loss & corruption down the line, as well as to mechanical failures?

I did have some bad 2TB drives a few years ago, but I keep coming back to the idea that more capacity at 8TB-12TB must mean more platters, more moving parts to break, and higher data density = less lasting, reliable data down the line.
thanks

Not necessarily more platters - sometimes (in fact, mostly) greater capacity is achieved by greater density of storage (smaller magnetic domains, more tightly packed on the same size and count of platters), but that in itself can make the drives more susceptible to faults.
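
To put rough numbers on that, here's a quick back-of-the-envelope comparison; the platter counts below are just illustrative assumptions, not specs for any particular model:

```python
# Rough per-platter capacity comparison (illustrative numbers only, not real specs).
drives = {
    "2 TB (older)":  {"capacity_tb": 2,  "platters": 3},  # assumed platter count
    "12 TB (newer)": {"capacity_tb": 12, "platters": 8},  # assumed platter count
}

for name, d in drives.items():
    per_platter = d["capacity_tb"] / d["platters"]
    print(f"{name}: ~{per_platter:.2f} TB per platter across {d['platters']} platters")

# Even with more platters, each platter in the newer drive holds roughly twice
# as much data, i.e. the bits themselves are packed far more densely.
```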

I’d say drives have become more reliable in that they typically now include firmware to monitor themselves for early signs of failure - as long as you don’t ignore those warnings, and you accept that hardware replacement is just a fact of life, then disk storage is generally more reliable than it was in the past.
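
(Side note: that self-monitoring is what gets exposed as SMART data, and you can poll it from a script instead of waiting for the OS to complain. A minimal sketch using smartmontools; it assumes smartctl is installed, that /dev/sda is the drive you care about, and that you run it with enough privileges:)

```python
import subprocess

def smart_health(device="/dev/sda"):
    """Return the overall SMART health assessment reported by smartctl."""
    # 'smartctl -H' prints the drive's overall health self-assessment.
    result = subprocess.run(
        ["smartctl", "-H", device],
        capture_output=True, text=True
    )
    for line in result.stdout.splitlines():
        # ATA drives report "overall-health"; SAS/SCSI drives report "SMART Health Status".
        if "overall-health" in line or "SMART Health Status" in line:
            return line.strip()
    return "No health line found (is this a SMART-capable device?)"

if __name__ == "__main__":
    print(smart_health("/dev/sda"))
```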

I have no hard numbers, but I wouldn’t feel too comfortable with one of those drives that have to be filled with helium to work.

OTOH, I feel this is a plus.

HDD reliability is a constantly varying value. For a given manufacturer and a given line of disks there can be up and down cycles of quality. There’ll be a run of poorer than usual disks, they get some feedback, figure out the issue, then the later runs are better quality.

For an example of this, read about the IBM line of “Deskstar” disks, nicknamed the “Deathstar”. IBM had been a generally reliable brand but then … oops. They sold off their disk division to Hitachi (which is now part of WD).

Some people prefer certain brands over others. WD seems to have a better rep than Seagate for the kinds of disks I shop for. But maybe it’s the other way around for server class ones. WD’s HGST (old Hitachi) models have a poorer rep than WD branded disks.

E.g., I just got a new HDD for my DVR. Only certain models of WD disks are recommended for this type of use, and you have to be very careful about the source.

And then there are the choices manufacturers make about what to do with a given run of disks. If testing shows a run maybe isn’t so great, they might, for example, dump it to their consumer external-backup channel. Those tend to have the least reliable disks. The maker figures they will be used less often and therefore the errors won’t mount up so fast.

So it seems the areal density is more the issue?

As it gets much denser, I would suspect the longevity of the data will be a lot less.

Or at least there will be a lot of data corruption over a long period. Kind of like comparing CDs that are scratched all over & can still be backed up vs. a BD that has just one scratch & can’t.

Loyuod,

From what I remember from doing some NAS / RAID research, the thoughts aren’t so much that the equipment is bad, or any more unreliable than in the past, but that the likelihood of encountering a non-recoverable read error becomes larger the more data exists in a physical volume.
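
For anyone who wants to see where that reasoning comes from, here's the usual back-of-the-envelope URE math as a rough sketch. It assumes the commonly quoted consumer spec of one unrecoverable read error per 10^14 bits read applies uniformly; many NAS/enterprise drives are rated at 1 in 10^15, which changes the numbers a lot:

```python
# Probability of hitting at least one unrecoverable read error (URE)
# while reading an entire drive end to end, e.g. during a RAID rebuild.
# Assumes the manufacturer's quoted URE rate applies uniformly per bit read.

def p_at_least_one_ure(capacity_tb, ure_rate_bits=1e14):
    bits_read = capacity_tb * 1e12 * 8          # decimal TB -> bits
    p_per_bit_ok = 1 - 1 / ure_rate_bits        # chance a single bit reads cleanly
    return 1 - p_per_bit_ok ** bits_read        # chance at least one bit does not

for tb in (2, 4, 8, 12):
    print(f"{tb:>2} TB, URE rate 1 in 1e14: "
          f"{p_at_least_one_ure(tb):.0%} chance of at least one URE on a full read")
```

The point isn't the exact percentages, just that the chance of tripping over at least one bad read grows quickly as the amount of data you have to read in one go grows.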

This is one of the reasons I opted for a BSD/ZFS system.

This writeup is far out of date, but the logic behind it seems to be what I was reading over the past few years:

Also, as far as large drives being put through the wringer, I like to keep up with Backblaze’s quarterly HDD updates:

You might find some interesting information there, as they’re using some 6/8+TB drives very heavily.
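
If you want to dig past the headline charts, Backblaze also publishes the underlying daily drive data as CSVs, and the annualized failure rate is easy to recompute yourself. A rough sketch, assuming the column names Backblaze has historically used (model, failure; one row per drive per day) and a placeholder file name:

```python
import csv
from collections import defaultdict

# Each row in the data set is one drive on one day;
# 'failure' is 1 on the day a drive failed, otherwise 0.
drive_days = defaultdict(int)
failures = defaultdict(int)

with open("drive_stats.csv", newline="") as f:   # placeholder file name
    for row in csv.DictReader(f):
        model = row["model"]
        drive_days[model] += 1
        failures[model] += int(row["failure"])

# Annualized failure rate = failures / (drive-days / 365)
for model, days in sorted(drive_days.items()):
    afr = failures[model] / (days / 365) if days else 0.0
    print(f"{model}: {failures[model]} failures over {days} drive-days "
          f"(AFR ~{afr:.1%})")
```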