Most of my experience running software RAID has been RAID 1 (two-disk mirroring) on CentOS. I’m considering going to a software RAID 10 using four disks.
My biggest concern is this: if one of the drives fails and a new one has been swapped in, how much of a hassle is it to get the new drive formatted and working back in the RAID?
I’ve looked at RAID 1 disks independently, and from what I recall they look like normal disks. What do software RAID 10 disks look like? I’m asking because I was using this calc:
And RAID 10 doubles the available drive capacity, so I’m wondering whether the drives are no longer regular-looking Linux ext4 drives. Does the striping/parity do something different to the file system?
Within each pool, both disks will be mirror copies of each other (RAID 1).
When writing to disk, the software will stripe the data (RAID 0), treating each pool as a single disk.
You will be able to lose one disk from each pool before data loss occurs.
If you lose both disks from a single pool, you will lose data.
The reason you get more capacity in the RAID 10 setup is that the calculator you linked assumes that in RAID 1, every disk after the first is a mirror copy of the first disk (so with four 1 TB disks you get 1 TB of primary data plus three copies of it). The RAID 10 setup breaks the disks into two pools of two disks each (so you get 2 TB of primary data plus one mirror copy of that data).
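If it helps to see it concretely, here is a rough sketch of creating that layout with mdadm (device names are placeholders for your own partitions):

    # four members in RAID 10: two mirrored pairs striped together;
    # with 1 TB members this yields roughly 2 TB usable
    mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1

    # confirm the resulting size and layout
    mdadm --detail /dev/md0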
You might also consider going with a more modern file system: ZFS. Whilst not a perfect solution, it is vastly more flexible than simple RAID solutions with ext3, and has a remarkable level of resiliency in the face of failures. While RAID 10 has some appeal, it has a lot of downsides, some of which you have already seen.
Unless you are deploying this in an application where drives fail frequently and it is difficult to replace them, there are few reasons to go with 3 mirrored copies of a single drive. You would get better data protection and value for your money by using the spare drives as part of a different RAID level or as backup storage.
RAID makes no difference to the file system; the filesystem has no knowledge of the RAID layer underneath it. All RAID 10 is, is a stripe across mirrored pairs of disks (with some options on how the layout is done), which buys you both resilience and the possibility of improved read throughput. The RAID can even sit below the partitioning of the disks (that depends upon the exact system used). Note that RAID 10 has no parity or error correction of its own: it depends upon a disk being fail-fast or fail-stop to detect data errors, and other failure modes can still lead to data corruption. If the mirrors deliver inconsistent data, you have no way of knowing which copy is correct, so you are depending upon the internal error detection/correction within the disks themselves.
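To illustrate that point, a minimal sketch (device name and mount point are placeholders): the filesystem is created on the array device exactly as it would be on a single plain disk, and ext4 never sees the members underneath.

    # ext4 goes on the md device; it sees one ordinary block device
    mkfs.ext4 /dev/md0
    mount /dev/md0 /srv/data

    # the member disks carry md metadata and are never formatted or mounted directly
    lsblk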
The striping layout chosen can affect the performance of the file system, but this is not a simple choice; which layout is best depends upon your particular use of the file system.
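With Linux md specifically, the layout and chunk size are picked at creation time. A hedged sketch of the two common choices (values are illustrative only, and you would use one or the other):

    # "near" layout, the default: behaves like classic striped mirrors
    mdadm --create /dev/md0 --level=10 --layout=n2 --chunk=512 --raid-devices=4 /dev/sd[b-e]1

    # "far" layout: usually better sequential reads, at some cost on writes
    mdadm --create /dev/md0 --level=10 --layout=f2 --chunk=512 --raid-devices=4 /dev/sd[b-e]1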
A two-disk mirror is fine until one drive fails; that leaves the system running on just one spindle (one drive). That’s fine if it is in a machine room with 24/7 staff to get another drive out of inventory and replace it. But for me, this is for a server I use for development. Murphy’s law or whatever, chances are a hard drive will fail at the worst possible time, when I won’t be able to give replacing it my full attention.
Therefore, for the new system I’m looking into a three- or four-disk setup. I think a RAID 1 with three disks, with the fourth used for an rsync backup every 4 hours, would be good. I don’t imagine I really need four mirrored disks, but with three, a single failure won’t leave everything operating on just one drive.
For my development server (the old one) I have been running a software RAID 1 with two drives, with a third drive connected doing the rsync backup every 4 hours from cron. This has been good, because if I or one of the users removes or corrupts a file by mistake, I can retrieve it from the previous 4-hour backup.
I am considering the three-disk RAID 1, the fourth drive doing a backup, and one other drive archiving the last month’s worth of changes and additions, just in case something gets really messed up and we need to go back to a file from two weeks ago. I’m still looking into what open-source Linux software would be good for this.
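One approach I’m looking at for that archive drive (just a sketch, paths and schedule are made up for illustration) is rsync with --link-dest, which keeps dated snapshots that hard-link unchanged files, so a month of 4-hourly snapshots doesn’t cost a month of full copies:

    # crontab entry: snapshot every 4 hours (paths are hypothetical)
    0 */4 * * * /usr/local/bin/snapshot.sh

    # snapshot.sh
    #!/bin/sh
    stamp=$(date +%Y-%m-%d_%H%M)
    rsync -a --delete --link-dest=/archive/latest /data/ /archive/$stamp/
    ln -sfn /archive/$stamp /archive/latest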
Addressing this question specifically, I’ve found RAID under Linux to be fairly easy to manage, and in fact replaced a drive today with maybe five commands. Syncing is relatively transparent; after partitioning and inserting a disk into an array, it runs in a degraded state during resync until complete. The biggest hassle is drive access (depending on the machine of course).
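For what it’s worth, today’s replacement looked roughly like this (a sketch with placeholder device names, assuming /dev/sdb1 was the failed member of /dev/md0):

    # mark the member failed (if md hasn't already) and remove it from the array
    mdadm --manage /dev/md0 --fail /dev/sdb1
    mdadm --manage /dev/md0 --remove /dev/sdb1

    # after physically swapping the drive: copy the partition table from a healthy member, then re-add
    sfdisk -d /dev/sda | sfdisk /dev/sdb
    mdadm --manage /dev/md0 --add /dev/sdb1

    # watch the resync; the array stays usable (degraded) until it finishes
    cat /proc/mdstat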
I typically run RAID5 in situations such as yours; assuming performance isn’t an issue, you would get the same capacity running 2+parity+spare and avoid the hassle of a swap under failure.
ZFS is pretty slick as Francis Vaughan mentioned. I’ve used it under Solaris but it doesn’t have enough mainstream Linux support for my comfort.
Couple of things to be aware of. Use a partition slightly smaller than the disk to account for small changes in geometry across vendors. Be careful if you are running your boot devices under RAID: it works fine under Linux, but you need to ensure your boot records (MBR, for example) are installed correctly after swaps and line up with your BIOS settings.
The standard mdadm package will handle this no problem.
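A hedged example of that boot caveat, assuming a BIOS/MBR setup with GRUB (adjust for your distribution and boot loader):

    # after a swapped disk has been re-added to the boot mirror,
    # reinstall the boot loader on it so either disk can boot the box
    grub-install /dev/sdb        # grub2-install on newer CentOS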
Just to add, if this is a development server and you are worried about deletion/overwriting/file screwup/… you should be using a code management and versioning system (git/cvs/mercurial) for that, so you are not relying on backup/restore for that sort of functionality.
Thanks for the post. I looked at ZFS, but passed for the same reason: at least for now it doesn’t have the same support that the good old mdadm stuff does.
I’m glad you brought up performance. I’m planning on using WD Red 7200 RPM drives for the RAID. Seems like these would be fast enough not to be an issue for RAID5? If I’m going to order four drives anyway, then if I don’t like RAID5 for some reason I could always rebuild it as RAID10.
I have Googled RAID5 vs RAID10, and the few opinions I have seen favor RAID10. But this is for a development server; it isn’t going to be hammered with web traffic 24/7. So I don’t know whether the performance difference of RAID10 is even going to be noticeable.
But with RAID5, a 4 TB drive ends up being 8 TB? So I would only need 2 TB drives if I was planning on 4 TB total being available?
A very good thought. I was thinking more about things like documentation of projects written in Open Office which we keep on the development server. So those things aren’t actual programming code.
Git does not care. It handles binary files, but you don’t get change analysis.
A better documentation management tool is something like Silva. This is a web based document system that does versioning, rich formatting, high level substitution and lots of other things as well. Like a wiki, but with versioning, workflow, and can export a document tree to a fixed format.
Those sound fine speed-wise for a development server. FWIW, companies like Backblaze publish their reliability data and have some pretty good sample sizes behind their results (2015 review). I’ve personally gone to HGST drives after a bad lot of Seagates.
RAID5 gives you N-1 capacity, so you would need three (2TB) drives for a 4TB array. Four 2TB drives can be configured as a 6TB array, or as a 4TB array with one spare (mdadm will automatically incorporate a spare on drive failure).
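A sketch of that second option with mdadm (device names are placeholders): three active 2TB members plus one hot spare, for (3-1) x 2TB = 4TB usable, with md rebuilding onto the spare automatically if a member fails.

    mdadm --create /dev/md0 --level=5 --raid-devices=3 --spare-devices=1 \
        /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1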
Thanks for the link for the review of drives. Wow, Seagate really took a beating there.
I remember reading a report, I think by Google (or at least using Google’s data), on which manufacturers make the best drives and their failure rates. The conclusion at the time, if I remember it correctly, was that all the major makes of drives were about the same. But that has to be just a snapshot in time, because who knows how many times each company’s engineering designs have changed since.
At the time I remember feeling that if someone offered a 5-year warranty on a drive, then you have a reasonable expectation for it to last longer without issues than 1-year or 3-year warranty drives. Back then I think only HGST (Hitachi) offered 5-year warranty drives, but I see WD does now with the WD Red Pro drives.
I was mistaken earlier: the WD Red is a 5400 RPM drive which scales up to 7200 RPM when needed. The WD Red Pro runs at a constant 7200 RPM, which we really don’t need.
Why do you want to mess around with software RAID when enclosures with built-in RAID are so cheap nowadays? With the hardware enclosures it’s literally one button press to start rebuilding after swapping a failed disk. E.g., take a look at this one for $149, but Newegg has a bunch more to choose from.
I’d recommend RAID5 as well. That loses you less space but still protects against a single disk failing, and if you have a hardware raid enclosure then as I said, replacing a lost disk is incredibly simple. You don’t even need to take the raid volume offline.
I know that hardware RAID used to be all about performance, and that may still be the case, but the gap has closed. And with this being a development server, just a few of us will be using the system. It won’t be used to serve hosting traffic for a high-volume website.
I’ve considered a hardware RAID, but I don’t like the idea of the hardware RAID controller (or its power supply) failing and not being able to access any of the drives. Also, from what I’ve read, I would be locked into a specific vendor’s hardware to try to get a replacement. With software RAID I can move the drives to another system in the event of a failure and continue. The new server will have four drive slots internally, so I don’t see a need to run the drives externally in an enclosure unless I’m missing something.
After looking over the cheat sheet for mdadm, it looks simpler than I recall.
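For anyone else reading along, the handful of status commands on that kind of cheat sheet really is about it (all standard mdadm):

    cat /proc/mdstat              # quick health / resync progress
    mdadm --detail /dev/md0       # full array status
    mdadm --examine /dev/sdb1     # per-member RAID metadata
    mdadm --detail --scan         # array definitions, e.g. for /etc/mdadm.conf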