RAID & SATA questions...

Considering getting a RAID-5 card… never used RAID or SATA anything before.

  1. To my understanding, a RAID card, like a SCSI controller, requires a reformat of all hard drives before they can be attached to it. Correct? If so, it seems to me that if you get a RAID card for data redundancy, you’re placing all your faith in the card as opposed to the hard drive. Isn’t that risky?
  2. Can additional HDs be added to the RAID arrays without reformatting?
  3. Can RAID cards be transferred between motherboards without loss of data?
  4. What the heck is SATA II? Is it really “worth it” or just a waste of money for virtually all home users, just like SCSI U320? Are SATA II drives backwards compatible with regular SATA controllers?
  5. What the heck is PATA? Did someone suddenly decide within the last 2 years to start calling all IDE drives PATA?
  6. Just how difficult is it to work with RAID? I’ve still got nightmares from when I was using SCSI drives… (never use a SCSI controller built into the motherboard)
  1. Yes it is a risk, but cards are much less likely to fail than hard drives with mechanical parts. And at least when the card fails, there is a good chance it just shuts down without taking your data with it. Of course you may have only one CPU, video card, motherboard, etc., so it’s just another non-redundant electronic component that can fail and leave you without access to your data.

  2. Maybe - consult the card documentation. Certainly this would be no problem for RAID 1, but RAID 5 would require the data to be redistributed over the disks - kind of like a defragment operation.

  3. Almost certainly

  4. Beats me. I’ve heard of SATA II but haven’t checked it out.

  5. Parallel ATA, the new name for old-style ATA/IDE drives.

  6. With a RAID card it’s pretty much transparent. Set it up and it all looks like one big disk.


Yes and no – hard drives have mechanical parts which will simply wear out over time. Cards have no moving parts, and even if they do go bad, all you’d need to do is replace the card with an identical model and the RAID would still be there.

Not to a RAID-5… or it would become a RAID-6 (no such thing). The idea of a RAID-5 is that you get 80% of the storage of 5 drives, but the reliability increases from R to R[sup]5[/sup], roughly.

That’s the idea. There may be some configuration hiccups; I would use the RAID-5 for data storage and a stand-alone drive for the operating system, so that you could boot into a working OS environment without needing to configure the hardware by hand.

I have no idea.

Yep. It stands for “Parallel Advanced Technology Attachment”; ATA is a standard dating back to when one kind of PC was the PC/AT, so the “AT” in the name is really just historical. PATA distinguishes ATA from Serial ATA (SATA).

I’ve had a RAID-0 go bad on me, which was my own damn fault, and it was a nightmare to fix. I’ll never put my system files on a RAID-0 again, but other than that experience, the RAID is transparent. Your system should treat it as a single volume. You can partition it any which way you like, too.

If the RAID card goes, you’ve lost access to the data but not the data itself. Access can be restored by replacing the RAID card with an identical one.

Possibly. It depends upon the RAID card. You won’t be able to add directly to an existing array, but you may be able to add a drive as a hot spare. Also, if you add sufficient drives, you may be able to create a second RAID set without disturbing the first.

Yes. I’ve done this many times.

PATA is Parallel ATA as opposed to Serial ATA.

It’s no different. You make your RAID and away you go.


Not really. The number “5” in RAID-5 has nothing to do with the number of drives. We have 3 drive RAID5 arrays here, and we have 10 drive RAID5 arrays. The 5 just designates the redundancy algorithm. In this case, it stripes the data across the disks, with some parity built in so that if one drive fails, the data stored on it can be rebuilt using the parity data on the other drives.
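The rebuild trick above boils down to XOR parity. Here’s a toy Python sketch of one stripe (hypothetical data; real controllers work on large chunks and rotate the parity position across drives, which this ignores):

```python
from functools import reduce

def parity(chunks):
    """XOR equal-length chunks byte-by-byte to produce a parity chunk."""
    return bytes(reduce(lambda a, b: a ^ b, group) for group in zip(*chunks))

# One simplified RAID-5 stripe: three data "drives" plus one parity chunk.
data = [b"AAAA", b"BBBB", b"CCCC"]
p = parity(data)

# Simulate losing drive 1: XOR the surviving chunks with the parity chunk
# and the missing data falls out, because x ^ x == 0.
rebuilt = parity([data[0], data[2], p])
assert rebuilt == data[1]
```

The same property is why only one drive’s worth of space goes to parity: any single missing chunk is the XOR of everything else in the stripe.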

IIRC, SATA-II includes something called NCQ, or Native Command Queueing. The drive itself sorts out write/read operations and head positioning in the most efficient order.

Essentially, if the requests coming at it are

Write Sector 700
Read Sector 500
Write Sector 90
Read Sector 100
Read Sector 300

and the head is sitting at Sector 50, it will internally juggle that and do it as

Write Sector 90
Read Sector 100
Read Sector 300
Read Sector 500
Write Sector 700

in one sweep, rather than a lot of back and forth head positioning.

So far, NCQ is of great benefit to servers that are continually moving lots of data in and out of their disks, but not so much for a home PC.
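The reordering described above is essentially an elevator scan. A rough Python model (a simplification: real NCQ firmware also weighs rotational position, not just sector order):

```python
def ncq_order(requests, head):
    """Reorder (op, sector) requests into one sweep from the current head
    position: everything at or past the head in ascending order, then
    everything behind it on the way back."""
    ahead = sorted((r for r in requests if r[1] >= head), key=lambda r: r[1])
    behind = sorted((r for r in requests if r[1] < head),
                    key=lambda r: r[1], reverse=True)
    return ahead + behind

queue = [("write", 700), ("read", 500), ("write", 90),
         ("read", 100), ("read", 300)]
print(ncq_order(queue, head=50))
```

With the head at sector 50, this produces exactly the one-sweep order from the example: 90, 100, 300, 500, 700.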

There are a number of incorrect statements made here.

First, there is a RAID 6, but adding an additional disk to a RAID-5 will not give you RAID-6; it’ll give you RAID-5 with n+1 drives. Basically, RAID-6 works like RAID-5, but with two independent parity blocks distributed across the drives. Meaning you get the space of n-2 drives (as opposed to RAID-5’s n-1), but you can recover from two simultaneous drive failures with no data loss, compared to RAID-5’s recovery from a single drive failure with no data loss.

I’m not sure where you got the R[sup]5[/sup] value, but it’s clearly wrong, especially considering that for R < 1 (which any reliability figure is), R[sup]5[/sup] < R, meaning your RAID would be less reliable than the individual drive. If my calculations are correct, the actual reliability of a RAID-5 with n drives of reliability R is R[sup]n[/sup] + nR[sup]n-1[/sup](1 - R). This makes all kinds of assumptions about R being a constant and the times of failure of the drives being independent (both of which are probably not true), so it’s not particularly interesting except as a fun probability problem. The expansion to the reliability of RAID-6 is left as an exercise for the reader (hint: binomial coefficients out to any two drives failing).
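The closed form above (all n drives survive, or exactly one fails) can be sanity-checked in Python by brute-forcing every combination of drive states, under the same independence and constant-R assumptions:

```python
from itertools import product

def raid5_reliability(n, r):
    """Closed form: P(zero failures) + P(exactly one failure)."""
    return r**n + n * r**(n - 1) * (1 - r)

def brute_force(n, r, max_failures=1):
    """Sum the probability of every drive-state vector that has at most
    max_failures failed drives (True = drive survives)."""
    total = 0.0
    for states in product([True, False], repeat=n):
        if states.count(False) <= max_failures:
            p = 1.0
            for up in states:
                p *= r if up else (1 - r)
            total += p
    return total

assert abs(raid5_reliability(5, 0.9) - brute_force(5, 0.9)) < 1e-12
```

Setting max_failures=2 in brute_force models the RAID-6 case (the binomial-coefficient exercise hinted at above). Note that with R = 0.9 and five drives the array reliability works out above 0.9, i.e. better than a single drive, as you’d hope.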

I’ve read that a server with a server-class RAID controller doesn’t need NCQ as the controller can do the same thing. True?

Thank you for the answers–they helped immensely. I did a little Googling and it looks like certain newer controllers come with software to grow RAID arrays. So that might be a good option for me.

The only reason I asked about SATA II drives being compatible with SATA controllers is that NewEgg isn’t selling 500 GB hard drives that are plain old SATA I. But there’s always the option of going with four or six 400 GB drives, which would likely be cheaper than three 500 GB drives at this point, plus I wouldn’t have to worry about growing the RAID array.

(1) Mea culpa! It’s pretty clear that I misunderstand RAID-5; I should have kept my mouth shut. Props to thewalrus for setting me straight. :smack:

(2) You realize that you’re talking about building a home computer with over a terabyte of storage, right? What on earth do you need a terabyte for? :eek:

(3) Just last week I put new values in my price/GB “sweet spot” spreadsheet, and used prices from Newegg. As of 14 September, the best price point was somewhere between 250GB and 300GB, where you’re paying about $0.40/GB. That was for IDE drives, but there should be a similar sweet spot for SATA drives.

I -do- have a need for it. I’ve got 800 GB of storage with approximately 650 of that filled, not including the 250 or so I’ve backed up on other HDs & DVD-Rs.

The “sweet spot” doesn’t count for much if physical space for drives is a limitation. I hope to eventually move those drives over to a smallish case, and you can fit about 6 max in the kind of case I have in mind.

Just to add my experience with home PCs and servers at the hospital

  1. It also depends on how the computer uses the controllers. Some are configured through motherboard BIOS settings, some through the RAID card’s own BIOS. You might also have software that goes along with the card that lets Windows see the disks in Disk Management, but you’ll have to set them up as Dynamic Disks in order to get the benefit of RAID.

  2. Yes. If you have hot swappable disks (most SATA and SCSI are) you can plug them in while the OS is running, scan the devices, and use disk management or software to extend the volume or add another logical drive.

  3. Data isn’t stored on controllers.

  4. SATA II

  5. See previous posts.

  6. RAID operations are transparent for the most part. Only when you have to do some kind of hardware/technical work do you get into the nitty gritty of rebuilding volumes, expanding volumes, etc.

On a related note, I have considered some sort of RAID setup for my Linux file server – most likely mirroring. It sounds great that things run transparently (and they should), and I assume that I would use Linux tools to format my one unified RAID drive with ext2 or something like that and go from there.

Until the day that one of the two drives actually fails. How do I find out about this unpleasant event?
I suppose that the card could pop up something during the boot process, but if I leave the machine running untouched for weeks, how will I ever know that one of the drives failed?

If you’re just using raidtools or something similar, you should get a ton of messages in /var/log/messages.