What happens to a RAID 0 array if some of the constituent drives are unavailable?

2014-07-07
  • user1305850

    If I have three drives in a RAID 0 array containing a file system, and I happen to disconnect two of the drives, then what would I see on the only connected drive? Would I see nothing? Would I get an error?

    I am running Mac OS X Mavericks on a custom built hackintosh system.

  • Answers
  • Michael Kjörling

    If you use actual RAID 0 (striping without redundancy), then if any one of the drives fails, is disconnected, etc., the entire array fails. You won't be able to access any of your data in such a scenario. The OS is unlikely to recognize the remaining drives as having an identifiable file system; I don't know exactly how OS X handles that scenario, and it probably depends on what you've put on the array too, but that's largely irrelevant at that point.

    It might be possible to recover some of your data (the data which happened to be stored on the still-functional drive), but even that depends very much on the specifics of the RAID implementation and those of the data.

    That is why, except for specialized purposes, RAID 0 is often a very bad idea. It mainly gives you speed gains, but it comes at a relatively high cost in terms of risk of loss of data. Usually, one does RAID 1+0 (mirroring, then striping) or perhaps 0+1 (striping, then mirroring) rather than pure RAID 0. Pure RAID 0 is mainly useful for transient data where speed is the most important consideration.
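The reason a single failed member is fatal can be made concrete with a small sketch. Here is a minimal Python illustration of round-robin striping (the tiny block size and three-drive layout are illustrative assumptions, not any particular implementation's parameters):

```python
# Minimal illustration of RAID 0 striping: data is split into blocks
# which are dealt out round-robin across member drives, so any file
# larger than one block spans multiple drives.
BLOCK_SIZE = 4   # bytes per stripe unit -- unrealistically small, for illustration
NUM_DRIVES = 3

def stripe(data, num_drives=NUM_DRIVES, block_size=BLOCK_SIZE):
    """Split data into fixed-size blocks and distribute them round-robin."""
    drives = [bytearray() for _ in range(num_drives)]
    for i in range(0, len(data), block_size):
        drives[(i // block_size) % num_drives] += data[i:i + block_size]
    return drives

data = b"The quick brown fox jumps over the lazy dog."
drives = stripe(data)

# With every drive present, the original data can be reassembled.
# With any one drive missing, only a third of the blocks survive,
# and the file system metadata needed to interpret them was itself
# striped, so it is incomplete too.
print(bytes(drives[0]))  # disjoint fragments, not a readable file
```

Running this shows that each drive holds only scattered fragments; nothing on a single surviving drive is a usable copy of the file.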

    Also note that depending on the software (even if it's in firmware), it might not be trivial to grow even a RAID 0 array after it has been created.

    Just how much data are you dealing with, and what alternatives are available to you? For example, might four large drives in a RAID 5 (striping with single parity) configuration be practical? Four 2 TB drives in RAID 5 will give you 6 TB of usable storage space, and the array will survive the loss of any one drive with no immediate harm to the data, although you'd want to replace a failed drive as soon as possible. If you have a full backup elsewhere and can live with the downtime needed to restore from backup if a second drive fails before you have had a chance to rebuild (resilver) the array, I think you'd be okay with single parity. If you're paranoid, go for double parity, but it'll cost you another drive's worth of storage capacity.
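The capacity arithmetic above generalizes simply: with single parity (RAID 5) you give up one drive's worth of space, with double parity (RAID 6) two. A quick sketch, assuming N identical drives:

```python
def usable_capacity_tb(num_drives, drive_tb, parity_drives=1):
    """Usable space of a parity RAID array built from identical drives.

    parity_drives=1 models RAID 5, parity_drives=2 models RAID 6.
    """
    if num_drives <= parity_drives:
        raise ValueError("need more drives than parity devices")
    return (num_drives - parity_drives) * drive_tb

print(usable_capacity_tb(4, 2))                   # RAID 5: 6 (TB), as above
print(usable_capacity_tb(4, 2, parity_drives=2))  # RAID 6: 4 (TB)
```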

    It also sounds like what you are really after isn't so much RAID as it is a practical approach to volume management, so that you can grow your storage solution as your needs change. If that's so, you may actually want to have a look at ZFS, which is in essence a combined volume manager and enterprise-grade file system that allows you to relatively easily grow your storage solution piecemeal. It might not be a practical solution for your system disk, but for your data storage needs it just might be what you are looking for. If this is what you are actually after, I would strongly suggest posting that as a question, as it really is only peripherally related to RAID.

    And of course, the obligatory comment on the subject: regardless of level, RAID is not a backup.


  • Related Question

    windows - Raid 0 - what happens with the data when hdds plugged into another motherboard
  • Questioner

    I am buying a new motherboard, an Asus M4A785TD-V EVO, to replace the Asus A8N32, which recently died.

    The original motherboard has an NVIDIA NVRAID controller which I was using to stripe (RAID 0) two HDDs. The new motherboard seems to have an AMD RAID controller, which I suspect is not compatible with NVRAID.

    Is there any chance I will be able to see the data after connecting these two disks to the new motherboard?


  • Related Answers
  • David Spillett

    There is very little chance that the new controller will recognise a RAID array set up by the old one. Sometimes even switching between controllers from the same manufacturer, or even the same range, can cause the array not to be recognised.

    This is one of the reasons I recommend against using the RAID support found on motherboards and cheap I/O cards (the other reason being that they are usually "fake RAID", combining the worst points of hardware and software RAID in one ugly package).

    If you use Linux's software RAID then you can just transplant the drives to another machine, and more often than not (much more often than not) it will work pretty much automatically. Sometimes a little manual intervention is needed (for instance: if the configured device node conflicts with an existing array in the new machine, you need to manually reassemble the array, but this is done with a single mdadm command). I'm guessing other OSs' software RAID solutions will transport between machines as easily, though Linux is the only one I know of for sure, as I've performed the operation myself a number of times.
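For reference, the reassembly mentioned above typically looks something like this (the device names are examples; mdadm normally finds the arrays itself from the metadata stored on the member drives, and these commands need root and real hardware to run):

```shell
# Scan all drives for md superblocks and assemble any arrays found
mdadm --assemble --scan

# Or name the array node and its members explicitly (example device names)
mdadm --assemble /dev/md0 /dev/sdb1 /dev/sdc1

# Inspect the result
cat /proc/mdstat
mdadm --detail /dev/md0
```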

    If you don't want to use software RAID, get a good controller card; then you can transplant the controller along with the drives, or get a new controller of the same type. (This doesn't work with cheap RAID controllers because, as with wireless NICs, manufacturers sometimes silently switch controller chips between revisions, so even though the box has the same part number they aren't necessarily compatible.) Good hardware RAID's advantages over software RAID and "fake RAID" are better performance in some cases, complete OS independence, and (if you pay good money) safety features like battery-backed cache/buffers.

  • Russ Warren

    It's 99% likely that the new RAID controller will not recognize your current setup. Plan for the worst -- back up your data and rebuild the array on the new controller.