> So considering that, what do you gain from dedicated hardware for RAID?
> You get a commercially supported RAID software and hardware package, and
> you get to unload a bit of CPU from the main system.

The big thing it saves you on in RAID 1 & 5 is memory bandwidth, and in
RAID 5 doubly so for the XOR costs. The second thing it helps with is bus
bandwidth, as each chunk of data crosses the PCI(X) bus only once. In the
PCI world that really helped; with PCI-X it's less clear. The last benefit
is a battery-backed cache.

> Considering that the CPU on the card at max performance is probably 1/3
> of a core from a modern CPU, then that is not really much of a savings.
>
> The real consideration for RAID 5 is survival. In either situation you
> have to have a spare drive, and you have to consider availability of new
> drives to match them in the future.

Bigger ones will do.

Alan
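The XOR cost is easy to see: the RAID-5 parity block is just the byte-wise
XOR of the data blocks in a stripe, so software RAID has to push every block
through the CPU and memory bus. A toy Python sketch (the block contents are
made up for illustration):

```python
def parity(blocks):
    """Byte-wise XOR of equal-sized blocks; this is RAID-5 parity."""
    p = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            p[i] ^= b
    return bytes(p)

# Three data blocks in one stripe (invented contents).
stripe = [b"\x01\x02\x03", b"\x04\x05\x06", b"\x08\x09\x0a"]
p = parity(stripe)

# The same XOR rebuilds any one lost block from the survivors plus parity:
rebuilt = parity([p, stripe[1], stripe[2]])
assert rebuilt == stripe[0]
```

A hardware controller does that XOR on the card, so the host never touches
the parity bytes at all.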
Also keep in mind that doing RAID 5 in software on the main CPU leaves you open to stripe corruption if you lose power or have a system crash. Whenever you update a block in a stripe, you have to read the whole stripe, recalculate the parity block, and write both it and the changed block back to disk. Those two writes are not atomic, and a power failure or crash between them leaves your stripe in an inconsistent state.
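To make that window concrete, here's a toy Python sketch (block contents
invented) of a stripe update where the crash lands between the data write
and the parity write. Afterwards, XOR-ing all the blocks together no longer
comes out to zero, which is how you can tell the stripe is inconsistent:

```python
def xor_parity(blocks):
    """Byte-wise XOR of equal-sized blocks (RAID-5 parity)."""
    p = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            p[i] ^= b
    return bytes(p)

# A healthy stripe: three data blocks plus their parity block.
data = [bytearray(b"AAAA"), bytearray(b"BBBB"), bytearray(b"CCCC")]
par = bytearray(xor_parity(data))

# Update block 0: recompute parity, then do two separate writes.
new_block = b"ZZZZ"
new_parity = xor_parity([new_block, data[1], data[2]])

data[0][:] = new_block
# -- crash here: the parity write below never reaches the disk --
# par[:] = new_parity

# On reboot, data XOR parity is no longer all-zero: the stripe is broken,
# and a later disk failure would "reconstruct" garbage from this parity.
assert xor_parity(data + [par]) != bytes(len(par))
```

With a failed disk in that state, the rebuild would silently return wrong
data, which is exactly the RAID-5 "write hole".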
If you do the RAID-5 in your disk controller, it can atomically log both disk blocks to some sort of nonvolatile memory and ensure that both are written correctly to disk even if the system reboots between the two writes.
You can get around this by doing some sort of journalling to declare your intent to update the stripe, but I've never heard of LVM doing this.
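The intent-journalling idea looks roughly like this minimal sketch (purely
illustrative; the names are invented and real md/LVM code would be far more
involved): record which stripe you are about to touch in stable storage, do
the two writes, then clear the record. Recovery just re-parities any stripe
still marked dirty.

```python
# Toy intent log for stripe updates. In real life this set would live
# in NVRAM or a journal region on disk and be flushed before step 2.
intent_log = set()

def update_stripe(stripe_id, write_data, write_parity):
    intent_log.add(stripe_id)      # 1. declare intent (flush to disk)
    write_data()                   # 2. write the changed data block
    write_parity()                 # 3. write the new parity block
    intent_log.discard(stripe_id)  # 4. clear intent once both landed

def recover():
    """After a crash, return stripes that may have mismatched parity;
    each one gets its parity recomputed from its data blocks."""
    return sorted(intent_log)
```

If the crash falls between steps 2 and 3, `recover()` reports the dirty
stripe and recomputing its parity from the data blocks restores consistency,
at the price of an extra synchronous log write per update.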
I keep asking for someone to contradict me on this point and say that LVM's RAID-5 is smart enough to handle unexpected reboots, but so far nobody has.
Wayne.
--
fedora-list mailing list
fedora-list@xxxxxxxxxx
To unsubscribe: https://www.redhat.com/mailman/listinfo/fedora-list
Guidelines: http://fedoraproject.org/wiki/Communicate/MailingListGuidelines