David G. Miller wrote:
Ubence Quevedo <r0d3nt@xxxxxxxxxxx> wrote:
Hi Everyone,
I am redoing my system and have three 250GB hard drives that I was
going to try to set up in a RAID 5 configuration. My motherboard
is an ASUS K8N4E-Deluxe with both nForce RAID and Silicon Image
RAID. My question is: would it be best to go with either of the
"hardware" RAID solutions [I know it isn't truly hardware RAID], or
should I use Fedora's built-in software RAID for what I want to
accomplish? I can't afford an LSI or 3Ware RAID card, or else I
wouldn't be posting this question.
Has anyone had any problems/horror stories with the particular
hardware RAID solutions I have mentioned? On the other hand, has
anyone had any particularly bad experiences with the Fedora software
RAID?
I haven't had a chance to look through the mailing list to see if
anyone has asked a similar question, so please don't flame me!
Thanx, and I look forward to your responses!
-Ubence
To answer your question with a question, why do you want RAID? Are
you looking for a system that can reboot even with a failed drive,
more speed, or just security against hardware failure? I run Linux
software RAID-1 on my server and I can power down, pull a ribbon cable
off of a drive and then power up with no problems. Chances are the
built-in RAID will give you redundancy if that's all you need. It may
or may not let you autonomously boot with a failed drive.
Perhaps someone else can chime in as to whether the hardware RAID you
have is any faster than Linux software RAID.
Cheers,
Dave
Hi Dave,
My experience with RAID in both hardware and software under Linux is
that speed depends on the context of what you're doing.
For your average user, there is no difference, and you've just blown x
hundred dollars on a piece of hardware when software will do the job
just fine. Software RAID does impose some overhead on the CPU, but on
most modern CPUs this is practically nothing.
Hardware solutions can offer the following benefits over software:
dedicated RAM for caching, battery backup on the card for that RAM,
diagnostic utilities, and RAID management directly through a BIOS.
The dedicated RAM on hardware does make things faster: the controller
writes to the RAM, reports success up the I/O tree, and then writes to
the disks when it can. However, this benefit only becomes apparent
when you're really thrashing the I/O, especially when serving lots of
little files and doing writes too.
RAID 0 - striping, two or more disks; add them together to get the
container size. There is no redundancy: if any disk fails, your
container is toast. You can implement 0+1, which is striping plus
mirroring, but then you need four disks.
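To make the striping concrete, here is a toy sketch (not part of any real RAID driver) of how RAID 0 deals fixed-size chunks round-robin across disks; the chunk size and disk count are made-up parameters for illustration:

```python
CHUNK = 4  # bytes per stripe chunk (real arrays use e.g. 64 KiB)

def stripe(data: bytes, ndisks: int = 2):
    """Split data into chunks and deal them round-robin across disks."""
    disks = [bytearray() for _ in range(ndisks)]
    for i in range(0, len(data), CHUNK):
        disks[(i // CHUNK) % ndisks].extend(data[i:i + CHUNK])
    return disks

def unstripe(disks, total_len: int) -> bytes:
    """Reassemble the original byte stream from the striped chunks."""
    out = bytearray()
    offsets = [0] * len(disks)
    d = 0
    while len(out) < total_len:
        out.extend(disks[d][offsets[d]:offsets[d] + CHUNK])
        offsets[d] += CHUNK
        d = (d + 1) % len(disks)
    return bytes(out)

data = b"ABCDEFGHIJKLMNOP"
disks = stripe(data)
assert unstripe(disks, len(data)) == data
# Lose either disk and the container is toast: half the chunks are gone.
```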
RAID 1 - mirroring, two disks; one is usually the master, the other a
slave that gets a copy of everything that goes on the master. If
either disk fails you keep on ticking, and you rebuild once you get a
replacement.
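A quick sketch of the mirroring idea, purely illustrative: every write goes to both disks, and a read can be served by whichever disk is still alive.

```python
mirrors = [bytearray(16), bytearray(16)]  # two mirrored "disks"

def mirror_write(offset: int, data: bytes) -> None:
    """Writes land on every mirror."""
    for d in mirrors:
        d[offset:offset + len(data)] = data

def mirror_read(offset: int, length: int, failed=frozenset()) -> bytes:
    """Reads are served by any surviving mirror."""
    for i, d in enumerate(mirrors):
        if i not in failed:
            return bytes(d[offset:offset + length])
    raise IOError("all mirrors failed")

mirror_write(0, b"important data!!")
# Disk 0 dies; the mirror keeps serving the data while you rebuild.
assert mirror_read(0, 16, failed={0}) == b"important data!!"
```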
RAID 5 - minimum of three disks; you lose one disk's worth of capacity
to parity (with three disks you end up with a size roughly equal to
two), and the parity is actually distributed across all the disks
rather than living on a dedicated one. It can span as many disks as
needed to make very large containers. One disk can fail and your
container still works, but in degraded mode; get the replacement and
it rebuilds on the fly.
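The parity trick is just XOR. Here is a sketch of a single RAID 5 stripe with three disks: two chunks of data plus their XOR, from which any one missing chunk can be rebuilt. (Real RAID 5 rotates which disk holds the parity from stripe to stripe.)

```python
def xor(a: bytes, b: bytes) -> bytes:
    """Byte-wise XOR of two equal-length chunks."""
    return bytes(x ^ y for x, y in zip(a, b))

d0 = b"\x01\x02\x03\x04"   # data chunk on disk 0
d1 = b"\x10\x20\x30\x40"   # data chunk on disk 1
parity = xor(d0, d1)       # parity chunk on disk 2

# Disk 1 fails; its chunk is recovered from disk 0 and the parity.
assert xor(d0, parity) == d1
```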
RAID 6 - less used; like 5, but with a second, independent parity, so it survives two simultaneous disk failures at the cost of another disk's worth of capacity.