roland wrote:
On Sat, 11 Oct 2008 23:56:34 +0200, Bill Davidsen <davidsen@xxxxxxx> wrote:
roland wrote:
Hello,
I have a server running on RAID5, to which I would like to add a
4th disk with a separate filesystem. I need some workspace.
The same disk could be used to replace a failing disk.
That is a spare disk, and it can be used only as such. I'm not sure what
you had in mind by "separate filesystem," but you can't install
another drive such that Linux will use it in two ways at once.
If that was your intent, it pretty much isn't going to work, and if the
spare were deployed, all data on it would be lost.
Stuart Sears explained to me how to install this disk as a hot spare, but
it looks a little too dangerous to me. This server is serving 20
workstations, so I need to do this without taking too many risks. I
admit to being a novice at disk management. I always do a
standard installation of a filesystem.
Assuming you just want to add a spare, something like this works fine:
mdadm --add /dev/md0 /dev/sdf1
Then a cat of /proc/mdstat should show the device added, running as a
spare. I did that, then failed one of the existing drives, and the
array rebuilt on the spare just fine, and I got a message from monitor
mode saying it happened and was fixed.
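For the record, the whole test sequence on a software (md) array is only a
few commands; the device names below are examples, so substitute your own:

mdadm --add /dev/md0 /dev/sdf1              # add the new disk as a spare
cat /proc/mdstat                            # sdf1 should now show with an (S) for spare
mdadm --manage /dev/md0 --fail /dev/sda1    # simulate a failure on an active member
cat /proc/mdstat                            # watch the array rebuild onto the spare
mdadm --manage /dev/md0 --remove /dev/sda1  # pull the "failed" member out of the array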
I have to mention that the raid5 was built with the IBM utilities on
this x3500 server, so I suppose mdadm wouldn't work. Or am I wrong?
Secondly, I need the extra space only as a work area.
That's why I thought, maybe I can:
- insert a 4th disk and add a filesystem that isn't part of the raid5,
or
- insert a 4th and 5th disk and configure them as a raid1, to be used for
non-critical applications.
In case of a failure in the raid5, I could use the 4th disk to
replace the failing disk and let the IBM utilities rebuild
the raid5.
Does this make sense?
Well, I understand it now. ;-)
Ask IBM how to do that; you are well outside anything related to the distribution
and into hardware-specific territory. I'd love to help you, but the only IBM RAID
controller I have ever used as anything but a box of drives was the "ServeRAID"
unit used in rack-mounted LVD SCSI server systems. Love the controller, but it's
pretty uncommon.
I would say you can easily use other drives outside the RAID array; adding them
to the array is a *very* specific issue, depending not only on the controller but
on the firmware version. Even if I were still running servers for an ISP, I would
be asking IBM for docs before trying anything fancy. Most IBM gear will run
without the bad drive, send you a message, and either power down the drive or
blink the light on the bad one, so you can pull the dead drive, plug in a spare
in a hot-swap carrier, and it will recover by magic.
Ask IBM about this.
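If the controller will present the extra disk to Linux as a plain standalone
volume, putting a work filesystem on it is just the usual steps. A minimal
sketch, assuming the new disk shows up as /dev/sdb (check dmesg for the real
name on your system):

fdisk /dev/sdb        # create a single partition, type 83 (Linux)
mkfs.ext3 /dev/sdb1   # put an ordinary filesystem on it
mkdir -p /mnt/work    # mount point for the work area
mount /dev/sdb1 /mnt/work
# add a line like this to /etc/fstab to make the mount permanent:
# /dev/sdb1  /mnt/work  ext3  defaults  1 2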
So could someone tell me how to do this in a simple way? I searched
the internet for a simple howto, but was unsuccessful. I have to do
this at a client's site in France, a few hours' drive away.
Could someone keep me from having a sleepless night?
I would test the existing array before counting on it to be okay, just
in case a bad spot has developed. See the wiki
(http://linux-raid.osdl.org). Oh, and there is a linux-raid mailing
list where there are lots of us who have found problems the hard way
and want to share.
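On an md array the check is built in; a hardware-controller array needs the
vendor's own verify tool instead. Assuming /dev/md0 is the array:

echo check > /sys/block/md0/md/sync_action  # read every sector and verify parity in the background
cat /proc/mdstat                            # shows the progress of the check
cat /sys/block/md0/md/mismatch_cnt          # non-zero afterwards means inconsistent stripes were found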
--
Bill Davidsen <davidsen@xxxxxxx>
"We have more to fear from the bungling of the incompetent than from
the machinations of the wicked." - from Slashdot
--
fedora-list mailing list
fedora-list@xxxxxxxxxx
To unsubscribe: https://www.redhat.com/mailman/listinfo/fedora-list
Guidelines: http://fedoraproject.org/wiki/Communicate/MailingListGuidelines