Newbie Borked His RAID Filesystem?

Hi all,

Sorry if this is a repeat post. I tried posting this a few different ways, with little luck.

I was trying to put together a fileserver using a 6-disc RAID-5 array and Fedora 8. I'd never touched Linux before, but I managed to bungle through the install/setup leaning heavily on Google. Disc /dev/sda held the OS, while /dev/sdb through /dev/sdg were the RAID discs. During the RAID setup, /dev/sdd failed, at which point I purchased a replacement, and - on a whim - an additional disc to add to the array.

The new discs showed up as /dev/sdd and /dev/sdh. I originally added them straight into the array using mdadm --add /dev/md0 /dev/sdd /dev/sdh. At that point, /dev/sdd replaced the failed disc, and /dev/sdh was added as a spare. BUT ... then I realized I'd screwed up and forgotten to create a partition on either of the new discs. So I --fail'ed and --remove'd both discs and used fdisk to add a "linux raid autodetect" partition on each. I then added those partitions back into the array using mdadm --add /dev/md0 /dev/sdd1 /dev/sdh1 (using the partitions this time).
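
For reference, this is roughly the sequence I ran for the swap (reconstructed from memory, so the exact order and wording may not be word-for-word what I typed):

    mdadm /dev/md0 --fail /dev/sdd --remove /dev/sdd
    mdadm /dev/md0 --fail /dev/sdh --remove /dev/sdh
    fdisk /dev/sdd    # created one primary partition, type fd (Linux raid autodetect)
    fdisk /dev/sdh    # same thing on the spare
    mdadm --add /dev/md0 /dev/sdd1 /dev/sdh1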

From there, both discs reappeared as before, with /dev/sdd1 taking its rightful place in the array and /dev/sdh1 as a spare. I then started the reshape with mdadm --grow /dev/md0 --raid-devices=7. An agonizingly long week later, the grow finished. I was going to finish off by checking the md0 filesystem with fsck.ext3 and a resize2fs, but I figured I could do that in the morning, and the machine needed a break. I shut the system down, knowing everything was good.
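
The plan for the morning, which I never got to, was basically this two-step (I'm assuming these are the right tools for an ext3 filesystem sitting directly on /dev/md0):

    fsck.ext3 -f /dev/md0    # not sure the -f is strictly needed
    resize2fs /dev/md0       # grow the filesystem to fill the enlarged array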

Until I rebooted.

Upon starting up, the kernel now complains about an invalid argument while trying to open /dev/md0. If I go into "maintenance" mode and try to reassemble using mdadm --assemble -v, I get a bunch of messages for /dev/sd[bcefg] saying they have no superblocks and the wrong UUIDs. Oddly, mdadm finds superblocks on /dev/sdd and /dev/sdh, which were the "replacement" discs.
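
In case the exact invocation matters, this is roughly what I've been trying from the maintenance shell (I may well have the device list wrong, which could be part of the problem):

    mdadm --assemble --verbose /dev/md0 /dev/sd[bcefg] /dev/sdd1 /dev/sdh1
    mdadm --assemble --scan --verbose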

I get the feeling I screwed up in two ways:


a) By not creating partitions right off the bat before adding /dev/sdd and /dev/sdh to the array. Neither /dev/sdd1 nor /dev/sdh1 shows up in the /dev directory now; only /dev/sdd and /dev/sdh are visible. Oddly, fdisk -l *does* show the raid autodetect partitions on both, and fdisk has no problem verifying the partitions. Does the "boot" flag need to be set on these discs? The other five drives in the array have that flag set, but sdd1 and sdh1 don't.
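
For what it's worth, this is how I've been checking, and how I'd toggle the boot flag if that turns out to be the fix, but I don't want to start flipping flags until someone confirms it's safe:

    fdisk -l /dev/sdd    # shows /dev/sdd1 with Id fd, "Linux raid autodetect"
    fdisk /dev/sdd       # 'a' to toggle the bootable flag on partition 1, 'w' to write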


b) I probably should have updated mdadm.conf after the --grow, which I neglected to do. It currently lists only six drives as being in the array, not seven. Can mdadm --scan recover the order of the discs in the array? That said, I couldn't update mdadm.conf even if I knew what to put in it, as "maintenance mode" seems to have mounted the rest of my filesystem read-only. :(
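
I'm assuming the read-only part, at least, is just a matter of remounting the root filesystem, something like the following, but please correct me if that's wrong:

    mount -o remount,rw /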

Does anyone know a way out of this? I get the feeling that if I can get /dev/sdd1 and /dev/sdh1 to show up again, I might be able to force a reassemble, but I'm not sure how. And if I can get the array to re-assemble, how can I dump the results of mdadm --scan --verbose into mdadm.conf while the system is mounted read-only?
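
Is something along these lines the right direction? I haven't dared run it yet, and the device list is only a guess on my part:

    mdadm --stop /dev/md0
    mdadm --assemble --force --verbose /dev/md0 /dev/sd[bcefg] /dev/sdd1 /dev/sdh1
    mdadm --detail --scan >> /etc/mdadm.conf    # once the root filesystem is writable again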

Thanks in advance!
Tim
