I think I may have just painted myself into a corner. Could someone who
knows LVM and software RAID look this over?
I installed FC5 with the default partitioning settings (/boot + LVM
partition containing LVs for / and /home) on /dev/hda. Then I added two more
drives to the box and ran:
# mdadm --create /dev/md0 -l raid1 -n 2 /dev/hdb /dev/hdc
Then I used system-config-lvm to join /dev/md0 to the existing volume group
and moved the extents containing /home onto /dev/md0. The idea is that I
want my data on redundant disks, but I don't care about the OS itself,
since it can be trivially reinstalled. Everything worked just fine.
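(For reference, I believe the command-line equivalent of what
system-config-lvm did is roughly the following; /dev/hda2 as the original
PV and LogVol02 as the /home LV are just my guesses at the default FC5
names, so adjust to taste:)
# pvcreate /dev/md0
# vgextend VolGroup00 /dev/md0
# pvmove -n LogVol02 /dev/hda2 /dev/md0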
Then I rebooted, and boot halted in the initrd with:
Red Hat nash starting
Couldn't find device with uuid ...
Couldn't find all physical volumes for volume group VolGroup00
I can boot off a rescue CD and run:
# mdadm -A /dev/md0 /dev/hdb /dev/hdc
mdadm: /dev/md0 has been started with 2 drives
After that, I can see via lvm that all the VGs and LVs are there.
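(Specifically, from the rescue environment something like this brings
everything up and lists it:)
# vgscan
# vgchange -ay VolGroup00
# lvs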
So it seems like /dev/md0 isn't getting assembled at boot. I tried
following the mdadm man page and had grub pass the kernel the parameter
md=0,/dev/hdb,/dev/hdc, but that did absolutely nothing.
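(Concretely, the kernel line in my grub.conf looked something like this;
the kernel version here is just an illustration:)
kernel /vmlinuz-2.6.15-1.2054_FC5 ro root=/dev/VolGroup00/LogVol00 md=0,/dev/hdb,/dev/hdc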
So, basically, I have two questions. First, how do I go about diagnosing
this and getting the box to boot up right?
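(My current guess is that the initrd simply doesn't know about the array,
and that I need to record it in mdadm.conf and rebuild the initrd, along
these lines, but I'd rather hear from someone who knows before I start
regenerating initrds:)
# mdadm --detail --scan >> /etc/mdadm.conf
# mkinitrd -f /boot/initrd-$(uname -r).img $(uname -r)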
Second, am I being an idiot in the first place by making my VG contain a
single drive with the OS and a software RAID1 array with /home? That is, if
the non-redundant OS drive fails, will I still be able to plop the OS on a
new drive, and join the RAID1 array (or even just one of the two drives in
the array) to the new drive's VG?
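(For the record, my mental model of that recovery is roughly the
following, though I've never tried it: reinstall on a new drive, give the
new install's VG a name other than VolGroup00 so the names don't clash,
and then drop the dead PV from the old VG and mount /home from it:)
# mdadm -A /dev/md0 /dev/hdb /dev/hdc
# vgreduce --removemissing VolGroup00
# vgchange -ay VolGroup00
# mount /dev/VolGroup00/LogVol02 /home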