Re: lvm over raid confusion

Clyde E. Kunkel <clydekunkel7734 <at> cox.net> writes:

> > Maybe I can separate the VGs so that the RAID array isn't part of VolGroup00.
> > 
> 
> Try this:  after your ctrl-alt-F2 and start md0, do a vgscan, then 
> vgchange -ay --ignorelockingfailure.  Make sure the LVs look right. 
> Then, before you chroot, bind mount sys, dev and proc from the FC5 
> installed system.  When doing the mkinitrd, include the raid1 module 
> just to be sure...dunno...I have done similar and WFM.  Good luck!
> 
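For anyone who wants the above spelled out, here's roughly the command sequence Clyde is describing. A sketch only: I'm assuming the rescue environment mounts the installed system at /mnt/sysimage, as the FC5 rescue CD normally does.

vgscan                                # rescan for volume groups
vgchange -ay --ignorelockingfailure   # activate the LVs despite rescue-mode locking
mount --bind /proc /mnt/sysimage/proc
mount --bind /sys /mnt/sysimage/sys
mount --bind /dev /mnt/sysimage/dev
chroot /mnt/sysimage                  # then rebuild the initrd, as in the walkthrough below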

Well, for all those playing along at home: problem solved. Thank you all for
your advice, esp. Paul. For posterity's sake, here's roughly what I did (the
command syntax below is recreated from memory; don't take my word for it):
As above: boot off the rescue CD, hit ctrl-alt-F2 before letting it bring up
the Linux partitions, assemble the array with
mdadm -AR /dev/md0 /dev/hdb /dev/hdc
then flip back to the rescue UI and scan for Linux partitions (which now succeeds).
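Not in my original notes, but a quick sanity check at this point doesn't hurt:

cat /proc/mdstat   # md0 should show as an active raid1 across hdb and hdc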
chroot /mnt/sysimage
mkinitrd -v /boot/initrd-2.6.... (make sure raid modules are loaded)
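(From memory, the fuller form is something like the line below; mkinitrd also wants the kernel version as its second argument, and I believe --with= is the flag that forces a module in. Double-check the man page before trusting my flags, and substitute your actual kernel version for <version>.)
mkinitrd -v --with=raid1 /boot/initrd-<version>.img <version>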
pvmove /dev/md0               # migrate its extents onto the other PV(s) in VolGroup00
pvchange -x n /dev/md0        # disallow further allocation on it
vgreduce VolGroup00 /dev/md0  # drop it from the VG (pvremove refuses while it's still a member)
pvremove /dev/md0
Edit the VG metadata backup, /etc/lvm/backup/VolGroup00, to drop the removed PV
(saving a copy first, of course); it's that backup file, not lvm.conf, that lists the PVs.
vgcfgrestore -f /etc/lvm/backup/VolGroup00 VolGroup00 # Make sure the metadata reflects PV removal.
reboot, and voilà! The box boots and doesn't complain about the missing PV.

To test, I created a separate VG and added /dev/md0 to it... The system boots
fine and mounts the new VG's LVs.
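In case it helps anyone repeat the test, it was roughly this (VolGroupRAID and testlv are names I'm making up here for illustration, and I'm going from memory on the sequence):

pvcreate /dev/md0                      # put an LVM label back on the array
vgcreate VolGroupRAID /dev/md0
lvcreate -L 10G -n testlv VolGroupRAID
mke2fs -j /dev/VolGroupRAID/testlv     # ext3; then add it to /etc/fstab and reboot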

Once again, thanks guys, you were a lot of help.

