F9 install onto LVM on RAID1 on RAID5

Hello list,

I am trying to install a new F9 system onto a RAID/LVM setup.

As anaconda doesn't let me create the RAID/LVM configuration I require,
I created these devices using mdadm and lvm in the shell during
installation.  I then created the filesystems and swap space, with
labels, in the LVM volume group and I can mount them, read, write, etc..
So all is well with the underlying setup.
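
For reference, creating the filesystems and swap went roughly like this
(the label and logical-volume names are just my choices, nothing that
anaconda requires):

  mkfs.ext3 -L root /dev/vg0/root    # ext3 with a filesystem label
  mkswap -L swap0 /dev/vg0/swap      # labelled swap
  mount /dev/vg0/root /mnt/test      # mounts, reads and writes fine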

However, returning to the install's "custom layout" partitioning page,
anaconda displays the volume group, and the names and sizes of the
members are correct; but in the "TYPE" column it indicates "foreign",
and the mount point and other fields are empty.

If I click "LVM" or highlight one of the members and click "Edit" (to
set the mount points and formatting options), anaconda responds with:

--------------
Not enough physical volumes 
(...)
Create a partition or RAID array of type "physical volume (LVM)"
and click "LVM" again.
--------------

Consequently, I cannot edit the member details to set mount points and
formatting options and continue with the installation.

How does anaconda determine the "type" of a RAID array; do md devices
have types (as partitions do)?  How can I convince it that there are
indeed physical volumes for LVM (and that I have already configured
them)?
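
(For what it's worth, the underlying partitions do carry a type byte --
0xfd is "Linux raid autodetect", 0x8e is "Linux LVM" -- which sfdisk can
inspect or change; as far as I can tell, the md devices themselves have
no such field:

  sfdisk --print-id /dev/sda 1      # prints the type of sda1, e.g. "fd"
  sfdisk --change-id /dev/sda 1 fd  # sets it, if need be
)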

Alternatively, how can I tell anaconda to simply skip all partitioning
and let me specify which /dev entries to use for which mount points?
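
(Kickstart apparently has options along these lines -- untested, going
by the kickstart documentation -- but I would prefer to stay in the
interactive installer:

  volgroup vg0 --noformat
  logvol / --vgname=vg0 --name=root --useexisting
)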


Details
=======

There are four disks in the system, and I will add three more.  The
intention is to have a mirror of three disks, for six total, plus one
spare on one half.  So:

md0 is a raid1 (mirror) of a small partition on each of the four disks.
  mdadm --create /dev/md0 --level=raid1 --raid-devices=4 /dev/sd[abcd]1

md1 is a raid5 of the remaining portion of three of the disks, with the
fourth as a spare.
  mdadm --create /dev/md1 --level=raid5 --raid-devices=3 --spare-devices=1 --assume-clean /dev/sd[abcd]2

md4 is a raid1 (mirror) of md0 and "missing"; it is degraded because the
disks for its second half are not installed yet.
  mdadm --create /dev/md4 --level=raid1 --raid-devices=2 /dev/md0 missing

md5 is a raid1 (mirror) of md1 and "missing", likewise degraded for now.
  mdadm --create /dev/md5 --level=raid1 --raid-devices=2 /dev/md1 missing
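
All four arrays assemble and look healthy, apart from the expected
degraded state of md4 and md5:

  cat /proc/mdstat          # lists md0, md1, md4, md5
  mdadm --detail /dev/md5   # reports "clean, degraded"

When the extra disks arrive, I expect attaching the second halves is
just, e.g. (md2/md3 being hypothetical names for the new arrays):

  mdadm /dev/md4 --add /dev/md2
  mdadm /dev/md5 --add /dev/md3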

I configured my lvm volumes on md5, and as I mentioned, anaconda does
see the members.

  lvm> pvcreate /dev/md5
  lvm> vgcreate -s 32m vg0 /dev/md5
  lvm> lvcreate -L 1024m -n root vg0 ; ...
  lvm> vgchange -a y vg0
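
A quick sanity check from the same lvm shell agrees everything is in
place:

  lvm> pvs   # /dev/md5 listed as the PV in vg0
  lvm> vgs   # vg0 active, with the expected size
  lvm> lvs   # root (and the rest) present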


As an aside, anaconda displays md0 and md1 in the list of RAID volumes
(both as type "foreign"), but *not* md4 and md5 -- even though they are
just normal mirror RAIDs.  Is this because they are degraded?

I suspect anaconda lists the LVM members only because it notices which
VGs are active; it doesn't believe md5 contains a physical volume
suitable for LVM use.  (In fact, I don't think anaconda believes there
are any physical volumes for LVM on the system at all and, as above, it
doesn't show md5 at all.)

If I cannot get anaconda to cooperate, I'll install onto a raid5 array
on temporary disks, then move the entire system into the proper
nested-RAID5/RAID1/LVM setup.
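
The move itself would be something along these lines (only a sketch; the
mount points are illustrative):

  rsync -aHAXx /mnt/temp/ /mnt/target/   # copy the installed system over
  # then adjust /etc/fstab (the labels should carry over), rebuild the
  # initrd so it assembles the md arrays, and reinstall grub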

Thanks for any suggestions.


Regards,
Richard

