I've installed FC6 on an old machine I had. I used two ~1.5GB IDE
drives I had lying around to install the O/S; that was barely enough
space. When I tried running yum update afterwards, I didn't have enough
disk space.
It seems that by default FC6 creates a standalone boot partition, plus a
single logical volume that spans the physical volume(s) and contains the
entire O/S.
The install spanned both /dev/hda and /dev/hdb, with hda1 containing the
boot partition, hda2 the beginning of the install, and hdb1 the rest of
it. sfdisk -l looked like this:
Disk /dev/hda: 3148 cylinders, 16 heads, 63 sectors/track
Warning: The partition table looks like it was made
for C/H/S=*/255/63 (instead of 3148/16/63).
For this listing I'll assume that geometry.
Units = cylinders of 8225280 bytes, blocks of 1024 bytes, counting from 0

   Device Boot Start   End  #cyls   #blocks  Id  System
/dev/hda1   *     0+    12    13-    104391  83  Linux
/dev/hda2        13    196   184    1477980  8e  Linux LVM
/dev/hda3         0      -     0          0   0  Empty
/dev/hda4         0      -     0          0   0  Empty

Disk /dev/hdb: 3308 cylinders, 16 heads, 63 sectors/track
Warning: The partition table looks like it was made
for C/H/S=*/64/63 (instead of 3308/16/63).
For this listing I'll assume that geometry.
Units = cylinders of 2064384 bytes, blocks of 1024 bytes, counting from 0

   Device Boot Start   End  #cyls   #blocks  Id  System
/dev/hdb1   *     0+   826   827-  1667200+  8e  Linux LVM
/dev/hdb2         0      -     0          0   0  Empty
/dev/hdb3         0      -     0          0   0  Empty
/dev/hdb4         0      -     0          0   0  Empty
... and /etc/fstab looked like this:
/dev/VolGroup00/LogVol00  /           ext3    defaults        1 1
LABEL=/boot               /boot       ext3    defaults        1 2
devpts                    /dev/pts    devpts  gid=5,mode=620  0 0
tmpfs                     /dev/shm    tmpfs   defaults        0 0
proc                      /proc       proc    defaults        0 0
sysfs                     /sys        sysfs   defaults        0 0
/dev/VolGroup00/LogVol01  swap        swap    defaults        0 0
/dev/sda1                 /mnt/sata1  ext3    defaults        0 0
/dev/sda2                 /mnt/sata2  ext3    defaults        0 0
So, I dug up a good 40GB drive that I had originally tried to use for
the install, but it didn't want to boot (long story), and popped it in.
It was completely blank and unpartitioned.
Following the instructions at section 13.5.2 of
http://tldp.org/HOWTO/LVM-HOWTO/removeadisk.html I did a pvcreate and a
vgextend against the new drive (hdd), and then a pvmove to move the
contents from hdb.
- /usr/sbin/pvcreate /dev/hdd
- /usr/sbin/vgextend VolGroup00 /dev/hdd
- /usr/sbin/pvmove -v /dev/hdb1 /dev/hdd
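As a sanity check at this point, my understanding is the LVM2 reporting
tools should now show the free extents sitting on hdd. The output below
is hypothetical (the shape I'd expect, not captured from my box), with a
quick awk tally of the free column:

```shell
# Hypothetical output of: pvs --noheadings -o pv_name,pv_free --units m
# (invented numbers, just the shape I'd expect after the pvmove)
pvs_out='  /dev/hda2      0.00m
  /dev/hdd   38144.00m'

# Sum the free-space column, stripping the trailing unit letter
echo "$pvs_out" | awk '{sub(/m$/, "", $2); total += $2}
                       END {printf "free: %.0f MiB\n", total}'
# prints: free: 38144 MiB
```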
Everything went as expected. I then performed a vgreduce to take
hdb out of the volume group:
- /usr/sbin/vgreduce VolGroup00 /dev/hdb1
That seemed to go fine as well. So I tried taking a look to see if I
had accomplished what I thought I had.
system-config-lvm gave me the impression that the data move from hdb1
worked: hdd now shows up, hdb is gone, and VolGroup00 appears intact.
The funny thing is that sfdisk -l shows:
Disk /dev/hda: 3148 cylinders, 16 heads, 63 sectors/track
Warning: The partition table looks like it was made
for C/H/S=*/255/63 (instead of 3148/16/63).
For this listing I'll assume that geometry.
Units = cylinders of 8225280 bytes, blocks of 1024 bytes, counting from 0

   Device Boot Start   End  #cyls   #blocks  Id  System
/dev/hda1   *     0+    12    13-    104391  83  Linux
/dev/hda2        13    196   184    1477980  8e  Linux LVM
/dev/hda3         0      -     0          0   0  Empty
/dev/hda4         0      -     0          0   0  Empty

Disk /dev/hdb: 3308 cylinders, 16 heads, 63 sectors/track
Warning: The partition table looks like it was made
for C/H/S=*/64/63 (instead of 3308/16/63).
For this listing I'll assume that geometry.
Units = cylinders of 2064384 bytes, blocks of 1024 bytes, counting from 0

   Device Boot Start   End  #cyls   #blocks  Id  System
/dev/hdb1   *     0+   826   827-  1667200+  8e  Linux LVM
/dev/hdb2         0      -     0          0   0  Empty
/dev/hdb3         0      -     0          0   0  Empty
/dev/hdb4         0      -     0          0   0  Empty

Disk /dev/hdd: 77545 cylinders, 16 heads, 63 sectors/track

sfdisk: ERROR: sector 0 does not have an msdos signature
 /dev/hdd: unrecognized partition table type
No partitions found
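My guess, which I'd love to have confirmed: pvcreate on a bare drive
writes only an LVM2 label (the string LABELONE, normally in the second
512-byte sector) and no MBR at all, which would make the sfdisk
complaint cosmetic. A simulation of scanning for that label, using a
scratch file standing in for /dev/hdd since I didn't want to poke the
live disk:

```shell
# Simulate an LVM2 label: the magic string LABELONE at sector 1 of a
# scratch file standing in for a whole-disk PV like /dev/hdd
printf 'LABELONE' | dd of=/tmp/fakepv bs=512 seek=1 conv=notrunc 2>/dev/null

# Scan the first four sectors for the label, as one could on the real PV
dd if=/tmp/fakepv bs=512 count=4 2>/dev/null | strings | grep -o 'LABELONE'
# prints: LABELONE
```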
The error was a little surprising, since I'd read that this could be
done on an unpartitioned drive. Also, df -h shows:
Filesystem                       Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00  2.0G  1.6G  263M  87% /
/dev/hda1                         99M   11M   83M  12% /boot
tmpfs                            236M     0  236M   0% /dev/shm
This seems to reflect the absence of hdb, with hdb1's contents pvmoved
over to hdd.
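One thing I keep reminding myself (please correct me if wrong): df
reports filesystem sizes, not volume group capacity, so nothing there
should change until the LV and filesystem are actually grown. Some
back-of-the-envelope math for what the VG should now hold, from the
sfdisk figures above (my own arithmetic, not tool output):

```shell
# df shows filesystem size, not VG capacity; rough VG expectation in MiB
# from the sfdisk figures: hda2's block count plus hdd's C/H/S capacity
hda2_mib=$((1477980 / 1024))                      # hda2 PV, ~1443 MiB
hdd_mib=$((77545 * 16 * 63 * 512 / 1024 / 1024))  # whole-disk PV, ~38166 MiB
echo "expected VG capacity: $((hda2_mib + hdd_mib)) MiB"
# prints: expected VG capacity: 39609 MiB
```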
Since I didn't see any extra space available, I did a:
- /usr/sbin/lvextend -L -1G /dev/VolGroup00/LogVol00
This apparently did nothing. So now I have confusion and questions:
- Can I REALLY do a pvmove onto a 'new' unpartitioned drive?
- Why is hdd showing an error with sfdisk -l?
- lvextend didn't seem to work; did I miss something?
- Have I made a complete mess, and if so, is there a way to recover
without reinstalling?
- If I haven't, how do I allocate additional space dynamically from hdd
to the volume group so I can yum update?
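While writing this up I started to second-guess the lvextend invocation:
if I'm reading the man page right, -L -1G asks for a SHRINK by 1 GiB
(which lvextend should refuse), and growing would be -L +1G followed by
a filesystem resize. The extent math for such a grow, assuming
anaconda's default 32 MiB physical extents (unverified on my box):

```shell
# If -L +1G is the right spelling, the grow would then be something like:
#   lvextend -L +1G /dev/VolGroup00/LogVol00
#   resize2fs /dev/VolGroup00/LogVol00
# Requests round to whole physical extents; with 32 MiB PEs (the anaconda
# default, I believe), a 1 GiB grow comes out to this many extents:
pe_mib=32
grow_mib=1024
echo "$((grow_mib / pe_mib)) extents"
# prints: 32 extents
```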
My box is still alive, so I haven't rebooted yet, just in case I horked
something up. Any comments would be welcome, and likely educational.
Thanks in advance.