On Tue, Nov 15, 2005 at 03:50:00PM +0000, Craig McLean wrote:
> Gotcha. Hopefully once the filesystem is resized down and the lvreduce
> is done, moving the physical partition boundary will not destroy any
> allocated PEs, good call on the vgcfgbackup/restore. I wouldn't know
> where to start working out the new PE count for the restore though...
> Will vgdisplay show me the new PE count after the resize?

Use "lvdisplay -m" to get a map of the segments; e.g., one of my
logical volumes shows:

  --- Logical volume ---
  LV Name                /dev/extra/extra_disk
  VG Name                extra
  LV UUID                cTvMfD-XTIT-NQhs-vsnO-sgDq-ltXA-jVf0Pg
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                56.00 GB
  Current LE             56
  Segments               2
  Allocation             inherit
  Read ahead sectors     0
  Block device           253:1

  --- Segments ---
  Logical extent 0 to 7:
    Type                linear
    Physical volume     /dev/md7
    Physical extents    130 to 137

  Logical extent 8 to 55:
    Type                linear
    Physical volume     /dev/md7
    Physical extents    144 to 191

This corresponds to the following metadata in the backup file created
by vgcfgbackup; I've elided the other logical volumes for brevity:

extra {
	id = "vh30BJ-06Yt-uJTS-O5S4-ZyZc-r9vQ-Hfd7IU"
	seqno = 7
	status = ["RESIZEABLE", "READ", "WRITE"]
	extent_size = 2097152		# 1024 Megabytes
	max_lv = 0
	max_pv = 0

	physical_volumes {

		pv0 {
			id = "jNBJ7Y-wZ4r-IPyn-KLkP-xjdU-NRXu-f2yUBu"
			device = "/dev/md7"	# Hint only

			status = ["ALLOCATABLE"]
			pe_start = 384
			pe_count = 224	# 224 Gigabytes
		}
	}

	logical_volumes {

		extra_disk {
			id = "cTvMfD-XTIT-NQhs-vsnO-sgDq-ltXA-jVf0Pg"
			status = ["READ", "WRITE", "VISIBLE"]
			segment_count = 2

			segment1 {
				start_extent = 0
				extent_count = 8	# 8 Gigabytes

				type = "striped"
				stripe_count = 1	# linear

				stripes = [
					"pv0", 130
				]
			}
			segment2 {
				start_extent = 8
				extent_count = 48	# 48 Gigabytes

				type = "striped"
				stripe_count = 1	# linear

				stripes = [
					"pv0", 144
				]
			}
		}
	}
}

So to hack this up manually, you'd have to first verify that no segment
extends beyond the new end of the physical volume, and then reduce the
pe_count in the meta-data.

Best of luck,

	Bill Rugolsky
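
P.S. That "no segment extends beyond the new end" check can be sketched
in shell. The start/extent pairs below are read off the lvdisplay -m
output for my extra_disk LV; substitute your own segment map, of course:

```shell
#!/bin/sh
# Find the highest physical extent in use on pv0, given the segment map
# above: segment1 starts at PE 130 with 8 extents, segment2 at PE 144
# with 48 extents.
max_end=0
for seg in "130 8" "144 48"; do
    set -- $seg
    end=$(( $1 + $2 ))       # one past the last PE used by this segment
    [ "$end" -gt "$max_end" ] && max_end=$end
done
echo "pe_count must stay >= $max_end"
```

Here that prints 192 (144 + 48), so any hand-edited pe_count below 192
would cut into allocated extents and vgcfgrestore would hand you back a
corrupt mapping.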