Re: How to resize RAID-1 partitions (mdraid)

Sam Varshavchik wrote:
> Roberto Ragusa writes:
> 
>> I never used parted, so if I had to achieve what you want I'd try
>> something completely different.
> 
> You have to use something to adjust the size of the ext3 filesystem.
> Growing the underlying mdraid device is not sufficient. Once the
> underlying block device is larger, the ext3 filesystem needs to be
> grown, else it'll remain at its logical size, with no benefit.
> 
> I don't think that mdadm --grow adjusts the ext3 metadata on the
> partition to reflect its new size. Anything can be on a RAID block
> device, not just ext3. Something still needs to logically resize the
> ext3 filesystem.

Right. Partition expansion and filesystem expansion are two different
things.

>> Boot with some sort of rescue disk so you are not running from the disks.
>>
>> Break the RAID eliminating the sdb partitions to be modified:
>> (md1 and md2 is my guess at the names)
>>
>> mdadm /dev/md1 --fail /dev/sdb1
>> mdadm /dev/md2 --fail /dev/sdb2
>> mdadm /dev/md1 --remove /dev/sdb1
>> mdadm /dev/md2 --remove /dev/sdb2
>> mdadm /dev/md1 --grow -n 1
>> mdadm /dev/md2 --grow -n 1
> 
> Why do you need to --grow here?

If you do not grow to 1, you are left with a degraded 2-disk array running
on 1 disk. It should not be a big problem, but, for example, the Fedora
rescue disks refuse to assemble degraded RAIDs IIRC. A degraded RAID is
also implicitly "searching" for the missing disk when it gets assembled,
and we do not want it to try to acquire the sdb partitions.
So, maybe not strictly necessary, but I would tell md "you only have one
disk" explicitly, as in the sketch below.
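
Something like this (just a restatement of the commands above with long
options; mdadm may also insist on --force when you shrink a RAID1 to a
single device):

mdadm --grow /dev/md1 --raid-devices=1 --force
mdadm --grow /dev/md2 --raid-devices=1 --force

# verify that the arrays are no longer reported as degraded
cat /proc/mdstat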

>> So now md1 and md2 are 1-disk RAID1 arrays, while md5 is still running
>> on two disks.
>>
>> You now fdisk /dev/sdb: delete all the partitions and immediately
>> recreate them with the new sizes. Be sure that sdb4 and sdb5 are
>> recreated with _exactly_ the same positions they had before. Do not use
>> "fd" as type for sdb1 and sdb2,
> 
> Why not?

This is another safety step, to keep the new partitions from being pulled
into the md1 and md2 RAIDs. Note that sdb5 (after destruction and
recreation) is still magically inside md5 because it happens to have type
fd and all the right metadata in its superblock. We do not want the same
thing to happen for sdb1 and sdb2, so we avoid fd. Consider that sdb1
could be recaptured by md1 if the superblock is at the beginning of the
partition; in that case it would be a big partition in a small RAID1 and
it would interfere with your subsequent steps (the md11 creation will fail).
Another bad thing that can happen is that the kernel sees two different
md1 RAIDs: in that case it will rename one to something else (md1_1 IIRC).
Which one gets renamed depends on disk scanning order and could even
change from one boot to the next; you will find yourself in a mess and
making a mistake becomes easy.
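
As an extra belt-and-braces measure (just a sketch, assuming sdb1 and sdb2
are not part of any running array at this point), you can also check for
and erase any stale md superblock on the recreated partitions:

# does mdadm still see old RAID metadata on them?
mdadm --examine /dev/sdb1
mdadm --examine /dev/sdb2

# if it does, wipe it so they cannot be auto-assembled into md1/md2
mdadm --zero-superblock /dev/sdb1
mdadm --zero-superblock /dev/sdb2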

>> for now. Now save the changes. The kernel will refuse to load the new
>> partitions in memory as your md5 is still using sda5 and sdb5.
>>
>> So you reboot.
>>
>> Now you have again md1 (degraded) md2 (degraded) and md5 (2 disks!)
>> available.
>> Create two new degraded RAID devices:
>>
>> mdadm --create /dev/md11 -l 1 -n 1 /dev/sdb1
>> mdadm --create /dev/md12 -l 1 -n 1 /dev/sdb2
>>
>> (mdadm will also want "-f" because n=1 is unusual)
>>
>> Now you can mkfs /dev/md11 and /dev/md12 and cp from /dev/md1 and
>> /dev/md2
> 
> I see what you're getting at here. I'm wondering whether it's even
> necessary to create temporary md devices and manually copy over. Can't I
> just create the partitions on the second drive, in their new position,
> add them to my existing RAID sets, and let the kernel sync up the data,
> then break the RAID volumes again, rebuild the partitions on the first
> drives, add them back into the RAID, sync the data back up to the
> primary, "--grow max", now that the first partition is larger on both
> drives, to have mdraid grow the RAID volume to its new bigger size, then
> finally parted to grow the ext3 filesystem on the larger RAID volume.

Yes, it should be OK.
I'm not used to managing asymmetric RAID1s, but it is doable.
I would just use resize2fs instead of parted, because of my lack of
familiarity with parted (I want to resize an ext2 fs, so I use
resize2fs :-) ).
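
Roughly, assuming md1 is the array you just enlarged on both disks, the
last two steps would be (a sketch, adjust the device names to your setup):

# let md1 use all the space now available on both members
mdadm --grow /dev/md1 --size=max

# grow the ext3 filesystem to fill the enlarged device
# (resize2fs wants a clean filesystem when run offline)
e2fsck -f /dev/md1
resize2fs /dev/md1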

My approach is overkill for your case; if you wanted md1 and md2 to become
three volumes or to merge into one you would need the cp step (sketched
below), but your case is simpler.
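
For completeness, the cp step I had in mind would look roughly like this
(just a sketch; the mount points and the ext3 mkfs are assumptions):

mkfs.ext3 /dev/md11
mkfs.ext3 /dev/md12

mkdir -p /mnt/old /mnt/new
mount /dev/md1 /mnt/old
mount /dev/md11 /mnt/new
# copy everything, preserving ownership, permissions and timestamps,
# then repeat for md2 -> md12
cp -a /mnt/old/. /mnt/new/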

> Then I have another server that does not have the luxury of a larger swap
> partition that can be conveniently shrunk. But I think I can see how
> that can be done using documented parted abilities: shrink the large
> ext3 partition using parted. Shrink its RAID device using mdadm --grow.
> Degrade the RAID by taking out the partition from one of the drives.
> Delete the physical partition and recreate it in its new position. Add it
> back into the RAID volume, let the kernel sync it, then repeat the same
> process with the partition on the other drive.

The RAID breaking trick is really nice.
You can also do most of this on a running system. What usually stops the
show is when you repartition and the kernel does not reload the partition
table. It happened to me recently in situations with no apparent
justification (nothing mounted, no RAID members, no PVs created...).

You just have to be careful not to mess things up because of partition
table reloading (or missed reloading) and automatic RAID assembly (which
could change device names and confuse you).
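
To check whether the kernel really picked up the new table (and, when
nothing is holding the disk, to force a re-read), a quick sketch:

# ask the kernel to re-read sdb's partition table
blockdev --rereadpt /dev/sdb    # or: partprobe /dev/sdb

# compare what the kernel currently sees with what fdisk wrote
cat /proc/partitions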

> Now, I just need to understand why you say not to use the fd partition
> type.

It was just a temporary step to keep the new partitions away from the RAID
assemblies.

Anyway, I think that your proposal to avoid md11 and md12 is better.


-- 
   Roberto Ragusa    mail at robertoragusa.it