Karl Larsen wrote:
The tricky parts are dealing with what happens if you make duplicate
filesystem labels and making the new drive bootable. But those can be
fixed with a rescue-mode boot.
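
In practice, fixing those two things from a rescue boot mostly comes down to relabeling the copied filesystem and putting grub on the new drive. A rough sketch, with made-up device names and label, run after a 'chroot /mnt/sysimage' in rescue mode:

# see what label the copied filesystem carries
e2label /dev/sdb5
# change it so it no longer clashes with the original
e2label /dev/sdb5 home_new
# reinstall the boot loader on the new drive
grub-install /dev/sdb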
Well guys, I just set up my first raid-1 system. Here is what it said:
[root@k5di etc]# cat /proc/mdstat
Personalities : [raid1]
md5 : active raid1 sda5[0]
5124608 blocks [2/1] [U_]
unused devices: <none>
[root@k5di etc]#
I did this over lunch. It took several fdisk efforts and lots of root
terminal work like mkfs and cp -a and such, but now all of F7 is on
/dev/sdb5 on this computer, and on the other hard drive I have /home at
/dev/sda5 through the raid-1 system. It appears to be working fine, and
to do the whole thing requires more of the same repeated steps plus
another step to get grub happy :-)
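
Roughly, the commands behind a setup like that look something like the sketch below; the device and mount point names are only illustrative (and /dev/sdX5 is a pure placeholder for whatever partition will eventually hold the mirror), so adjust to taste:

# create the array in degraded mode, with the second member still 'missing'
mdadm --create /dev/md5 --level=1 --raid-devices=2 /dev/sda5 missing
# make a filesystem on it and copy the existing /home across
mkfs.ext3 /dev/md5
mkdir -p /mnt/newhome
mount /dev/md5 /mnt/newhome
cp -a /home/. /mnt/newhome/
# later, once a matching partition on the other drive is free,
# add it to complete the mirror
mdadm /dev/md5 --add /dev/sdX5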
It can be harder than this, though. Consider what happens after you have
been doing this for a while and are re-using disks that already have
auto-detect md devices on them and/or filesystem labels that may
conflict with ones you are using. Some of the quirkier disk controllers
can also map a volume into the position where it was configured, even if
you move it to a different slot or to a different machine. You might pull
a disk from the 2nd position on one machine, move it to the first position
on a different machine, and add an unconfigured disk in the 2nd position,
only to have the 2nd drive come up as /dev/sda.
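
A quick way to spot that kind of leftover state on a re-used partition, and to clear it before putting the partition back into service (device names are again just placeholders):

# look for an old raid superblock left over from a previous array
mdadm --examine /dev/sdb5
# check whether its filesystem label collides with one already in use
e2label /dev/sdb5
# if the partition really is free to re-use, wipe the stale raid metadata
mdadm --zero-superblock /dev/sdb5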
But, as long as your new drive hasn't been used, an 'fdisk -l' will show
you which one does not have any partitions.
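
For example, something like this (run as root) makes the empty one easy to pick out:

# with no arguments, 'fdisk -l' lists every disk it finds;
# the unused drive is the one reported without any partition entries
fdisk -l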
--
Les Mikesell
lesmikesell@xxxxxxxxx