modifying degraded raid 1 then re-adding other members is bad

Assume I have a fully-functional raid 1 between two disks, one
hot-pluggable and the other fixed.

If I unplug the hot-pluggable disk and reboot, the array will come up
degraded, as intended.
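
For reference, this is roughly what the degraded state looks like in
/proc/mdstat (device names and sizes here are placeholders, not taken
from the actual machine):

    $ cat /proc/mdstat
    Personalities : [raid1]
    md0 : active raid1 sda2[0]
          10485696 blocks [2/1] [U_]

    unused devices: <none>

The [2/1] [U_] markers show two expected members with only one active.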

If I then modify a lot of the data in the raid device (say it's my
root fs and I'm running daily Fedora development updates :-), which
writes only to the fixed disk, and then plug the hot-pluggable disk
back in and re-add its members, the array appears to come back up
without resyncing, and, well, major filesystem corruption ensues.
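
In mdadm terms, the sequence is roughly the following sketch
(/dev/md0, /dev/sda2 and /dev/sdb2 are placeholder names, with sdb
standing in for the hot-pluggable disk):

    # after the reboot, md0 runs degraded on the fixed disk only
    mdadm --detail /dev/md0

    # ... heavy writes to /dev/md0 land only on /dev/sda2 ...

    # hot-pluggable disk returns; re-add its member partition
    mdadm /dev/md0 --re-add /dev/sdb2

    # expected: a resync onto the stale /dev/sdb2
    # observed: the member is accepted with no resync at all
    cat /proc/mdstat

If I understand md correctly, a re-added member should only skip the
resync when a write-intent bitmap says its blocks are clean; without
a bitmap, a full resync is the expected behavior.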

Is this a known issue, or should I try to gather more info about it?

This happened with 2.6.18-rc3-git[367] (not sure which), plus Fedora
development patches.

-- 
Alexandre Oliva         http://www.lsd.ic.unicamp.br/~oliva/
Secretary for FSF Latin America        http://www.fsfla.org/
Red Hat Compiler Engineer   aoliva@{redhat.com, gcc.gnu.org}
Free Software Evangelist  oliva@{lsd.ic.unicamp.br, gnu.org}