Re: FC4 RAID5 failed

On Sat, 2006-01-28 at 16:05 +0800, 毛睿 wrote:
> I got a strange problem with FC4 software RAID5.
> 
> I have 2 RAID5 arrays in my FC4 box. One contains 6
> partitions, /dev/hd[cdefgh]1; the other contains 8 partitions, /dev/hd[ab]3
> + /dev/hd[cdefgh]2. Both worked fine before.
> 
> After I replaced one failed disk, a strange problem appeared. I
> removed the failed disk and added the new one. Syncing went fine and
> finished after some hours, and /proc/mdstat looked normal. But after I
> rebooted the Linux box, both RAID5 arrays were in degraded mode again!
> The new disk had been kicked out! I never hit this problem with RH9. I
> tried many times and the result was always the same. I can manually
> stop/start the RAIDs, and the superblocks and /proc/mdstat are all in
> good condition. But whenever I reboot, the new disk gets kicked out
> again. I can guarantee the new disk is good. In /var/log/messages I
> didn't see any error messages during shutdown, and during the boot
> procedure the RAID start sequence didn't even check the failed disk.
> 
>  
> 
> Has anybody ever run into the same problem?

Two things.
When you re-added the new disk and partitioned it, did you remember to
set the partition type to fd (Linux RAID autodetect)?
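A quick way to check is something like the following (I'm assuming the
replacement disk is /dev/hdh here; substitute your actual device):

  fdisk -l /dev/hdh

The Id column for the RAID member partition should read "fd  Linux raid
autodetect". If it doesn't, change it with fdisk: run "fdisk /dev/hdh",
then t (change type), the partition number, fd, and w (write and quit).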
Second, if you did, what did the kernel log have to say about it?
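For instance, something like this should show the md autodetection
messages from the last boot (assuming your syslog goes to
/var/log/messages):

  dmesg | grep -i "md:"
  grep -i "md:" /var/log/messages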
You might want to try booting from the rescue CD and starting the RAIDs
manually (mdadm --assemble) in order to get a meaningful error
description.
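For example, from the rescue shell, something along these lines (using
the member partitions from your description; the md device names and
the /dev/hdh1 member are placeholders, so adjust them to your setup):

  mdadm --assemble /dev/md0 /dev/hd[cdefgh]1
  mdadm --assemble /dev/md1 /dev/hd[ab]3 /dev/hd[cdefgh]2
  mdadm --examine /dev/hdh1
  cat /proc/mdstat

Any complaint mdadm or --examine prints about a missing or stale
superblock on the new disk should point at the real cause.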

Gilboa

