FC4 RAID5 failed

I've run into a strange problem with FC4 software RAID5.

I have two RAID5 arrays in my FC4 box. One contains 6 partitions, /dev/hd[cdefgh]1; the other contains 8 partitions, /dev/hd[ab]3 + /dev/hd[cdefgh]2. Both worked fine before.
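
Roughly, I built the two arrays with something like the commands below (the md0/md1 device names are just for illustration):

    mdadm --create /dev/md0 --level=5 --raid-devices=6 /dev/hd[cdefgh]1
    mdadm --create /dev/md1 --level=5 --raid-devices=8 /dev/hda3 /dev/hdb3 /dev/hd[cdefgh]2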

After I replaced one failed disk, a strange problem appeared. I removed the failed disk and added the new one. Syncing went fine and finished after a few hours, and /proc/mdstat looked normal. But after I rebooted the Linux box, both RAID5 arrays were in degraded mode again: the new disk had been kicked out! I never ran into this with RH9. I tried many times, with the same result every time. I can manually stop/start the RAIDs, and the superblocks and /proc/mdstat are all in good condition, but whenever I reboot, the new disk gets kicked out again. I can guarantee the new disk is good. In /var/log/messages I didn't see any error messages during shutdown, and during the boot procedure the RAID startup didn't even check the replaced disk.
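
The replacement went roughly like this (shown with /dev/hdc as the replaced disk and md0/md1 as the arrays; those names are just for illustration, not exact):

    # fail and remove the old disk's partitions from both arrays
    mdadm /dev/md0 --fail /dev/hdc1 --remove /dev/hdc1
    mdadm /dev/md1 --fail /dev/hdc2 --remove /dev/hdc2
    # after swapping and repartitioning the drive, add the new partitions back
    mdadm /dev/md0 --add /dev/hdc1
    mdadm /dev/md1 --add /dev/hdc2
    # resync progress and post-resync state both look fine
    cat /proc/mdstat
    mdadm --examine /dev/hdc1 /dev/hdc2
    mdadm --detail /dev/md0 /dev/md1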

 

Has anybody ever run into the same problem?








