Help fixing corrupted RAID5 partition

Hello,
I have a machine running software RAID5 using:
/dev/hde1
/dev/hdg1
/dev/hdi1

as the RAID partitions. The operating system for the machine is on a separate 
drive, /dev/hda.

After a power failure, the RAID failed to start. During boot, I get the 
following messages:

---
Starting up RAID devices : raid5 : failed to run raid set md0 
Checking filesystems
/boot : clean, ... 
fsck.ext3 : /dev/md0 :
The superblock could not be read or does not describe a correct ext2 
filesystem. If the device is valid and it really contains an ext2 filesystem 
(and not swap or ufs or something else), then the superblock is corrupt, and 
you might try running e2fsck with an alternate superblock :
e2fsck -b 8193 <device> 

Invalid argument while trying to open /dev/md0 	[Failed]
---
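
(As I read those messages, the fsck error looks secondary: md0 never started, so 
/dev/md0 cannot be opened at all, which is presumably why e2fsck reports 
"Invalid argument" rather than a real read error. The obvious first checks seem 
to be what the kernel itself says about the array; as far as I know these are 
all read-only:

#> cat /proc/mdstat
#> dmesg | grep -i raid
#> fdisk -l

The fdisk listing is just to confirm that hde1, hdg1 and hdi1 are still there 
and still marked type fd, Linux raid autodetect.)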

Then it asks for the root password for maintenance. After entering the root 
password, I tried running:

#> e2fsck -b 8193 /dev/md0

and I got the same message about a corrupt superblock. Running 
"raidstart /dev/md0" in maintenance mode also reports that the raid set fails 
to start. Does this mean I have lost all the data in the array ? Is there any 
hope of recovering some of it ?
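
Before trying anything that writes to the disks, I have been reading man pages. 
Two things I noticed, assuming the filesystem was created with the mke2fs 
defaults: -b 8193 is only the right backup location for a filesystem with 1 KB 
blocks (with 4 KB blocks the first backup superblock should be at 32768 
instead), and mke2fs run with -n only prints what it would do, including the 
backup superblock locations, without writing anything. So once md0 can actually 
be started I was planning to try:

#> mke2fs -n /dev/md0
#> e2fsck -b 32768 /dev/md0

Of course, none of that helps while the raid set itself refuses to run.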

I attach my /etc/raidtab below. Any help is greatly appreciated. 

Thanks
Reuben D. Budiardja

------------ /etc/raidtab------------
raiddev		    /dev/md0
raid-level		    5
nr-raid-disks		    3
chunk-size		    64k
persistent-superblock	    1
nr-spare-disks		    0
    device	    /dev/hde1
    raid-disk     0
    device	    /dev/hdg1
    raid-disk     1
    device	    /dev/hdi1
    raid-disk     2
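
One more thing, about the raidtab above: I have read that, since the array uses 
persistent superblocks, running mkraid with --force against this same raidtab 
is supposed to rewrite only the RAID superblocks and leave the data blocks 
alone, after which the array could be started and fsck'd, roughly:

#> mkraid --force /dev/md0    (mkraid warns and refuses unless forced, as I understand it)
#> raidstart /dev/md0
#> e2fsck /dev/md0

I am holding off on that until someone can confirm it is safe here and that the 
device order in the raidtab really matches the original order, since I assume 
re-making the superblocks with the disks in the wrong order would make things 
much worse. Is this the right recovery path, or is there something less drastic 
to try first ?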


-- 
Reuben D. Budiardja
Dept. Physics and Astronomy
University of Tennessee, Knoxville, TN

