The RAID was constructed under FC1 with this in /etc/raidtab:
raiddev /dev/md2
        raid-level              5
        nr-raid-disks           4
        chunk-size              64
        persistent-superblock   1
        nr-spare-disks          0
        device                  /dev/hda4
        raid-disk               0
        device                  /dev/hdb4
        raid-disk               1
        device                  /dev/hdc1
        raid-disk               2
        device                  /dev/hdd1
        raid-disk               3
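As a cross-check (just a sketch; the device names are taken from the raidtab
above), mdadm can dump the on-disk superblock of each surviving member so the
recorded roles can be compared with that ordering:

  mdadm --examine /dev/hda4 /dev/hdb4 /dev/hdc1

The dead /dev/hdd1 is left out, since --examine on it will most likely just
error out.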
ed
On Fri, 23 Sep 2005, Gilboa Davara wrote:
How many drives were actually members of the raid?
3 + 1 or 2 + 1 + 1?
Please post the raid configuration.
Gilboa
On Thu, 2005-09-22 at 12:46 -0500, Ed K. wrote:
I had a computer running FC1 for a very long while. All 4 IDE drives are
part of a RAID-5 array.
Then drive #4 crapped out, and took down the other drive (#3) on the
same IDE bus... I would like to turn the RAID back on, but when I do, the
RAID subsystem thinks #3 is a spare.
Q: How can I turn the RAID back on in degraded mode without the #3 drive
being used as a spare?
...waiting for some pointers so I can sleep tonight; any help would be
most appreciated.
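For what it's worth, a minimal sketch of the usual first attempt, assuming the
three surviving members are the ones named in the raidtab (/dev/hda4,
/dev/hdb4, /dev/hdc1): --force lets mdadm accept a member whose event count
has fallen behind instead of demoting it, and --run starts the array even with
one slot missing.

  mdadm --assemble --force --run /dev/md2 /dev/hda4 /dev/hdb4 /dev/hdc1

If it comes up, /proc/mdstat should show md2 active with 3 of 4 devices, and
the dead drive can be replaced and re-added afterwards.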
ed
p.s.:
I've booted the system into Knoppix v3.9 now...
here are the commands:
root@1[~]# mdadm --assemble /dev/md2 -R /dev/hda4 /dev/hdb4 /dev/hdc1
mdadm: failed to RUN_ARRAY /dev/md2: Invalid argument
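When RUN_ARRAY fails like this, the md driver usually logs the real reason
(here presumably that it will not start a raid5 with only 2 of the 4 members
active), so a quick look at the kernel log may help:

  dmesg | tail -20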
root@1[~]# mdadm -D /dev/md2
/dev/md2:
        Version : 00.90.01
  Creation Time : Mon Feb 23 21:13:37 2004
     Raid Level : raid5
    Device Size : 117185984 (111.76 GiB 120.00 GB)
   Raid Devices : 4
  Total Devices : 3
Preferred Minor : 2
    Persistence : Superblock is persistent

    Update Time : Thu Sep 22 14:07:32 2005
          State : active, degraded
 Active Devices : 2
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 1

         Layout : left-asymmetric
     Chunk Size : 64K

    Number   Major   Minor   RaidDevice State
       0       3        4        0      active sync
       1       3       68        1      active sync
       2       0        0        -      removed
       3       0        0        -      removed
       4      22        1        -      spare
root@1[~]# mdadm -S /dev/md2
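Reading that output, the superblock on hdc1 now records it as a spare rather
than as raid-disk 2, which is why the array refuses to start: only 2 of the 4
data slots are active. If the forced assemble sketched above still leaves hdc1
marked as a spare, the last resort is to rewrite the superblocks by
re-creating the array with exactly the same geometry and the dead disk marked
as missing. This is only an outline built from the raidtab and the -D output
above, not something tested on this box; --assume-clean keeps mdadm from
starting an initial resync, but chunk size, layout, metadata version and
device order all have to match the old array, otherwise the data is lost:

  mdadm --create /dev/md2 --level=5 --raid-devices=4 \
        --chunk=64 --layout=left-asymmetric --metadata=0.90 \
        --assume-clean /dev/hda4 /dev/hdb4 /dev/hdc1 missing

Check the filesystem read-only afterwards (fsck -n) before trusting the
result.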