Folks, I'm now using Fedora Core 6 and I'd dearly like to use an existing RAID0 that was created when the system was running SuSE 10.0. In fact, SuSE is still installed, so I can compare how the two systems handle the RAID0.

Many things look similar between the two. The output of dmraid with several switches such as '-r' and '-l' is the same on FC6. There is also an md0 device on FC6, and there are even some device-mapper nodes:

    crw------- 1 root root 10, 63 Mar 19 16:18 control
    brw-rw---- 1 root disk 253, 0 Mar 19 16:18 nvidia_ffcdjdfc
    brw-rw---- 1 root disk 253, 1 Mar 19 16:18 nvidia_ffcdjdfcp1

So far so good, I think.

I looked at the FC6 rc.sysinit boot script to see how it sets up a RAID, and after some script tests around the resolve_dm_name() and get_numeric_dev() functions I found that the following command is executed on this system:

    /sbin/dmraid -ay -i -p nvidia_ffcdjdfc

Moreover, the output of that command on FC6 looks very positive:

    RAID set "nvidia_ffcdjdfc" already active

But whereas the following command mounts the single-partition RAID0 under SuSE, it fails on FC6 (I've created the /raid mount point on FC6):

    mount -t ext3 /dev/md0 /raid/

I've also tried some variations such as:

    mount -t ext3 /dev/mapper/nvidia_ffcdjdfcp1 /raid/
    mount /dev/mapper/nvidia_ffcdjdfcp1 /raid/

But they all fail with:

    mount: wrong fs type, bad option, bad superblock [...]

It really looks like both systems perceive the RAID0 in a very similar way. What could be missing from the procedure I use on FC6 that prevents me from actually mounting the RAID0? I don't know much about RAID, and I'm careful not to try anything that could damage the data stored on the RAID0.

Any comments/suggestions welcome - Thanks!

Al
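P.S. In case it helps, here are a few diagnostic commands I could run on FC6 and post the output of; as far as I know these are all read-only and shouldn't touch the data on the array (the device names are the ones from /dev/mapper above):

    # Show the device-mapper tables behind the RAID set and its partition
    dmsetup table

    # Ask dmraid which sets it considers active
    dmraid -s

    # Check whether md0 is actually assembled (it may be unrelated to the dmraid set)
    cat /proc/mdstat

    # Probe what is visible on the partition device
    file -s /dev/mapper/nvidia_ffcdjdfcp1
    blkid /dev/mapper/nvidia_ffcdjdfcp1

    # Read the ext3 superblock without writing anything
    tune2fs -l /dev/mapper/nvidia_ffcdjdfcp1

    # Look for kernel errors from the failed mount attempts
    dmesg | tail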