Re: FYI: RAID5 unusably unstable through 2.6.14


 



Martin Drab wrote:
On Thu, 2 Feb 2006, Bill Davidsen wrote:

Just to state clearly up front: I've already solved the problem by low-level formatting the entire disk that the inconsistent array in question was part of.

So now everything is back to normal, and unfortunately I will not be able to do any more tests on the device in its non-working state.

I mentioned this problem here just to let you know that Linux behaves problematically (and, IMO, incorrectly) in such circumstances, and perhaps it will prompt you to consider such situations during further improvements and development of the block device layer (or wherever the problem may actually originate).


It looks like the problem is in that controller card and its driver. Was this a proprietary closed-source driver? Linux is perfectly happy to access the rest of a disk when some parts of it have gone bad; people do this all the time. It appears your RAID controller decided to take the entire virtual disk that it presents to the kernel offline because it detected errors.
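To make the distinction concrete: a plain block-device scan under Linux just reports the unreadable sectors and carries on. Here's a minimal sketch of that (my own illustration, nothing from your setup; the device path and the 512-byte sector size are assumptions):

#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>

#define SECTOR 512  /* assumed logical sector size */

int main(int argc, char **argv)
{
    const char *dev = argc > 1 ? argv[1] : "/dev/sda";  /* hypothetical target */
    int fd = open(dev, O_RDONLY);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    unsigned char buf[SECTOR];
    off_t sector = 0;
    long bad = 0;

    for (;;) {
        ssize_t n = pread(fd, buf, SECTOR, sector * SECTOR);
        if (n == 0)
            break;                      /* end of device */
        if (n < 0) {
            bad++;                      /* unreadable sector: note it, move on */
            fprintf(stderr, "sector %lld: %s\n",
                    (long long)sector, strerror(errno));
        }
        sector++;
    }

    printf("scanned %lld sectors, %ld unreadable\n", (long long)sector, bad);
    close(fd);
    return 0;
}

A failed read on a bad spot just comes back as EIO for that sector; the kernel doesn't take the whole device away. Your controller's firmware apparently does.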

<snip>
The 0,0,0 is /dev/sda. And even though this output is from now, after low-level formatting the previously inconsistent disk, the indications back then were exactly the same. That is, everything looked as usual: both arrays were properly identified. But whenever I accessed the inconsistent one, i.e. /dev/sda, in any way (even just reading raw bytes; this has nothing to do with any filesystem), the error messages mentioned above appeared. I'm not sure what exactly was generating them, but I've CC'd Mark Salyzyn; maybe he can explain more.


How did you low-level format the drive? These days disk manufacturers ship drives already low-level formatted, and end users cannot perform a low-level format. The last time I remember being able to low-level format a drive was with MFM and RLL drives, prior to IDE. My guess is that what you actually did was simply write zeros to every sector of the disk, which replaced the corrupted data in the affected sectors with good data, effectively repairing them. Drives will usually fail reads of bad sectors, but when you write to such a sector, the drive will write it and then verify it, to see whether it is fine after being rewritten or whether the media is bad, in which case it remaps the sector to a spare.
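For illustration, here is a rough sketch of that rewrite-to-repair idea (my own; the device path and sector number are hypothetical, 512-byte sectors are assumed, and O_DIRECT is used so the write actually reaches the drive rather than the page cache):

#define _GNU_SOURCE
#define _FILE_OFFSET_BITS 64
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>

#define SECTOR 512  /* assumed logical sector size */

int main(int argc, char **argv)
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s <device> <sector>\n", argv[0]);
        return 1;
    }
    off_t sector = atoll(argv[2]);

    int fd = open(argv[1], O_WRONLY | O_DIRECT | O_SYNC);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    /* O_DIRECT needs a sector-aligned buffer */
    void *buf;
    if (posix_memalign(&buf, SECTOR, SECTOR)) {
        perror("posix_memalign");
        return 1;
    }
    memset(buf, 0, SECTOR);

    /* the drive verifies the write and remaps to a spare if the medium is bad */
    if (pwrite(fd, buf, SECTOR, sector * SECTOR) != SECTOR)
        perror("pwrite");
    else
        printf("rewrote sector %lld with zeros\n", (long long)sector);

    free(buf);
    close(fd);
    return 0;
}

If the drive did remap anything, it should show up afterwards in the SMART reallocated-sector count.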

