Mark Lord wrote:
Eric D. Mudama wrote:
Actually, it's possibly worse, since each failure in libata will
generate 3-4 retries. With existing ATA error recovery in the
drives, that's about 3 seconds per retry on average, or 12 seconds
per failure. Multiply that by the number of blocks past the error to
complete the request..
It really beats the alternative of a forced reboot
due to, say, superblock I/O failing because it happened
to get merged with an unrelated I/O which then failed..
Etc..
Definitely an improvement.
The number of retries is an entirely separate issue.
If we really care about it, then we should fix SD_MAX_RETRIES.
The current value of 5 is *way* too high. It should be zero or one.
Cheers
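
To put Eric's numbers above in concrete terms, here is a trivial
back-of-the-envelope calculation. The retry count and per-retry time are
the rough figures quoted in this thread, and the block count is purely a
made-up example, not anything read out of a driver:

/* Back-of-the-envelope illustration of the latency figures above. */
#include <stdio.h>

int main(void)
{
        const int retries_per_failure = 4;    /* libata retries, per Eric */
        const double secs_per_retry   = 3.0;  /* average drive-level recovery */
        const int blocks_past_error   = 32;   /* hypothetical remainder of a merged request */

        double per_block  = retries_per_failure * secs_per_retry;  /* ~12 s */
        double worst_case = per_block * blocks_past_error;

        printf("~%.0f s per failed block, ~%.0f s to finish the request\n",
               per_block, worst_case);
        return 0;
}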
I think that drives retry enough; we should leave the retry count at zero for
normal (non-removable) drives. Should this be a policy we can set via /sys,
like we do with the NCQ queue depth?
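
For illustration, here is a minimal user-space sketch of what such a knob
could look like, by analogy with the existing NCQ queue_depth attribute.
The queue_depth file is real; the max_retries file is hypothetical and does
not exist in the kernel today:

#include <stdio.h>

static int write_sysfs(const char *path, const char *val)
{
        FILE *f = fopen(path, "w");
        if (!f) {
                perror(path);
                return -1;
        }
        fputs(val, f);
        fclose(f);
        return 0;
}

int main(void)
{
        /* Existing knob: NCQ queue depth for sda. */
        write_sysfs("/sys/block/sda/device/queue_depth", "31");

        /* Hypothetical knob in the same style: cap sd-layer retries at 0. */
        write_sysfs("/sys/block/sda/device/max_retries", "0");
        return 0;
}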
We need to be able to layer things like MD on top of normal drive errors
in a way that produces a system with reasonable response times despite any
possible I/O error on a single component. Another case we deal with on a
regular basis is drive recovery. Errors need to be limited in scope to just
the impacted area and reported up to the application layer as quickly as
possible, so that you don't spend days watching a copy of a huge drive
(think 750GB or more) ;-)
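
As a rough illustration of that last point, here is a minimal skip-on-error
copy loop in the spirit of recovery tools like dd_rescue (not code taken
from any of them): on a read error it gives up on just that chunk and moves
on, so a bad area costs seconds rather than stalling the whole copy.

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <unistd.h>

#define CHUNK (64 * 1024)

int main(int argc, char **argv)
{
        if (argc != 3) {
                fprintf(stderr, "usage: %s <src> <dst>\n", argv[0]);
                return 1;
        }

        int in  = open(argv[1], O_RDONLY);
        int out = open(argv[2], O_WRONLY | O_CREAT, 0644);
        if (in < 0 || out < 0) {
                perror("open");
                return 1;
        }

        char *buf = malloc(CHUNK);
        off_t off = 0;

        for (;;) {
                ssize_t n = pread(in, buf, CHUNK, off);
                if (n == 0)
                        break;                  /* end of device/file */
                if (n < 0) {
                        /* Error confined to this chunk: report it and skip. */
                        fprintf(stderr, "read error at %lld: %s\n",
                                (long long)off, strerror(errno));
                        off += CHUNK;
                        continue;
                }
                if (pwrite(out, buf, n, off) != n) {
                        perror("write");
                        return 1;
                }
                off += n;
        }

        free(buf);
        close(in);
        close(out);
        return 0;
}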
ric