On Tue, 2007-01-30 at 22:20 -0500, Ric Wheeler wrote:
> Mark Lord wrote:
> > The number of retries is an entirely separate issue.
> > If we really care about it, then we should fix SD_MAX_RETRIES.
> >
> > The current value of 5 is *way* too high. It should be zero or one.
> >
> > Cheers
> >
> I think that drives retry enough internally, so we should leave
> retries at zero for normal (non-removable) drives. Should this be a
> policy we can set, like we do with NCQ queue depth via /sys?
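(Purely as a sketch of what such a knob might look like, not existing
kernel code: a per-device sysfs attribute in the style of queue_depth.
The attribute name, its placement, and the sanity cap are all invented
for illustration.)

	/*
	 * Hypothetical sketch: expose the sd retry count as a
	 * per-device sysfs attribute, modeled on how queue_depth
	 * is exposed today. Not actual kernel code.
	 */
	#include <linux/device.h>
	#include <linux/kernel.h>

	static unsigned int sd_max_retries = 5;	/* today's SD_MAX_RETRIES */

	static ssize_t max_retries_show(struct device *dev,
					struct device_attribute *attr,
					char *buf)
	{
		return snprintf(buf, PAGE_SIZE, "%u\n", sd_max_retries);
	}

	static ssize_t max_retries_store(struct device *dev,
					 struct device_attribute *attr,
					 const char *buf, size_t count)
	{
		unsigned long val = simple_strtoul(buf, NULL, 10);

		if (val > 5)	/* arbitrary sanity cap for the sketch */
			return -EINVAL;
		sd_max_retries = val;
		return count;
	}

	static DEVICE_ATTR(max_retries, S_IRUGO | S_IWUSR,
			   max_retries_show, max_retries_store);

Something like echo 0 > /sys/block/sda/device/max_retries would then do
it (that path is hypothetical too).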
I don't disagree that it should be settable. However, retries occur for
reasons other than failures inside the device. The most common ones are
unit attentions generated by other activity (a target reset, for
instance). The key to the problem is retrying only operations that are
genuinely retryable, which the mid-layer doesn't do a good job of today.
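As a minimal sketch of that distinction (illustrative only, not the
mid-layer's actual decision code), classification by sense key might
look like:

	/*
	 * Illustrative only: classify a failed command by its SCSI
	 * sense key. A unit attention reflects a state change, not
	 * damage, so the command can safely be reissued; a medium
	 * error means the sector itself is bad and retries just
	 * burn time.
	 */
	#include <scsi/scsi.h>	/* UNIT_ATTENTION, MEDIUM_ERROR, ... */

	static int cmd_is_retryable(unsigned char sense_key)
	{
		switch (sense_key) {
		case UNIT_ATTENTION:	/* e.g. after a target reset */
		case ABORTED_COMMAND:	/* killed by an external event */
			return 1;
		case MEDIUM_ERROR:	/* bad sector: retrying won't help */
		case HARDWARE_ERROR:
			return 0;
		default:
			return 0;	/* be conservative about the rest */
		}
	}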
> We need to be able to layer things like MD on top of normal drive
> errors in a way that produces a system with reasonable response time
> despite any possible IO error on a single component. Another case we
> handle on a regular basis is drive recovery. Errors need to be limited
> in scope to just the impacted area and dispatched up to the
> application layer as quickly as we can, so that you don't spend days
> watching a copy of a huge drive (think 750GB or more) ;-)
For the MD case, this is what REQ_FAILFAST is for.
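In the bio layer that flag is spelled BIO_RW_FAILFAST; a sketch of how
an MD-style submitter would set it (untested, assuming 2.6.20-era
names):

	/*
	 * Sketch: mark a bio fail-fast so the low-level driver reports
	 * the error immediately instead of retrying, letting MD
	 * redirect the I/O to another mirror promptly. Assumes
	 * 2.6.20-era flag names.
	 */
	#include <linux/bio.h>

	static void submit_failfast(struct bio *bio)
	{
		bio->bi_rw |= (1 << BIO_RW_FAILFAST);	/* no retries in the LLD */
		generic_make_request(bio);	/* hand off to the block layer */
	}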
James