Re: [PATCH] per-task-predictive-write-throttling-1

On Thu, Sep 15, 2005 at 10:04:38AM +0400, Nikita Danilov wrote:
> Why is that useful? Don't we want (in general) to cache as many dirty

It allows all tasks a chance to take advantage of the writeback
cache, not only the first one, as happens now.

> pages as possible in the hope that some of them will be re-dirtied (thus
> avoiding additional write) or truncated (avoiding write altogether)?

We could add an option to increase the per-task future_dirty count only
when we had to allocate a new pagecache page (i.e. on a cache miss);
that would leave the rewrite behaviour unchanged. Still, if we have a
huge write-hog we should reserve some cache for other people anyway, so
I'm unsure we should let rewriter-hogs use half the ram and leave
nothing for the other potential small writers. Accounting future_dirty
only on radix-tree cache misses would certainly still catch the bad
untar and cp behaviour just fine, so it might be good enough, and I'd
agree it's the more conservative approach.
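
Concretely, the miss-only accounting could be done at the pagecache
lookup, something like the below (a rough sketch only:
grab_cache_page_accounted() is a made-up helper, current->future_dirty
is the per-task counter from the patch, and the race between the lookup
and the allocation is ignored):

#include <linux/pagemap.h>
#include <linux/sched.h>

/*
 * Charge the per-task future_dirty counter only when the write had to
 * allocate a new pagecache page (a radix-tree miss).  Rewriting an
 * already-cached page stays free, preserving the re-dirty optimisation.
 */
static struct page *grab_cache_page_accounted(struct address_space *mapping,
					      unsigned long index)
{
	struct page *page;

	page = find_lock_page(mapping, index);	/* radix-tree hit? */
	if (page)
		return page;			/* rewrite: no charge */

	page = grab_cache_page(mapping, index);	/* allocates on a miss */
	if (page)
		current->future_dirty++;	/* charge only the miss */
	return page;
}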

Initially every task has access to the whole dirty cache: even "cp
/dev/zero ." starts out allocating half the ram. But if it keeps
writing that fast, its dirty cache allowance shrinks, to give other
tasks a chance to avoid blocking while writing.

If the write-hog becomes reasonable and slows down its writing, it can
then use more dirty cache again.
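
Something like the below is the idea (purely illustrative: the
->dirty_limit field, the halving/doubling policy and the interval are
invented for the example, not what the patch implements):

#include <linux/kernel.h>
#include <linux/sched.h>

#define DIRTY_FLOOR	64UL	/* pages: minimum per-task share */

/*
 * Called once per interval: a task that kept dirtying at full speed
 * sees its share halved (down to a floor) so other writers are not
 * starved; a task that slowed down earns its share back, up to the
 * global dirty limit.
 */
static void update_task_dirty_limit(struct task_struct *tsk,
				    unsigned long global_dirty_limit)
{
	if (tsk->future_dirty >= tsk->dirty_limit)
		tsk->dirty_limit = max(tsk->dirty_limit / 2, DIRTY_FLOOR);
	else
		tsk->dirty_limit = min(tsk->dirty_limit * 2,
				       global_dirty_limit);

	tsk->future_dirty = 0;	/* reset the counter for the next interval */
}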
