On Sun, Mar 19, 2006 at 10:53:46AM -0500, Jon Smirl wrote:
> In another thread someone mentioned that this problem may have
> something to do with the pools of memory used for sendfile. The
> readahead from sendfile goes into a moderately sized pool. When
> you get 100 of them going at once, the other threads flush the
> readahead data out of the pool before it can be used, triggering
> a thrashing seek storm. Is it true that sendfile data is read
> ahead into a fixed-size pool? If so, the readahead algorithms would
The pages are kept in the page cache, a pool which grows to take up all
of the free memory. E.g. the following command shows a system with a
331M cache pool:
% free -m
             total       used       free     shared    buffers     cached
Mem:           488        482          5          0          7        331
-/+ buffers/cache:        142        345
Swap:          127          0        127
That would be more than enough for the stock read-ahead to handle 100
concurrent readers: the stock maximum window is 128K per stream, so 100
readers pin at most ~12.8M of that pool.
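
For a quick sanity check on a given box, something like the following
works (userspace, nothing to do with the patch itself; the 128K figure
is VM_MAX_READAHEAD from 2.6, and the reader count is just an example).
It compares the Cached figure against the worst-case readahead
footprint:

#include <stdio.h>

int main(void)
{
	char line[128];
	long cached_kb = 0;
	long readers = 100, max_ra_kb = 128;	/* stock max window */
	FILE *f = fopen("/proc/meminfo", "r");

	if (!f)
		return 1;
	/* "Cached:" is the same pool free(1) reports as "cached" */
	while (fgets(line, sizeof(line), f))
		if (sscanf(line, "Cached: %ld kB", &cached_kb) == 1)
			break;
	fclose(f);

	printf("cache pool: %ld kB, worst-case readahead: %ld kB\n",
	       cached_kb, readers * max_ra_kb);
	return 0;
}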
> need to reduce the sendfile window sizes to stop the pool from
> thrashing.
Sure, that is the desired behavior. This patch provides exactly that feature :)
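
To sketch the idea (hypothetical, simplified code, not the patch
itself; all names below are made up for illustration), the window
adjustment boils down to shrinking on thrash and growing back
otherwise:

/*
 * Illustrative only: shrink the readahead window when a page was
 * evicted before the reader got to it, grow it back otherwise.
 * How "thrashed" gets detected is the hard part, and that is what
 * the actual patch addresses.
 */
struct ra_state {
	unsigned long ra_pages;		/* current window, in pages */
	unsigned long ra_pages_min;	/* floor, e.g. 1 page */
	unsigned long ra_pages_max;	/* stock maximum window */
};

void ra_adjust(struct ra_state *ra, int thrashed)
{
	if (thrashed) {
		/* pool overcommitted: back off quickly */
		ra->ra_pages /= 2;
		if (ra->ra_pages < ra->ra_pages_min)
			ra->ra_pages = ra->ra_pages_min;
	} else {
		/* window survived in cache: ramp up gently */
		ra->ra_pages += ra->ra_pages / 4 + 1;
		if (ra->ra_pages > ra->ra_pages_max)
			ra->ra_pages = ra->ra_pages_max;
	}
}

The point being that with enough concurrent streams, the per-stream
windows settle near whatever the cache pool can actually hold, instead
of each stream insisting on its full window.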
Cheers,
Wu