On Fri, 2007-01-19 at 09:20 -0800, Christoph Lameter wrote:
> On Fri, 19 Jan 2007, Peter Zijlstra wrote:
>
> > + /*
> > + * NFS congestion size, scale with available memory.
> > + *
>
> Well, this all depends on the memory available to the running process.
> If the process is only allowed to allocate from a subset of memory
> (cpusets), then this may need to be lower.
>
> > + * 64MB: 8192k
> > + * 128MB: 11585k
> > + * 256MB: 16384k
> > + * 512MB: 23170k
> > + * 1GB: 32768k
> > + * 2GB: 46340k
> > + * 4GB: 65536k
> > + * 8GB: 92681k
> > + * 16GB: 131072k
>
> Hmmm... let's say we have the worst case of an 8TB IA64 system with 1k
> nodes of 8G each.
Eeuh, right. Glad to have you around to remind me how puny my boxen
are :-)
> On IA64 the number of pages is 8TB/16KB pagesize = 512
> million pages. Thus nfs_congestion_size is 724064 pages, which is
> 11.1Gbytes?
>
> If we now restrict a cpuset to a single node then have a
> nfs_congestion_size of 11.1G vs an available memory on a node of 8G.
Right, perhaps cap this to a max of 256M. That would allow 128 2M RPC
transfers; much more would not be needed, I guess. Trond?
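
For reference, here is a quick userspace sketch (my own, not the patch
itself) that reproduces the table quoted above, assuming 4KB pages and an
integer square root in the spirit of the kernel's int_sqrt(), with the
256M cap suggested here applied on top:

#include <stdio.h>

/* naive integer square root, standing in for the kernel's int_sqrt() */
static unsigned long isqrt(unsigned long x)
{
	unsigned long r = 0;

	while ((r + 1) * (r + 1) <= x)
		r++;
	return r;
}

int main(void)
{
	const unsigned long page_kb = 4;		/* assumption: 4KB pages */
	const unsigned long cap_kb = 256 * 1024;	/* the 256M cap suggested above */
	unsigned long mem_mb;

	for (mem_mb = 64; mem_mb <= 16 * 1024; mem_mb *= 2) {
		unsigned long pages = mem_mb * 1024 / page_kb;
		/* scale with sqrt(memory): 16 * sqrt(pages) pages, printed in KB */
		unsigned long congestion_kb = 16 * isqrt(pages) * page_kb;

		if (congestion_kb > cap_kb)
			congestion_kb = cap_kb;

		printf("%6luMB: %luk\n", mem_mb, congestion_kb);
	}
	return 0;
}

Built with plain gcc, this prints 8192k for 64MB up through 131072k for
16GB, matching the table above to within integer-square-root rounding;
anything much larger would simply clamp at 262144k.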