Alan Cox wrote:
> On Fri, 16 Nov 2007 22:53:08 +0100
> Eric Dumazet <[email protected]> wrote:
>> Time has come to change the NR_OPEN value: some production servers are
>> hitting the not-so-'ridiculously high' value of 1024*1024 file
>> descriptors per process.
> Why fiddle with the kernel defaults when every distribution can manage
> this in user space - including picking defaults by memory size or
> platform?
Please note I am not speaking about the standard 1024 file-descriptor
limit, but about the hardcoded 1024*1024 one.

Tell me how user space can overcome this limit? By dynamically patching
the kernel text?
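
For illustration, user space cannot even ask for more: a quick sketch,
assuming the kernel/sys.c of this era, which rejects any RLIMIT_NOFILE
hard limit above NR_OPEN:

	/* Hedged illustration: setrlimit() itself refuses a RLIMIT_NOFILE
	 * hard limit above the hardcoded NR_OPEN (1024*1024), so no
	 * userspace knob or distro default can get past it. */
	#include <stdio.h>
	#include <errno.h>
	#include <string.h>
	#include <sys/resource.h>

	int main(void)
	{
		struct rlimit rl;

		rl.rlim_cur = rl.rlim_max = 2UL * 1024 * 1024; /* > NR_OPEN */
		if (setrlimit(RLIMIT_NOFILE, &rl) != 0)
			printf("setrlimit: %s\n", strerror(errno)); /* expect EPERM */
		return 0;
	}

The relevant kernel code is: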
static struct fdtable * alloc_fdtable(unsigned int nr)
{
	...
	/* Round the array up to a power-of-two number of 1024-byte
	 * chunks of 'struct file *' slots, then clamp to NR_OPEN. */
	nr /= (1024 / sizeof(struct file *));
	nr = roundup_pow_of_two(nr + 1);
	nr *= (1024 / sizeof(struct file *));
	if (nr > NR_OPEN)
		nr = NR_OPEN;
and:
int expand_files(struct files_struct *files, int nr)
{
	struct fdtable *fdt;

	fdt = files_fdtable(files);
	/* Do we need to expand? */
	if (nr < fdt->max_fds)
		return 0;
	/* Can we expand? */
	if (nr >= NR_OPEN)
		return -EMFILE;
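
To see what that sizing arithmetic does in practice, here is a minimal
userspace sketch of it, assuming a 64-bit box where
sizeof(struct file *) == 8 (i.e. 128 slots per 1024 bytes) and the stock
NR_OPEN of 1024*1024; roundup_pow_of_two() is reimplemented here, it is
not the kernel's:

	#include <stdio.h>

	#define NR_OPEN		(1024 * 1024)
	#define SLOTS_PER_KB	(1024 / 8)	/* 1024 / sizeof(struct file *) */

	/* Smallest power of two >= n (userspace stand-in). */
	static unsigned int roundup_pow_of_two(unsigned int n)
	{
		unsigned int r = 1;

		while (r < n)
			r <<= 1;
		return r;
	}

	/* Mirrors the sizing arithmetic quoted from alloc_fdtable(). */
	static unsigned int fdtable_slots(unsigned int nr)
	{
		nr /= SLOTS_PER_KB;
		nr = roundup_pow_of_two(nr + 1);
		nr *= SLOTS_PER_KB;
		if (nr > NR_OPEN)
			nr = NR_OPEN;
		return nr;
	}

	int main(void)
	{
		unsigned int want[] = { 100, 1000, 100000, 1000000 };
		unsigned int i;

		for (i = 0; i < sizeof(want) / sizeof(want[0]); i++)
			printf("fd %7u -> table of %7u slots\n",
			       want[i], fdtable_slots(want[i]));
		return 0;
	}

Asking for fd 1000000 already rounds the table up to exactly 1024*1024
slots; anything larger is clamped, and expand_files() returns -EMFILE
for any nr >= NR_OPEN no matter what the rlimits say.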