Dipankar Sarma wrote:
> On Mon, Oct 17, 2005 at 11:10:04AM +0200, Eric Dumazet wrote:
>> Fixing the 'file count' won't fix the real problem: batch freeing
>> is good, but it should be limited so that *billions* of file structs
>> don't end up queued for deletion.
> Agreed. It is not designed to work that way, so there must be
> a bug somewhere, and I am trying to track it down. It could very well
> be that at maxbatch=10 we are simply queueing callbacks far faster
> than we can process them.
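
To make the queueing-vs-processing mismatch concrete, here is a toy
user-space model (my own illustration, not kernel code; the batch size
matches the maxbatch=10 above, but the enqueue rate is an invented number):

/* Toy model: if each softirq pass drains at most MAXBATCH callbacks
 * but the workload enqueues more than that per pass, the backlog
 * grows without bound. ENQUEUE_RATE is made up for illustration. */
#include <stdio.h>

#define MAXBATCH	10	/* callbacks processed per pass */
#define ENQUEUE_RATE	1000	/* callbacks queued per pass (invented) */

int main(void)
{
	long backlog = 0;
	int pass;

	for (pass = 1; pass <= 5; pass++) {
		backlog += ENQUEUE_RATE;	/* new call_rcu() requests */
		backlog -= MAXBATCH;		/* what one pass can drain */
		printf("pass %d: backlog = %ld\n", pass, backlog);
	}
	return 0;
}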
I can freeze my test machine with a program that 'only' uses dentries, no files.
No message, no panic, but the machine becomes totally unresponsive after a few seconds.
Simply grepping for call_rcu() in the kernel sources turned up another
call_rcu() user reachable from syscalls. And yes, 2.6.13 has the same problem.
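
For reference, the dentry free path is one such user. Paraphrased from
fs/dcache.c of that era (a sketch from memory, not verbatim source; field
and helper names may differ between kernel versions):

/* Paraphrased sketch of the 2.6.x dcache free path (not verbatim). */
static void d_callback(struct rcu_head *head)
{
	struct dentry *dentry = container_of(head, struct dentry, d_rcu);

	kmem_cache_free(dentry_cache, dentry);	/* the real free happens here */
}

static void d_free(struct dentry *dentry)
{
	if (dentry->d_op && dentry->d_op->d_release)
		dentry->d_op->d_release(dentry);
	/* freeing is deferred: queued on the per-CPU RCU callback list */
	call_rcu(&dentry->d_rcu, d_callback);
}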
Here is the killer on my HT Xeon machine (2GB RAM):
Eric
#include <unistd.h>
#include <sys/types.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>	/* for memset() */
#include <errno.h>
#include <sys/stat.h>

int main(void)
{
	int i, rc;
	struct stat st;
	char name[1024];

	/* 220 'a's followed by a counter: a unique, non-existent
	 * name on every iteration, so every lookup misses. */
	memset(name, 'a', sizeof(name));
	for (i = 0; i < 1000000000; i++) {
		sprintf(name + 220, "%d", i);
		rc = stat(name, &st);
		if (rc == -1 && errno != ENOENT) {
			perror(name);
		}
	}
	return 0;
}
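
To reproduce, something like this should do (killer.c is just my label for
the program above):

	gcc -O2 -o killer killer.c
	./killer

Each stat() looks up a unique, non-existent ~230-character name, so every
iteration presumably leaves a fresh negative dentry in the dcache, which is
how this exercises the dentry paths without touching any file structs.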