On Mon, Oct 17, 2005 at 02:10:09PM +0200, Eric Dumazet wrote:
> Dipankar Sarma wrote:
> >On Mon, Oct 17, 2005 at 11:10:04AM +0200, Eric Dumazet wrote:
> >
> >Agreed. It is not designed to work that way, so there must be
> >a bug somewhere and I am trying to track it down. It could very well
> >be that at maxbatch=10 we are just queueing at a rate far too high
> >compared to processing.
> >
>
> I can freeze my test machine with a program that 'only' uses dentries, no
> files.
>
> No message, no panic, but the machine becomes totally unresponsive after a
> few seconds.
>
> Just grepping for call_rcu in the kernel sources showed me another
> call_rcu() use reachable from syscalls. And yes, 2.6.13 has the same problem.
Can you try it with rcupdate.maxbatch set to 10000 on the boot
command line?
FWIW, the open/close test problem goes away if I set maxbatch to
10000. I introduced this limit some time ago to curtail the effect
long-running softirq handlers have on scheduling latencies; that
now conflicts with OOM-avoidance requirements.
Thanks
Dipankar
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [email protected]
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/