Re: Kernel panic: Route cache, RCU, possibly FIB trie.

On Tue, 21 Mar 2006, Robert Olsson wrote:

Jesper Dangaard Brouer writes:

> I have tried to track down the problem, and I think I have narrowed it
> a bit down.  My theory is that it is related to the route cache
> (ip_dst_cache) or FIB, which cannot deallocate route cache slab
> elements (maybe RCU related).  (I have seen my route cache increase to
> around 520k entries using rtstat, before dying).
>
> I'm using the FIB trie system/algorithm (CONFIG_IP_FIB_TRIE). Think
> that the error might be caused by the "fib_trie" code.  See the syslog
> output below.

> Syslog#1 (indicating a problem with the fib trie)
> --------
> Mar 20 18:00:04 hostname kernel: Debug: sleeping function called from invalid context at mm/slab.c:2472
> Mar 20 18:00:04 hostname kernel: in_atomic():1, irqs_disabled():0
> Mar 20 18:00:04 hostname kernel:  [<c0103d9f>] dump_stack+0x1e/0x22
> Mar 20 18:00:04 hostname kernel:  [<c011cbe1>] __might_sleep+0xa6/0xae
> Mar 20 18:00:04 hostname kernel:  [<c014f3e9>] __kmalloc+0xd9/0xf3
> Mar 20 18:00:04 hostname kernel:  [<c014f5a4>] kzalloc+0x23/0x50
> Mar 20 18:00:04 hostname kernel:  [<c030ecd1>] tnode_alloc+0x3c/0x82
> Mar 20 18:00:04 hostname kernel:  [<c030edf6>] tnode_new+0x26/0x91
> Mar 20 18:00:04 hostname kernel:  [<c030f757>] halve+0x43/0x31d
> Mar 20 18:00:04 hostname kernel:  [<c030f090>] resize+0x118/0x27e
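
That warning comes from kzalloc() being reached via tnode_alloc() while
in_atomic() is set.  For illustration only, here is a minimal sketch of the
general pattern behind such a splat (this is not the actual fib_trie code;
the lock and function names are made up):

/*
 * kzalloc(..., GFP_KERNEL) may sleep, so calling it while a spinlock is
 * held (in_atomic() == 1) trips __might_sleep() in mm/slab.c.
 */
#include <linux/slab.h>
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(demo_lock);

static void *alloc_under_lock_buggy(size_t size)
{
	void *p;

	spin_lock_bh(&demo_lock);
	p = kzalloc(size, GFP_KERNEL);	/* BUG: may sleep under the lock */
	spin_unlock_bh(&demo_lock);
	return p;
}

static void *alloc_under_lock_fixed(size_t size)
{
	void *p;

	spin_lock_bh(&demo_lock);
	p = kzalloc(size, GFP_ATOMIC);	/* never sleeps; may fail instead */
	spin_unlock_bh(&demo_lock);
	return p;
}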

> Hello!

> Out of memory?

One of the crashes was caused by running out of memory, but all the memory was allocated through slab, more specifically to ip_dst_cache.

> Running BGP with full routing?
No, running OSPF with around 760 subnets.

> And a large number of flows.

Yes, a very large number of flows.

> What's your normal number of entries in the route cache?

On this machine, right now, between 14000 and 60000 entries in the route cache. On other machines I currently see a max of 151560 entries.
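
(I mentioned rtstat earlier; for reference, a rough userspace sketch of the
same read it does, assuming the 2.6-era /proc/net/stat/rt_cache layout where
the first column of each data line is the current entry count in hex:)

#include <stdio.h>

int main(void)
{
	FILE *f = fopen("/proc/net/stat/rt_cache", "r");
	char header[1024];
	unsigned int entries;

	if (!f || !fgets(header, sizeof(header), f)) {	/* skip header line */
		perror("rt_cache");
		return 1;
	}
	if (fscanf(f, "%x", &entries) == 1)
		printf("route cache entries: %u\n", entries);
	fclose(f);
	return 0;
}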

> And how much memory do you have?

On this machine 1 GB of memory (and the same on 4 others); most of the machines have 2 GB.


> From your report the problems seem to be related to flushing, either rt_cache_flush
> or fib_flush (before there was dev_close()?), so all associated entries should be
> freed. All the entries are freed via RCU, which due to the deferred delete
> can give a very high transient memory pressure. If we believe it's a memory problem
> we can try something out...
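
(As a simplified sketch of that deferred delete, not the actual ipv4/route.c
code: on a flush an entry is unlinked from the hash chain right away, but the
slab object only goes back to ip_dst_cache after an RCU grace period, so
flushing a few hundred thousand entries keeps all of them allocated until the
RCU callbacks get to run.)

#include <linux/kernel.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>

struct demo_rt_entry {
	struct demo_rt_entry	*next;		/* hash chain, RCU protected */
	struct rcu_head		rcu;
};

extern struct kmem_cache *demo_dst_cachep;	/* stands in for ip_dst_cache */

static void demo_rt_rcu_free(struct rcu_head *head)
{
	struct demo_rt_entry *rt = container_of(head, struct demo_rt_entry, rcu);

	/* runs after the grace period: only now is the memory returned */
	kmem_cache_free(demo_dst_cachep, rt);
}

static void demo_rt_free(struct demo_rt_entry *rt)
{
	/* caller has already unlinked rt from the hash chain */
	call_rcu(&rt->rcu, demo_rt_rcu_free);
}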

There is definitely high memory pressure on this machine!
Slab memory usage ranges from 39 MB to 205 MB (at the moment, on the production servers).
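
(For completeness, a small sketch of pulling just the ip_dst_cache line out
of /proc/slabinfo, assuming the "slabinfo - version: 2.x" layout where the
first three fields after the name are active objects, total objects and
object size:)

#include <stdio.h>
#include <string.h>

int main(void)
{
	FILE *f = fopen("/proc/slabinfo", "r");
	char line[512];

	if (!f) {
		perror("/proc/slabinfo");
		return 1;
	}
	while (fgets(line, sizeof(line), f)) {
		unsigned long active, total, objsize;

		if (strncmp(line, "ip_dst_cache ", 13))
			continue;
		if (sscanf(line, "ip_dst_cache %lu %lu %lu",
			   &active, &total, &objsize) == 3)
			/* rough estimate: ignores per-slab overhead */
			printf("ip_dst_cache: %lu/%lu objects, ~%lu kB\n",
			       active, total, total * objsize / 1024);
		break;
	}
	fclose(f);
	return 0;
}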

Regards,
  Jesper Brouer

--
-------------------------------------------------------------------
Cand.scient. datalog (M.Sc. in Computer Science)
Dept. of Computer Science, University of Copenhagen
-------------------------------------------------------------------
