Andrew Morton wrote:
> Eric Dumazet <[email protected]> wrote:
>> Hi Andrew,
>>
>> It seems the latest kernels have a problem in kmem_cache_destroy().
>
> Mainline, or just -mm?
Both mainline and -mm, it seems, but only with NUMA enabled, and only on
2.6.17 (2.6.16.XX is OK).
I added some printks, and it seems slabs_partial is not empty in
__node_shrink() for node 0.
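
For reference, here is a minimal test-module sketch that should exercise the
problem (illustrative only, not part of my debugging: the cache name, object
size, and count are arbitrary, and it assumes the 2.6.17-era slab API, where
kmem_cache_create() takes ctor/dtor arguments and kmem_cache_destroy() still
returns an int):

/*
 * Illustrative repro sketch only.  Cycle objects through a cache so the
 * per-cpu/per-node arrays fill up, then destroy the cache.  On an
 * affected NUMA kernel, kmem_cache_destroy() should fail because
 * __cache_shrink() still sees slabs on the partial list.
 */
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/slab.h>

#define NOBJS 128

static int __init shrink_test_init(void)
{
	struct kmem_cache *cp;
	void *objs[NOBJS];
	int i;

	cp = kmem_cache_create("shrink_test", 256, 0, 0, NULL, NULL);
	if (!cp)
		return -ENOMEM;

	for (i = 0; i < NOBJS; i++)
		objs[i] = kmem_cache_alloc(cp, GFP_KERNEL);
	for (i = 0; i < NOBJS; i++)
		if (objs[i])
			kmem_cache_free(cp, objs[i]);

	/*
	 * Every object was freed, so this must succeed.  On an affected
	 * kernel it fails, leaving the cache (and its memory) behind.
	 */
	if (kmem_cache_destroy(cp))
		printk(KERN_ERR "shrink_test: kmem_cache_destroy() failed\n");

	return -EAGAIN;	/* one-shot test; refuse to stay loaded */
}

module_init(shrink_test_init);
MODULE_LICENSE("GPL");
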
I am not a mm/slab.c expert, but the following patch cures the problem for
me:
[PATCH] slab: NUMA may need __cache_shrink() to make two passes.
Signed-off-by: Eric Dumazet <[email protected]>
--- linux-2.6.17-rc4/mm/slab.c 2006-05-15 14:58:09.000000000 +0200
+++ linux-2.6.17-rc4-ed/mm/slab.c 2006-05-15 15:23:07.000000000 +0200
@@ -2225,25 +2225,32 @@
 		spin_lock_irq(&l3->list_lock);
 	}
 	ret = !list_empty(&l3->slabs_full) || !list_empty(&l3->slabs_partial);
+	if (ret)
+		printk(KERN_ERR "__node_shrink(name=%s,node=%d) ret=%d\n",
+			cachep->name, node, ret);
 	return ret;
 }
 
 static int __cache_shrink(struct kmem_cache *cachep)
 {
-	int ret = 0, i = 0;
+	int ret, i;
 	struct kmem_list3 *l3;
+	int loopcount = 0;
 
-	drain_cpu_caches(cachep);
+	do {
+		ret = 0;
+		drain_cpu_caches(cachep);
 
-	check_irq_on();
-	for_each_online_node(i) {
-		l3 = cachep->nodelists[i];
-		if (l3) {
-			spin_lock_irq(&l3->list_lock);
-			ret += __node_shrink(cachep, i);
-			spin_unlock_irq(&l3->list_lock);
+		check_irq_on();
+		for_each_online_node(i) {
+			l3 = cachep->nodelists[i];
+			if (l3) {
+				spin_lock_irq(&l3->list_lock);
+				ret += __node_shrink(cachep, i);
+				spin_unlock_irq(&l3->list_lock);
+			}
 		}
-	}
+	} while (ret && ++loopcount < 2);
 	return (ret ? 1 : 0);
 }
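
Until this is sorted out, callers can at least detect the failure, since the
2.6.17-era kmem_cache_destroy() still returns non-zero when the cache could
not be released. A small illustrative wrapper (my own sketch, not an existing
kernel API):

/*
 * Illustration only, not existing kernel API: log when a cache could
 * not be destroyed.  Assumes the 2.6.17-era int-returning
 * kmem_cache_destroy().
 */
#include <linux/kernel.h>
#include <linux/slab.h>

static void destroy_cache_checked(struct kmem_cache *cp, const char *name)
{
	/* Non-zero means __cache_shrink() found live slabs; the cache
	 * stays registered and its memory is not released. */
	if (kmem_cache_destroy(cp))
		printk(KERN_WARNING "%s: kmem_cache_destroy failed\n", name);
}
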