On Fri, 5 Oct 2007, Jens Axboe wrote:
> It might, or it might not. The point is to isolate the problem and
> build a simple test case that can be used to reproduce it, so that
> Christoph (or someone else) can fix it easily.
In case someone wants to hack on it: here is what I have so far for
batching the frees. I will try to come up with a test case next week if
nothing else happens before then:
Patch 1/2 on top of mm:
SLUB: Keep a counter of remaining objects on the per-cpu list

Add a counter that keeps track of how many objects are left on the
per-cpu freelist. The fast paths can then test the counter instead of
the freelist pointer.
Signed-off-by: Christoph Lameter <[email protected]>
---
include/linux/slub_def.h | 1 +
mm/slub.c | 8 ++++++--
2 files changed, 7 insertions(+), 2 deletions(-)
Index: linux-2.6.23-rc8-mm2/include/linux/slub_def.h
===================================================================
--- linux-2.6.23-rc8-mm2.orig/include/linux/slub_def.h 2007-10-04 22:41:58.000000000 -0700
+++ linux-2.6.23-rc8-mm2/include/linux/slub_def.h 2007-10-04 22:42:08.000000000 -0700
@@ -15,6 +15,7 @@ struct kmem_cache_cpu {
 	void **freelist;
 	struct page *page;
 	int node;
+	int remaining;
 	unsigned int offset;
 	unsigned int objsize;
 };
Index: linux-2.6.23-rc8-mm2/mm/slub.c
===================================================================
--- linux-2.6.23-rc8-mm2.orig/mm/slub.c 2007-10-04 22:41:58.000000000 -0700
+++ linux-2.6.23-rc8-mm2/mm/slub.c 2007-10-04 22:42:08.000000000 -0700
@@ -1386,12 +1386,13 @@ static void deactivate_slab(struct kmem_
 	 * because both freelists are empty. So this is unlikely
 	 * to occur.
 	 */
-	while (unlikely(c->freelist)) {
+	while (unlikely(c->remaining)) {
 		void **object;
 
 		/* Retrieve object from cpu_freelist */
 		object = c->freelist;
 		c->freelist = c->freelist[c->offset];
+		c->remaining--;
 
 		/* And put onto the regular freelist */
 		object[c->offset] = page->freelist;
@@ -1491,6 +1492,7 @@ load_freelist:
 
 	object = c->page->freelist;
 	c->freelist = object[c->offset];
+	c->remaining = s->objects - c->page->inuse - 1;
 	c->page->inuse = s->objects;
 	c->page->freelist = NULL;
 	c->node = page_to_nid(c->page);
@@ -1574,13 +1576,14 @@ static void __always_inline *slab_alloc(
 
 	local_irq_save(flags);
 	c = get_cpu_slab(s, smp_processor_id());
-	if (unlikely(!c->freelist || !node_match(c, node)))
+	if (unlikely(!c->remaining || !node_match(c, node)))
 
 		object = __slab_alloc(s, gfpflags, node, addr, c);
 
 	else {
 		object = c->freelist;
 		c->freelist = object[c->offset];
+		c->remaining--;
 	}
 	local_irq_restore(flags);
 	return object;
@@ -1686,6 +1689,7 @@ static void __always_inline slab_free(st
 	if (likely(page == c->page && c->node >= 0)) {
 		object[c->offset] = c->freelist;
 		c->freelist = object;
+		c->remaining++;
 	} else
 		__slab_free(s, page, x, addr, c->offset);
 
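To make the invariant explicit: remaining must always equal the number
of objects on the cpu freelist, so the fast paths can test the counter
instead of dereferencing the pointer. A simplified userspace model of
that bookkeeping (offset 0 instead of c->offset, no irq disabling,
names made up for illustration) would look like:

#include <assert.h>
#include <stdlib.h>

struct cpu_cache {
	void **freelist;	/* list threaded through the objects themselves */
	int remaining;		/* length of that list */
};

static void push(struct cpu_cache *c, void **object)
{
	*object = c->freelist;	/* first word of object points at old head */
	c->freelist = object;
	c->remaining++;
}

static void *pop(struct cpu_cache *c)
{
	void **object;

	if (!c->remaining)	/* the counter, not the pointer, gates the path */
		return NULL;
	object = c->freelist;
	c->freelist = *object;	/* advance head to the next object */
	c->remaining--;
	return object;
}

int main(void)
{
	struct cpu_cache c = { NULL, 0 };
	void *a = malloc(sizeof(void *));
	void *b = malloc(sizeof(void *));

	push(&c, a);
	push(&c, b);
	assert(c.remaining == 2);
	assert(pop(&c) == b);	/* LIFO, like the cpu freelist */
	assert(pop(&c) == a);
	assert(c.remaining == 0 && c.freelist == NULL);
	free(a);
	free(b);
	return 0;
}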