Re: kfree(0) - ok?

On Wed, 15 Aug 2007 05:12:41 +0530 (IST)
Satyam Sharma <[email protected]> wrote:

> [PATCH] {slub, slob}: use unlikely() for kfree(ZERO_OR_NULL_PTR) check
> 
> kfree(NULL) would normally occur only in error paths, and
> kfree(ZERO_SIZE_PTR) is uncommon as well, so let's use unlikely() for
> the condition check in SLUB's and SLOB's kfree() to optimize for the
> common case. SLAB has this already.

I went through my current versions of slab/slub/slob and came up with this:

diff -puN mm/slob.c~slub-slob-use-unlikely-for-kfreezero_or_null_ptr-check mm/slob.c
--- a/mm/slob.c~slub-slob-use-unlikely-for-kfreezero_or_null_ptr-check
+++ a/mm/slob.c
@@ -360,7 +360,7 @@ static void slob_free(void *block, int s
 	slobidx_t units;
 	unsigned long flags;
 
-	if (ZERO_OR_NULL_PTR(block))
+	if (unlikely(ZERO_OR_NULL_PTR(block)))
 		return;
 	BUG_ON(!size);
 
@@ -466,7 +466,7 @@ void kfree(const void *block)
 {
 	struct slob_page *sp;
 
-	if (ZERO_OR_NULL_PTR(block))
+	if (unlikely(ZERO_OR_NULL_PTR(block)))
 		return;
 
 	sp = (struct slob_page *)virt_to_page(block);
@@ -484,7 +484,7 @@ size_t ksize(const void *block)
 {
 	struct slob_page *sp;
 
-	if (ZERO_OR_NULL_PTR(block))
+	if (unlikely(ZERO_OR_NULL_PTR(block)))
 		return 0;
 
 	sp = (struct slob_page *)virt_to_page(block);
diff -puN mm/slub.c~slub-slob-use-unlikely-for-kfreezero_or_null_ptr-check mm/slub.c
--- a/mm/slub.c~slub-slob-use-unlikely-for-kfreezero_or_null_ptr-check
+++ a/mm/slub.c
@@ -2434,7 +2434,7 @@ size_t ksize(const void *object)
 	struct page *page;
 	struct kmem_cache *s;
 
-	if (ZERO_OR_NULL_PTR(object))
+	if (unlikely(ZERO_OR_NULL_PTR(object)))
 		return 0;
 
 	page = get_object_page(object);
@@ -2468,7 +2468,7 @@ void kfree(const void *x)
 {
 	struct page *page;
 
-	if (ZERO_OR_NULL_PTR(x))
+	if (unlikely(ZERO_OR_NULL_PTR(x)))
 		return;
 
 	page = virt_to_head_page(x);
@@ -2785,7 +2785,7 @@ void *__kmalloc_track_caller(size_t size
 							get_order(size));
 	s = get_slab(size, gfpflags);
 
-	if (ZERO_OR_NULL_PTR(s))
+	if (unlikely(ZERO_OR_NULL_PTR(s)))
 		return s;
 
 	return slab_alloc(s, gfpflags, -1, caller);
@@ -2801,7 +2801,7 @@ void *__kmalloc_node_track_caller(size_t
 							get_order(size));
 	s = get_slab(size, gfpflags);
 
-	if (ZERO_OR_NULL_PTR(s))
+	if (unlikely(ZERO_OR_NULL_PTR(s)))
 		return s;
 
 	return slab_alloc(s, gfpflags, node, caller);
_

Which is getting pretty idiotic:

akpm:/usr/src/25> grep ZERO_OR_NULL_PTR */*.c
mm/slab.c:              BUG_ON(ZERO_OR_NULL_PTR(cachep->slabp_cache));
mm/slab.c:      if (unlikely(ZERO_OR_NULL_PTR(cachep)))
mm/slab.c:      if (unlikely(ZERO_OR_NULL_PTR(cachep)))
mm/slab.c:      if (unlikely(ZERO_OR_NULL_PTR(objp)))
mm/slab.c:      if (unlikely(ZERO_OR_NULL_PTR(objp)))
mm/slob.c:      if (unlikely(ZERO_OR_NULL_PTR(block)))
mm/slob.c:      if (unlikely(ZERO_OR_NULL_PTR(block)))
mm/slob.c:      if (unlikely(ZERO_OR_NULL_PTR(block)))
mm/slub.c:      if (unlikely(ZERO_OR_NULL_PTR(s)))
mm/slub.c:      if (unlikely(ZERO_OR_NULL_PTR(s)))
mm/slub.c:      if (unlikely(ZERO_OR_NULL_PTR(object)))
mm/slub.c:      if (unlikely(ZERO_OR_NULL_PTR(x)))
mm/slub.c:      if (unlikely(ZERO_OR_NULL_PTR(s)))
mm/slub.c:      if (unlikely(ZERO_OR_NULL_PTR(s)))

Are we seeing a pattern here?  We could stick the unlikely() inside
ZERO_OR_NULL_PTR() itself.  That's a little bit sleazy though - there might
be future callsites at which it is likely, who knows?
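
For the record, the fold-it-into-the-macro variant would look something
like this (just a sketch against the current ZERO_OR_NULL_PTR definition
in include/linux/slab.h, where ZERO_SIZE_PTR is ((void *)16)):

	/* hypothetical: hint folded into the predicate itself */
	#define ZERO_OR_NULL_PTR(x)	\
		(unlikely((unsigned long)(x) <= (unsigned long)ZERO_SIZE_PTR))

Every caller then inherits the expect-false hint whether it wants it or not.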

I guess we can stick with the idiotic patch ;)