On Thu, Nov 29, 2007 at 03:39:20PM +0100, Ingo Molnar wrote:
>
> * Ingo Molnar <[email protected]> wrote:
>
> > i'm getting this on 32-bit (with the kmap-atomic debugging patch
> > applied):
> >
> > ---------------->
> > Calling initcall 0x78b67c00: tipc_init+0x0/0xc0()
> > TIPC: Activated (version 1.6.2 compiled Nov 29 2007 15:04:36)
> > WARNING: at arch/x86/mm/highmem_32.c:52 kmap_atomic_prot()
> > Pid: 1, comm: swapper Not tainted 2.6.24-rc3-cfs-v24 #45
> > [<78107272>] show_trace_log_lvl+0x12/0x40
> > [<781072ad>] show_trace+0xd/0x20
> > [<781086f8>] dump_stack+0x58/0x60
> > [<7811541f>] kmap_atomic_prot+0x1bf/0x240
> > [<781154ae>] kmap_atomic+0xe/0x20
> > [<78157be5>] get_page_from_freelist+0x225/0x420
> > [<78157e4d>] __alloc_pages+0x6d/0x3a0
> > [<78169a7b>] slob_new_page+0x1b/0x60
> > [<78169be4>] slob_alloc+0x124/0x1e0
> > [<78169e0f>] __kmalloc_node+0x6f/0xa0
> > [<7884f7c2>] reg_init+0x42/0x80
> > [<7884f80a>] tipc_reg_start+0xa/0x40
> > [<7883e6c6>] tipc_core_start+0x66/0xc0
> > [<78b67c81>] tipc_init+0x81/0xc0
>
> this is due to the kzalloc() here:
>
> 0x7884f1d0 is in reg_init (net/tipc/user_reg.c:88).
> 83 	spin_lock_bh(&reg_lock);
> 84 	if (!users) {
> 85 		users = kzalloc(USER_LIST_SIZE, GFP_ATOMIC);
> 86 		if (users) {
> 87 			for (i = 1; i <= MAX_USERID; i++) {
> 88 				users[i].next = i - 1;
>
> which does a:
>
> 120 static inline void clear_highpage(struct page *page)
> 121 {
> 122 	void *kaddr = kmap_atomic(page, KM_USER0);
> 123 	clear_page(kaddr);
> 124 	kunmap_atomic(kaddr, KM_USER0);
> 125 }
>
> but ... why does the debug code think it's in softirq context?
>
> plus, and this is a slob question i guess, how come we drop into
> clear_highpage() for a kzalloc()??
Good question. It looks like kzalloc() switched from doing the memset itself to
passing __GFP_ZERO down to kmalloc(). Slob didn't get completely updated to
reflect this, so it blindly propagates the flag on to __alloc_pages() and does
a harmless double clear.
Someone should remind us what the point of moving the kzalloc memset
down into the allocators was. We now have all three allocators doing:
	if (unlikely((flags & __GFP_ZERO) && ptr))
		memset(ptr, 0, obj_size(cachep));
and needing to mask flags before passing them to page allocators,
which hardly seems better than kzalloc unconditionally doing the
memset. Wouldn't it be better/faster/smaller to make kzalloc a
non-inline?
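Something like this (completely untested, just to sketch the uninlined version
I mean) would keep the clear in one place and let the allocators forget about
__GFP_ZERO entirely:

	/* out-of-line kzalloc: strip the flag and do the clear here */
	void *kzalloc(size_t size, gfp_t flags)
	{
		void *ret = kmalloc(size, flags & ~__GFP_ZERO);

		if (ret)
			memset(ret, 0, size);
		return ret;
	}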
Slob also has a nice second path for large kmallocs where it just calls the
page allocator directly, and that path needs the same treatment. Passing
__GFP_ZERO straight through happens to do the right thing on non-highmem
systems, but can hit this bug otherwise.
Below is a totally untested patch. Alternatively, we could simply tweak
clear_highpage() to remove the limitation, but that would leave slob doing an
extra clear.
diff -r c60016ba6237 mm/slob.c
--- a/mm/slob.c Tue Nov 13 09:09:36 2007 -0800
+++ b/mm/slob.c Thu Nov 29 15:15:54 2007 -0600
@@ -223,6 +231,7 @@ static void *slob_new_page(gfp_t gfp, in
 {
 	void *page;
 
+	gfp &= ~__GFP_ZERO;
 #ifdef CONFIG_NUMA
 	if (node != -1)
 		page = alloc_pages_node(node, gfp, order);
@@ -457,6 +470,8 @@ void *__kmalloc_node(size_t size, gfp_t
 			page = virt_to_page(ret);
 			page->private = size;
 		}
+		if (unlikely((gfp & __GFP_ZERO) && ret))
+			memset(ret, 0, size);
 		return ret;
 	}
 }
--
Mathematics is the supreme nostalgia of our time.