lockdep: how to tell it multiple pte locks are OK?

I'm writing some code that does batch processing on pte pages, and so
wants to hold multiple pte locks at once.  That seems OK in itself, but
lockdep gives me this warning:

=============================================
[ INFO: possible recursive locking detected ]
2.6.23-rc9-paravirt #1673
---------------------------------------------
init/1 is trying to acquire lock:
 (__pte_lockptr(new)){--..}, at: [<c0102d85>] lock_pte+0x10/0x15

but task is already holding lock:
 (__pte_lockptr(new)){--..}, at: [<c0102d85>] lock_pte+0x10/0x15

other info that might help us debug this:
4 locks held by init/1:
 #0:  (&mm->mmap_sem){----}, at: [<c012999e>] copy_process+0xab4/0x12bf
 #1:  (&mm->mmap_sem/1){--..}, at: [<c01299ae>] copy_process+0xac4/0x12bf
 #2:  (&mm->page_table_lock){--..}, at: [<c010334a>] xen_dup_mmap+0x11/0x24
 #3:  (__pte_lockptr(new)){--..}, at: [<c0102d85>] lock_pte+0x10/0x15

stack backtrace:
 [<c0109282>] show_trace_log_lvl+0x1a/0x2f
 [<c0109d18>] show_trace+0x12/0x14
 [<c0109d30>] dump_stack+0x16/0x18
 [<c0147bd0>] __lock_acquire+0x195/0xc5f
 [<c0148722>] lock_acquire+0x88/0xac
 [<c035c2a3>] _spin_lock+0x35/0x42
 [<c0102d85>] lock_pte+0x10/0x15
 [<c010347d>] pin_page+0x67/0x17e
 [<c0102d23>] pgd_walk+0x168/0x1ba
 [<c0103283>] xen_pgd_pin+0x42/0xf8
 [<c0103352>] xen_dup_mmap+0x19/0x24
 [<c0129b63>] copy_process+0xc79/0x12bf
 [<c012a419>] do_fork+0x99/0x1bf
 [<c0106216>] sys_clone+0x33/0x39
 [<c010814e>] syscall_call+0x7/0xb
 =======================
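
Boiled down, I believe the pattern lockdep is complaining about is just
this -- two locks of the same class taken with plain spin_lock()
(page1/page2 here just stand in for whatever pte pages the walk visits):

    /* both locks belong to the single __pte_lockptr() class */
    spin_lock(__pte_lockptr(page1));    /* first acquire: fine */
    spin_lock(__pte_lockptr(page2));    /* "possible recursive locking" */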


I presume this is because I'm holding multiple pte locks (class
"__pte_lockptr(new)").  Is there some way I can tell lockdep this is OK?

I presume I'm the first person to try holding multiple pte locks at
once, so there's no existing locking order for these locks.  I'm always
traversing and locking the pagetable in ascending virtual address order
(sketched below), and that seems like a sane-enough rule for anyone
else who wants to hold multiple pte locks.
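
In other words, the rule amounts to something like this
(lock_mm_pte_range() is a made-up illustration, not the real pinning
code, and it assumes every pmd in the range is populated):

    /*
     * Walk in ascending virtual address order, taking each pte lock
     * as we go, so any two pte locks are always acquired low-to-high.
     * Unlocking (in reverse order) omitted.
     */
    static void lock_mm_pte_range(struct mm_struct *mm,
                                  unsigned long start, unsigned long end)
    {
            unsigned long addr, next;

            for (addr = start; addr < end; addr = next) {
                    pgd_t *pgd = pgd_offset(mm, addr);
                    pud_t *pud = pud_offset(pgd, addr);
                    pmd_t *pmd = pmd_offset(pud, addr);

                    next = pmd_addr_end(addr, end);
                    spin_lock(pte_lockptr(mm, pmd));
            }
    }

Since the walk only ever goes from low addresses to high, there's no
real deadlock here; lockdep just can't see that from the single lock
class.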

Thanks,
    J
