On 11/01/2007 10:03 AM, Nick Piggin wrote:
[edited to show the resulting code]
> + __asm__ __volatile__ (
> + LOCK_PREFIX "xaddw %w0, %1\n"
> + "1:\t"
> + "cmpb %h0, %b0\n\t"
> + "je 2f\n\t"
> + "rep ; nop\n\t"
> + "movb %1, %b0\n\t"
> + /* don't need lfence here, because loads are in-order */
> + "jmp 1b\n"
> + "2:"
> + :"+Q" (inc), "+m" (lock->slock)
> + :
> + :"memory", "cc");
> }
If you really thought you might get long queues, you could compute how far
back in the queue you are and use that distance to decide how long to wait
before testing the lock again. That cmpb could become a subb without adding
overhead to the fast path -- the subtraction result would give you the queue
length (or its complement, anyway).