Re: [patch 1/4] x86: FIFO ticket spinlocks

On Thu, Nov 01, 2007 at 04:01:45PM -0400, Chuck Ebbert wrote:
> On 11/01/2007 10:03 AM, Nick Piggin wrote:
> 
> [edited to show the resulting code]
> 
> > +	__asm__ __volatile__ (
> > +		LOCK_PREFIX "xaddw %w0, %1\n"
> > +		"1:\t"
> > +		"cmpb %h0, %b0\n\t"
> > +		"je 2f\n\t"
> > +		"rep ; nop\n\t"
> > +		"movb %1, %b0\n\t"
> > +		/* don't need lfence here, because loads are in-order */
> >  		"jmp 1b\n"
> > +		"2:"
> > +		:"+Q" (inc), "+m" (lock->slock)
> > +		:
> > +		:"memory", "cc");
> >  }
> 
> If you really thought you might get long queues, you could figure out
> how far back you are and use that to determine how long to wait before
> testing the lock again. That cmpb could become a subb without adding
> overhead to the fast path -- that would give you the queue length (or
> its complement anyway.)
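
Concretely, that arithmetic might look something like the sketch below
(not part of the patch; the helper name is invented here).  The posted
asm keeps the "now serving" count in the low byte of slock and hands out
tickets from the high byte, so after the xaddw the high byte of the old
value is our ticket:

static inline unsigned int ticket_queue_distance(unsigned char my_ticket,
						 unsigned char now_serving)
{
	/*
	 * Modulo-256 subtraction handles wraparound of the byte counters:
	 * 0 means the lock is ours, N means N waiters are queued ahead.
	 */
	return (unsigned char)(my_ticket - now_serving);
}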

Indeed. You can use this as a really nice input into a backoff
algorithm (e.g. if you're next in line, don't back off, or at least
don't go into exponential backoff; if you've got people in front
of you, start throttling harder).
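
Purely as an illustration of that policy (ticket_backoff, SPIN_UNIT and
the constant are made up for this sketch, nothing like it is in the
patch): spin-wait in proportion to how many waiters are ahead, so the
next-in-line CPU keeps polling tightly while CPUs deeper in the queue
stay off the contended cache line for longer.

#include <asm/processor.h>	/* cpu_relax(): rep; nop hint */

#define SPIN_UNIT	64	/* arbitrary pause iterations per waiter ahead */

static inline void ticket_backoff(unsigned int queued_ahead)
{
	unsigned int i;

	/* queued_ahead == 0: we are next in line, don't delay at all */
	for (i = 0; i < queued_ahead * SPIN_UNIT; i++)
		cpu_relax();
}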

I think I'll leave that to SGI if they come up with a big x86 SSI ;)
