Re: queued spinlock code and results

Nick Piggin <[email protected]> writes:

> I made some tests of the queued spinlock code using userspace test code on
> 64-bit processors. I believe the xadd based code no longer has any theoretical
> memory ordering problems.

Linus, the background here is that on 8-socket Opteron systems the
current spinlocks can become very unfair, to the point of severe
starvation. These boxes are becoming more common.
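For reference, here is a minimal userspace sketch of the xadd-based
(ticket) scheme under discussion -- my own illustration rather than
Nick's actual code, assuming x86-64 and GCC atomic builtins:

#include <stdint.h>

typedef struct {
	volatile uint32_t head;		/* ticket currently being served */
	volatile uint32_t tail;		/* next ticket to hand out */
} ticket_lock_t;

static inline void ticket_lock(ticket_lock_t *lock)
{
	/* atomic fetch-and-add; compiles to lock xadd on x86 */
	uint32_t ticket = __sync_fetch_and_add(&lock->tail, 1);

	while (lock->head != ticket)
		__asm__ __volatile__("pause" ::: "memory");	/* spin */
}

static inline void ticket_unlock(ticket_lock_t *lock)
{
	__sync_synchronize();		/* order the critical section */
	lock->head++;			/* hand over to the next ticket */
}

Because tickets are handed out in xadd order, waiters are served
strictly FIFO, which is what removes the starvation described above.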

> The threaded results also attempt to report an unfairness count: the
> maximum number of times in a row that a lock is acquired while all
> other threads are also executing in the loop -- the reason xadd, for
> example, is not always 0 there is that the other threads may not have
> reached the lock before the current thread was able to take it several
> times (e.g. this can happen if an interrupt comes in).

Interesting. I was also thinking about switching the lock types
at boot time. Since all the lock calls are out of line, this would
be reasonably easy.
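Something along these lines (all names below are invented, and the
lock bodies are kept trivially small just to show the dispatch; in the
kernel the callers would presumably stay the existing out-of-line
spinlock entry points):

#include <stdint.h>

typedef struct {
	volatile uint32_t head, tail;	/* big enough for either flavour */
} raw_lock_t;

struct lock_ops {
	void (*lock)(raw_lock_t *);
	void (*unlock)(raw_lock_t *);
};

/* Classic test-and-set lock: small and fast, but unfair under load. */
static void tas_lock(raw_lock_t *l)
{
	while (__sync_lock_test_and_set(&l->head, 1))
		;
}
static void tas_unlock(raw_lock_t *l)
{
	__sync_lock_release(&l->head);
}

/* Queued (ticket) lock: FIFO via xadd, as in the sketch above. */
static void queued_lock(raw_lock_t *l)
{
	uint32_t t = __sync_fetch_and_add(&l->tail, 1);
	while (l->head != t)
		;
}
static void queued_unlock(raw_lock_t *l)
{
	__sync_synchronize();
	l->head++;
}

static const struct lock_ops tas_ops    = { tas_lock, tas_unlock };
static const struct lock_ops queued_ops = { queued_lock, queued_unlock };

/* Chosen once at boot; every out-of-line lock call indirects through it. */
static struct lock_ops ops;

void select_lock_type(int nr_sockets)
{
	ops = (nr_sockets >= 8) ? queued_ops : tas_ops;
}

void my_spin_lock(raw_lock_t *l)   { ops.lock(l); }
void my_spin_unlock(raw_lock_t *l) { ops.unlock(l); }

The extra cost is one indirect call through a pointer that is written
once at boot; since the calls are already out of line, that should be
in the noise.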

I would say the main drawback of both switchable and queued locks
is that they require a larger spinlock_t, and thus increase
cache usage.

For example, it would probably hurt for the large spinlock tables used
by TCP, but then those should be fixed to be smaller anyway.
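To put rough (purely invented) numbers on that, assuming one lock per
hash bucket and a queued lock that needs two 32-bit words instead of
one:

#include <stdio.h>
#include <stdint.h>

typedef uint32_t classic_lock_t;			/* one 32-bit word */
typedef struct { uint32_t head, tail; } queued_lock_t;	/* two words */

int main(void)
{
	size_t nbuckets = 1UL << 20;	/* 1M buckets, illustrative only */

	printf("classic locks: %zu KB\n",
	       nbuckets * sizeof(classic_lock_t) / 1024);
	printf("queued locks:  %zu KB\n",
	       nbuckets * sizeof(queued_lock_t) / 1024);
	return 0;
}

Doubling the per-bucket lock roughly doubles the lock footprint (an
extra 4 MB in this made-up example), which is why the size of those
tables matters independently of which lock type wins.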

-Andi

