ok. I'll really need to look at "vmstat" output from these. We could
easily make the mutex slowpath behave like ppc64 semaphores, via the
attached (untested) patch, but I really think it's the wrong thing to
do, because it overloads the system with runnable tasks in an
essentially unlimited fashion [== overscheduling] - they'll all contend
for the same single mutex.
In synthetic workloads on idle systems such overscheduling can help,
because the 'luck factor' of the 'thundering herd' of tasks can generate
a higher total throughput - at the expense of system efficiency. At 8
CPUs I already measured a net performance loss with just 3 tasks! So I
think the current 'at most 2 tasks runnable' approach of mutexes is the
right one on a broad range of hardware.
Still, I'll try a different patch tomorrow, to keep the number of 'in
flight' tasks within a certain limit (say, 2) - I suspect that would
close the performance gap on this test too.
The fundamental problem is that there is a significant latency between
waking somebody up and that task actually running and acquiring the lock.
In an ideal world there would always be exactly one waiter running and
trying to acquire the lock at the moment it is unlocked, and no waiters
running until then.
There are better solutions than just madly throwing more waiters in flight
on an unlock. Here are four possibilities off the top of my head:
1) It is possible to have a hybrid lock that spins a single waiting thread
and sleeps waiters 2..n, so there is always a waiter running and trying to
acquire the lock. It solves the latency problem if the lock is held at
least as long as it takes to wake up the next waiter, but the spinning
waiter burns some CPU to buy the decreased latency.
2) You could also do the classic spin-for-a-while-then-sleep method. This
essentially turns low-latency locks into spinlocks, but still sleeps on
locks which are held longer and/or are much more heavily contended (see
the sketch after this list).
3) There is the option of looking at how idle the current CPU is and
spinning or sleeping based on that.
4) Accept that we have a CPU-efficient, high-latency lock and use it
appropriately.
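To make option 2 concrete, here is a rough userspace sketch along the
lines of the classic futex lock from Ulrich Drepper's "Futexes Are
Tricky", with a bounded spin phase bolted on the front. The names and the
spin budget are arbitrary, and it is not meant as a drop-in for the kernel
mutex:

/*
 * Sketch of "spin for a while, then sleep".  Lock states:
 *   0 = unlocked, 1 = locked, no waiters, 2 = locked, maybe waiters.
 */
#include <linux/futex.h>
#include <stdatomic.h>
#include <sys/syscall.h>
#include <unistd.h>

#define SPIN_LIMIT 200		/* arbitrary spin budget */

static long futex(atomic_int *uaddr, int op, int val)
{
	return syscall(SYS_futex, uaddr, op, val, NULL, NULL, 0);
}

void hybrid_lock(atomic_int *lock)
{
	int c;

	/* phase 1: spin, hoping the owner releases the lock quickly */
	for (int i = 0; i < SPIN_LIMIT; i++) {
		c = 0;
		if (atomic_compare_exchange_strong(lock, &c, 1))
			return;
	}

	/* phase 2: mark the lock contended and sleep until woken */
	c = atomic_exchange(lock, 2);
	while (c != 0) {
		futex(lock, FUTEX_WAIT, 2);
		c = atomic_exchange(lock, 2);
	}
}

void hybrid_unlock(atomic_int *lock)
{
	/* if anyone may be sleeping (state 2), wake exactly one waiter */
	if (atomic_exchange(lock, 0) == 2)
		futex(lock, FUTEX_WAKE, 1);
}

The spin phase buys back the wakeup latency when the hold time is short,
and the futex sleep keeps long or heavily contended waits from burning
CPU; option 1 differs mainly in that exactly one waiter is allowed to stay
in the spin phase while the rest sleep.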
I'm not saying any of these four is what we should do. I'm just trying to
say there are options out there that don't involve thundering herds and
luck to address the problem.