Re: [BUG] long freezes on thinkpad t60

Linus Torvalds wrote:

> On Thu, 21 Jun 2007, Linus Torvalds wrote:
> > We don't do nesting locking either, for exactly the same reason. Are nesting locks "easier"? Absolutely. They are also almost always a sign of a *bug*. So making spinlocks and/or mutexes nest by default is just a way to encourage bad programming!
> 
> Side note, and as a "truth in advertising" section: I'll have to admit that I argued against fair semaphores on the same grounds. I was wrong then (and eventually admitted it, and we obviously try to make our mutexes and semaphores fair these days!), and maybe I'm wrong now.
> 
> If somebody can actually come up with a sequence where we have spinlock starvation, and it's not about an example of bad locking, and nobody really can come up with any other way to fix it, we may eventually have to add the notion of "fair spinlocks".


I tried to find such a sequence, but I think it's more a matter of hardware evolution and some degenerate cases.

In some years (months?), it might be possible to starve, say, the files_struct spinlock of a process with an open()/close() infinite loop, because the number of instructions executed per 'memory cache line transfer between cpus/cores' keeps rising.
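For concreteness, the degenerate case I have in mind is something like this user-space sketch (file name and thread count are arbitrary; the only point is that the threads share one files_struct, so every open()/close() fights for the same ->file_lock):

/* a few threads sharing one files_struct hammer open()/close() */
#include <fcntl.h>
#include <pthread.h>
#include <unistd.h>

static void *hammer(void *arg)
{
        (void)arg;
        for (;;) {
                int fd = open("/dev/null", O_RDONLY);   /* takes ->file_lock */

                if (fd >= 0)
                        close(fd);                      /* takes it again */
        }
        return NULL;
}

int main(void)
{
        pthread_t tid[4];
        int i;

        for (i = 0; i < 4; i++)
                pthread_create(&tid[i], NULL, hammer, NULL);
        pause();        /* let the threads spin forever */
        return 0;
}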

But then one could say it's a bug in user code :)

Another way to starve the kernel might be a loop doing settime(), since seqlocks are quite special in their serialization:

Only seqlock writers perform atomic ops; readers could be starved because of some hardware 'optimization'.
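To show the asymmetry, here is the usual seqlock pattern as a kernel-style sketch (names are made up, the shape mirrors the xtime_lock / settimeofday() path):

#include <linux/seqlock.h>
#include <linux/time.h>

static DEFINE_SEQLOCK(demo_lock);
static struct timespec demo_time;

/* writer side (the settime() path): takes the real lock, bumps the sequence */
static void demo_settime(const struct timespec *ts)
{
        write_seqlock(&demo_lock);
        demo_time = *ts;
        write_sequnlock(&demo_lock);
}

/* reader side: no atomic op at all, just retry while a writer interfered */
static void demo_gettime(struct timespec *ts)
{
        unsigned long seq;

        do {
                seq = read_seqbegin(&demo_lock);
                *ts = demo_time;
        } while (read_seqretry(&demo_lock, seq));
}

A writer hitting the lock often enough keeps the reader going around that retry loop.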


> So my arguments are purely pragmatic. It's not that I hate fairness per se. I dislike it only when it's used to "solve" (aka hide) other problems.
> 
> In the end, some situations do need fairness, and the fact that aiming for fairness is often harder, slower, and more complicated than not doing so at that point turns into a non-argument. If you need it, you need it.

Maybe some *big* NUMA machines really want this fairness (even if it costs some cycles, as pointed out by Davide in http://lkml.org/lkml/2007/3/29/246); I am just guessing, since I cannot test such monsters. I tested Davide's program on a dual Opteron and got some perf difference:


$ ./qspins  -n 2
now testing: TICKLOCK
timeres=4000
uscycles=1991
AVG[0]: 2195.250000 cycles/loop
SIG[0]: 11.813657
AVG[1]: 2212.312500 cycles/loop
SIG[1]: 38.038991

$ ./qspins  -n 2 -s
now testing: SPINLOCK
timeres=4000
uscycles=1991
AVG[0]: 2066.000000 cycles/loop
SIG[0]: 0.000000
AVG[1]: 2115.687500 cycles/loop
SIG[1]: 63.083000
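For reference, the fair variant the TICKLOCK case measures is essentially a ticket lock; a minimal user-space sketch (my own names, GCC builtins, no backoff, both fields start at 0) would be:

typedef struct {
        volatile unsigned int next;     /* next ticket to hand out */
        volatile unsigned int owner;    /* ticket currently being served */
} ticket_lock_t;

static void ticket_lock(ticket_lock_t *l)
{
        unsigned int me = __sync_fetch_and_add(&l->next, 1);

        while (l->owner != me)
                ;       /* spin until our ticket comes up */
}

static void ticket_unlock(ticket_lock_t *l)
{
        __sync_synchronize();   /* publish the critical section first */
        l->owner++;             /* hand the lock to the next waiter, FIFO */
}

Waiters are served strictly in arrival order instead of whoever happens to win the cache line, which is presumably where the small extra cost above comes from.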



> I just don't think we need it, and we're better off solving problems other ways.
> 
> (For example, we might also solve such problems by creating a separate
> "fair_spin_lock" abstraction, and only making the particular users that need it
> actually use it. It would depend a bit on whether the cost of implementing the
> fairness is noticeable enough for it to be worth having a separate construct
> for it).
> 
> 		Linus



