Re: pthread_mutex_unlock (was Re: sched_yield() makes OpenLDAP slow)

Howard Chu wrote:
Nick Piggin wrote:
Howard Chu wrote:


And again in this case, A should not be immediately reacquiring the lock if it doesn't actually need it.


No, not immediately, I said "for a very long time". As in: A does not
need the exclusion provided by the lock for a very long time so it
drops it to avoid needless contention, then reacquires it when it finally
does need the lock.


OK. I think this is really a separate situation. Just to recap: A takes lock, does some work, releases lock, a very long time passes, then A takes the lock again. In the "time passes" part, that mutex could be locked and unlocked any number of times by other threads and A won't know or care. Particularly on an SMP machine, other threads that were blocked on that mutex could do useful work in the interim without impacting A's progress at all. So here, when A leaves the mutex unlocked for a long time, it's desirable to give the mutex to one of the waiters ASAP.
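
In code, the pattern under discussion is roughly the following sketch (placeholder names only; do_unrelated_work() stands in for whatever A does during the "very long time"):

#include <pthread.h>

static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;

/* Placeholder for whatever A does during the "very long time". */
static void do_unrelated_work(void) { }

static void *thread_a(void *arg)
{
    (void)arg;

    pthread_mutex_lock(&m);
    /* ... touch shared state ... */
    pthread_mutex_unlock(&m);

    /* A needs no exclusion here; any blocked waiter can take the
       mutex any number of times without affecting A's progress. */
    do_unrelated_work();

    pthread_mutex_lock(&m);    /* reacquired only when exclusion is needed again */
    /* ... touch shared state ... */
    pthread_mutex_unlock(&m);
    return NULL;
}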


But how do you quantify "a long time"? And what happens if process A is
running at a very high priority, at which nothing else is allowed to run?

Just accept that my described scenario is legitimate, then consider it in
isolation rather than getting caught up in the superfluous details of how
such a situation might come about.


OK. I'm not trying to be difficult here. In much of life, context is everything; very little can be understood in isolation.


OK, but other valid examples were offered up - lock inversion avoidance,
and externally driven systems (i.e. where it is not known which lock will
be taken next).
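
As a sketch of the first of those, lock-ordering (inversion) avoidance by drop-and-reacquire might look like this (lock_a, lock_b and the ordering rule are hypothetical):

#include <pthread.h>

static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;

/* Hypothetical ordering rule: lock_a must always be taken before lock_b.
   A thread holding only lock_b that now needs both cannot take lock_a
   directly without risking deadlock, so it drops lock_b and reacquires
   the pair in order, i.e. an unlock followed almost immediately by a lock. */
static void take_both_while_holding_b(void)
{
    pthread_mutex_unlock(&lock_b);
    pthread_mutex_lock(&lock_a);
    pthread_mutex_lock(&lock_b);
    /* State guarded by lock_b may have changed while it was dropped
       and has to be revalidated here. */
}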

Back to the scenario:

A realtime system with tasks A and B, A has an RT scheduling priority of
1, and B is 2. A and B are both runnable, so A is running. A takes a mutex
then sleeps, B runs and ends up blocked on the mutex. A wakes up and at
some point it drops the mutex and then tries to take it again.

What happens?
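
For concreteness, the contested sequence reduces to roughly the following sketch (thread creation and SCHED_FIFO setup omitted; A is assumed to have been given the higher priority):

#include <pthread.h>
#include <unistd.h>

static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;

/* Task A: the higher-priority SCHED_FIFO thread in the scenario. */
static void *task_a(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&m);
    usleep(1000);               /* A sleeps holding the mutex; B blocks on it */
    pthread_mutex_unlock(&m);   /* the disputed point: is the mutex now B's... */
    pthread_mutex_lock(&m);     /* ...or does runnable, higher-priority A retake it? */
    pthread_mutex_unlock(&m);
    return NULL;
}

/* Task B: the lower-priority thread; it simply blocks on the mutex. */
static void *task_b(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&m);
    pthread_mutex_unlock(&m);
    return NULL;
}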


As I understand the spec, A must block because B has acquired the mutex. Once again, the SUS discussion of priority inheritance would never need to have been written if this were not the case:

 >>>
In a priority-driven environment, a direct use of traditional primitives like mutexes and condition variables can lead to unbounded priority inversion, where a higher priority thread can be blocked by a lower priority thread, or set of threads, for an unbounded duration of time. As a result, it becomes impossible to guarantee thread deadlines. Priority inversion can be bounded and minimized by the use of priority inheritance protocols. This allows thread deadlines to be guaranteed even in the presence of synchronization requirements.
<<<

The very first sentence indicates that a higher priority thread can be blocked by a lower priority thread. If your interpretation of the spec were correct, then such an instance would never occur. Since your

Wrong. It will obviously occur if the lower priority process is able
to take a lock before a higher priority process.

The situation will not exist in "the scenario" though, if we follow
my reading of the spec, because *the scheduler* determines the next
process to gain the mutex. This makes perfect sense to me.

scenario is using realtime threads, then we can assume that the Priority Ceiling feature is present and you can use it if needed. ( http://www.opengroup.org/onlinepubs/000095399/xrat/xsh_chap02.html#tag_03_02_09_06 Realtime Threads option group )
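
For reference, using that feature is a mutex-attribute setting; a minimal sketch, assuming the Thread Priority Protection option is available (PTHREAD_PRIO_INHERIT would request priority inheritance instead):

#include <pthread.h>

static pthread_mutex_t m;

/* Sketch of requesting the priority-ceiling protocol on a mutex.
   Return values are ignored for brevity; the ceiling value passed in
   must be a valid priority for the relevant scheduling policy. */
static void init_ceiling_mutex(int ceiling)
{
    pthread_mutexattr_t attr;

    pthread_mutexattr_init(&attr);
    pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_PROTECT);
    pthread_mutexattr_setprioceiling(&attr, ceiling);
    pthread_mutex_init(&m, &attr);
    pthread_mutexattr_destroy(&attr);
}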


Any kind of priority boost / inheritance like this is orthogonal to
the issue. It still does not prevent B from acquiring the mutex and
thereby blocking the execution of the higher priority A. I think this
is against the spirit of the spec, especially the part where it says
*the scheduler* will choose which process gains the lock.

--
SUSE Labs, Novell Inc.
