Ingo Molnar wrote:
>
> +__mutex_lock_common(struct mutex *lock, struct mutex_waiter *waiter,
> + struct thread_info *ti,
> + unsigned long task_state __IP_DECL__)
> +{
> + struct task_struct *task = ti->task;
> + unsigned int old_val;
> +
> + /*
> + * Let's try to take the lock again - this is needed even if
> + * we get here for the first time (shortly after failing to
> + * acquire the lock), to make sure that we get a wakeup once
> + * it's unlocked. Later on this is the operation that gives
> + * us the lock. If there are other waiters we need to xchg it
> + * to -1, so that when we release the lock, we properly wake
> + * up the other waiters:
> + */
> + old_val = atomic_xchg(&lock->count, -1);
> +
> + if (unlikely(old_val == 1)) {
> + /*
> + * Got the lock - rejoice! But there's one small
> + * detail to fix up: above we have set the lock to -1,
> + * unconditionally. But what if there are no waiters?
> + * While it would work with -1 too, 0 is a better value
> + * in that case, because we won't hit the slowpath when
> + * we release the lock. We can simply use atomic_set()
> + * for this, because we are the owners of the lock now,
> + * and are still holding the wait_lock:
> + */
> + if (likely(list_empty(&lock->wait_list)))
> + atomic_set(&lock->count, 0);
This is a minor issue, but I still think it makes sense to optimize
for the uncontended case:
	old_val = atomic_xchg(&lock->count, 0);	/* assume no sleepers */

	if (old_val == 1) {
		/* any sleepers? */
		if (!list_empty(&lock->wait_list))
			/* need to wake them up */
			atomic_set(&lock->count, -1);
		...
	}
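Either way, the reason 0 is preferable to -1 when there are no waiters is
the release path: it should only drop into the slowpath when the count says
sleepers were recorded. A minimal sketch of that, assuming the 1/0/-1
convention from the comment above and a hypothetical __mutex_unlock_slowpath()
helper (this is a simplified illustration, not the actual per-arch fastpath):

	/*
	 * Simplified illustration only:
	 *   1 == unlocked, 0 == locked/no waiters, -1 == locked/waiters.
	 */
	static inline void sketch_mutex_unlock(struct mutex *lock)
	{
		/*
		 *  0 -> 1: uncontended release, nothing else to do.
		 * -1 -> 0: waiters were recorded, so the result is <= 0
		 *          and we must take the slowpath to wake one up.
		 */
		if (unlikely(atomic_inc_return(&lock->count) <= 0))
			__mutex_unlock_slowpath(lock);
	}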
Oleg.