On Thu, Feb 01, 2007 at 01:03:09AM +0100, Peter Zijlstra wrote:
> On Wed, 2007-01-31 at 15:32 -0800, Paul E. McKenney wrote:
>
> > The wakeup in barrier_sync() would mean that the counter was zero
> > at some point in the past. The counter would then be rechecked, and
> > if it were still zero, barrier_sync() would invoke finish_wait() and
> > then return -- but the counter might well become non-zero in the
> > meantime, right?
> >
> > So given that barrier_sync() is permitted to return after the counter
> > becomes non-zero, why can't it just rely on the fact that barrier_unlock()
> > saw it as zero not long in the past?
> >
> > > > It looks like barrier_sync() is more a
> > > > rw semaphore biased to readers.
> > >
> > > Indeed, the locked sections are designed to be the rare case.
> >
> > OK -- but barrier_sync() just waits for readers, it doesn't exclude them.
> >
> > If all barrier_sync() needs to do is to wait until all pre-existing
> > barrier_lock()/barrier_unlock() pairs to complete, it seems to me to
> > be compatible with qrcu's semantics.
> >
> > So what am I missing here?
>
> I might be the one missing stuff, I'll have a hard look at qrcu.
>
> The intent was that barrier_sync() would not write to memory when there
> are no active locked sections, so that the cacheline can stay shared,
> thus keeping it fast.
>
> If qrcu does exactly this, then yes we have a match.
QRCU as currently written (http://lkml.org/lkml/2006/11/29/330) doesn't
do what you want, as it acquires the lock unconditionally. I am proposing
that synchronize_qrcu() change to something like the following:
void synchronize_qrcu(struct qrcu_struct *qp)
{
	int idx;

	smp_mb();

	/*
	 * Fastpath: if the sum of the two counters is at most one
	 * (the bias held by the current counter), there can be no
	 * readers in pre-existing read-side critical sections, so
	 * we need not write to shared memory at all.
	 */
	if (atomic_read(&qp->ctr[0]) + atomic_read(&qp->ctr[1]) <= 1) {
		smp_rmb();
		if (atomic_read(&qp->ctr[0]) +
		    atomic_read(&qp->ctr[1]) <= 1)
			goto out;
	}

	mutex_lock(&qp->mutex);
	idx = qp->completed & 0x1;
	atomic_inc(qp->ctr + (idx ^ 0x1));
	/* Reduce the likelihood that qrcu_read_lock() will loop */
	smp_mb__after_atomic_inc();
	qp->completed++;
	atomic_dec(qp->ctr + idx);
	__wait_event(qp->wq, !atomic_read(qp->ctr + idx));
	mutex_unlock(&qp->mutex);
out:
	smp_mb();
}
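
For reference, the data structure and read-side primitives from that
posting look roughly like the following (quoting from memory, so please
check the patch itself):

	struct qrcu_struct {
		int completed;
		atomic_t ctr[2];
		wait_queue_head_t wq;
		struct mutex mutex;
	};

	int qrcu_read_lock(struct qrcu_struct *qp)
	{
		for (;;) {
			int idx = qp->completed & 0x1;
			if (likely(atomic_inc_not_zero(qp->ctr + idx)))
				return idx;
		}
	}

	void qrcu_read_unlock(struct qrcu_struct *qp, int idx)
	{
		if (atomic_dec_and_test(qp->ctr + idx))
			wake_up(&qp->wq);
	}

The current counter always holds a bias of one, so qrcu_read_lock()
can only succeed on a counter that synchronize_qrcu() is not already
waiting out.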
For the first "if" to give a false positive, a concurrent counter switch
must have occurred. For example, qp->ctr[0] was zero and qp->ctr[1]
was two at the time of the first atomic_read(), but then qp->completed
switched so that both qp->ctr[0] and qp->ctr[1] were one at the time
of the second atomic_read(). The only way the second "if" can give a
false positive is if there was another change to qp->completed in the
meantime -- but that would mean that all of the pre-existing
qrcu_read_lock() holders had completed, since otherwise the second
switch could not have happened.

Yes, you do incur three memory barriers on the fast path, but the best
you could hope for with your approach is two of them (unless I am
confused about how you were using barrier_sync()).
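
To make the first scenario concrete, here is one possible interleaving,
with one reader active on ctr[1] throughout (the CPU numbering is of
course arbitrary):

	CPU 0 (synchronize_qrcu)	CPU 1 (another synchronize_qrcu)
	------------------------	--------------------------------
	reads ctr[0] == 0
					atomic_inc: ctr[0] 0 -> 1
					qp->completed++
					atomic_dec: ctr[1] 2 -> 1
	reads ctr[1] == 1
	sum == 1, false positive
	smp_rmb()
	re-reads ctr[0] == 1, ctr[1] == 1
	sum == 2, takes the slow path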
Oleg, does this look safe?
Ugly at best, I know, but I do very much sympathize with Christoph's
desire to keep the number of synchronization primitives down to a
dull roar. ;-)
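
In case it helps, here is roughly how I would expect your primitives to
map onto QRCU (hypothetical qrcu_struct instance, just a sketch):

	static struct qrcu_struct barrier_qrcu;	/* hypothetical instance */
	int idx;

	/* barrier_lock() / barrier_unlock(): */
	idx = qrcu_read_lock(&barrier_qrcu);
	/* ... locked section ... */
	qrcu_read_unlock(&barrier_qrcu, idx);

	/* barrier_sync(): wait out all pre-existing locked sections */
	synchronize_qrcu(&barrier_qrcu);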
Thanx, Paul