Re: [PATCH] make cancel_rearming_delayed_work() reliable

Hello, Oleg.

Oleg Nesterov wrote:
>>> Yes? I need to think more about this.
>> I don't think the test for PENDING needs to be atomic either.  The cwq
>> pointer and VALID are one package.  PENDING lives its own life as an
>> atomic bit switch.
> 
> Yes, sure, it should not be atomic.  But (VALID && !PENDING) == BUG, so we
> can't just "kill" PENDING from the check above.

We can also mask it out, which is about the same as what we're currently
doing.  :-)
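
IOW, something like the following (a sketch; the names assume the
current layout where the flag bits live in the low bits of work->data,
so treat the exact mask name as an assumption on my part):

/*
 * Read work->data once and mask the flag bits out, so the check sees
 * only the cwq pointer (+ VALID) and ignores whatever PENDING is
 * doing at that moment.
 */
static inline struct cpu_workqueue_struct *
get_wq_data(struct work_struct *work)
{
	return (void *)(atomic_long_read(&work->data) &
			WORK_STRUCT_WQ_DATA_MASK);
}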

>>>      Memory operations that occur before a LOCK operation may appear to happen
>>>      after it completes.
>> Which means that spin_lock() isn't a write barrier.
> 
> yes, it is not very clear which "Memory operations" memory-barriers.txt
> describes.
> 
>>                                                       lock is read
>> barrier, unlock is write barrier.
> 
> (To avoid a possible confusion: I am not arguing, I am trying to understand,
>  and please also note "in _theory_" above)
> 
> Is it so? Shouldn't we document this if it is true?
> 
>>                                    Otherwise, locking doesn't make much
>> sense.
> 
> Why? Could you please give a code example we have which relies on this?

Let's say there's a shared data structure protected by a spinlock, with
two threads accessing it:

1. thr1 locks spin
2. thr1 updates data structure
3. thr1 unlocks spin
4. thr2 locks spin
5. thr2 accesses data structure
6. thr2 unlocks spin

If spin_unlock() is not a write barrier and spin_lock() is not a read
barrier, nothing guarantees that the memory accesses in step #5 will see
the changes made in step #2.  The fetch could occur while the updates in
step #2 are still in progress, or even before they start.
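
In userspace terms, the same scenario is (a minimal pthread sketch; the
mutex stands in for the spinlock, and the acquire/release reasoning is
identical):

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int shared_data;

static void *thr1_fn(void *arg)
{
	pthread_mutex_lock(&lock);	/* step 1 */
	shared_data = 42;		/* step 2: update */
	pthread_mutex_unlock(&lock);	/* step 3: release publishes the store */
	return NULL;
}

static void *thr2_fn(void *arg)
{
	int val;

	pthread_mutex_lock(&lock);	/* step 4: acquire */
	val = shared_data;		/* step 5: must see 42 if step 3 ran first */
	pthread_mutex_unlock(&lock);	/* step 6 */
	printf("thr2 read %d\n", val);
	return NULL;
}

int main(void)
{
	pthread_t t1, t2;

	pthread_create(&t1, NULL, thr1_fn, NULL);
	pthread_create(&t2, NULL, thr2_fn, NULL);
	pthread_join(t1, NULL);
	pthread_join(t2, NULL);
	return 0;
}

If unlock didn't have release semantics and lock didn't have acquire
semantics, thr2 could read a stale value in step #5 even though it took
the lock after thr1 released it.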

In _theory_, if you had, say, a partial/blocked memory barrier which
acts as a barrier only for a certain range of memory accesses, in this
case the accesses made while holding the spinlock, then lock and unlock
wouldn't have to be real memory barriers; but AFAIK no such thing
exists.  I don't think it's practical to worry about that at this
point.  It's way too theoretical.  But, yes, it sure needs
documentation.

>>     If we're going the barrier way, I think we're better off with
>> explicit smp_wmb().  It's only barrier() on x86/64.
> 
> Yes. But note that we don't have any reason to do set_wq_data() under
> cwq->lock (this is also true for wake_up(->more_work) btw), so it makes
> sense to do this change anyway.  And "wmb + spin_lock" looks a bit
> strange; I _suspect_ spin_lock() means a full barrier on most platforms.
> 
> Could you also look at
> 	http://marc.info/?t=116275561700001&r=1
> 
> and, in particular,
> 	http://marc.info/?l=linux-kernel&m=116281136122456

This is because spin_lock() isn't a write barrier, right?  I totally
agree with you there.
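
FWIW, the queueing path you're describing would look something like the
sketch below (field names taken from the current cpu_workqueue_struct;
whether the explicit smp_wmb() is really needed, or spin_lock() already
implies enough, is exactly the open question):

static void insert_work_sketch(struct cpu_workqueue_struct *cwq,
			       struct work_struct *work)
{
	/* Publish the cwq pointer (+ VALID) outside cwq->lock... */
	set_wq_data(work, cwq);
	/*
	 * ...and order it before the work becomes findable, instead of
	 * relying on spin_lock() being a write barrier (which it isn't
	 * guaranteed to be).
	 */
	smp_wmb();

	spin_lock_irq(&cwq->lock);
	list_add_tail(&work->entry, &cwq->worklist);
	spin_unlock_irq(&cwq->lock);

	/* as you note, this doesn't need the lock either */
	wake_up(&cwq->more_work);
}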

Thanks.

-- 
tejun