On Tue, 27 Dec 2005, Ingo Molnar wrote:
>
> * Arjan van de Ven <[email protected]> wrote:
>
> > > + * 1) if the exclusive store fails we fail, and
> > > + *
> > > + * 2) if the decremented value is not zero we don't even attempt the store.
> >
> >
> > BTW, I really think that 1) is wrong. A trylock should do everything
> > it can to get the semaphore short of sleeping. Just because some
> > cache line got written to (which might even be shared!) in the middle
> > of the atomic op is not a good enough reason to fail the trylock,
> > IMHO. Going into the slowpath... fine. But here it's a quality of
> > implementation issue; you COULD get the semaphore without sleeping
> > (at least probably; you'd have to retry to know for sure), but
> > because something wrote to the same cache line as the lock... no,
> > that's just not good enough, sorry.
>
> Good point. I solved this in my tree by calling the generic trylock fn
> if there's an __ex_flag failure in the ARMv6 case. That failure should
> be rare (hence the call is under unlikely()), so the fast
> implementation still applies in the common case.
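
For reference, here is the shape of the code under discussion, reconstructed as a sketch from the hunk the patch below removes. One change from the removed code: __ex_flag is initialised to 0 (with the matching "+&r" constraint) so the sketch stays well-defined when strexeq never executes.

/*
 * Sketch of the single-attempt trylock with the generic-trylock
 * fallback Ingo describes above. Reconstructed from the hunk removed
 * by the patch below; the __ex_flag initialisation and the "+&r"
 * constraint are additions for this sketch.
 */
static inline int
__mutex_fastpath_trylock(atomic_t *count, int (*fn_name)(atomic_t *))
{
	int __res, __ex_flag = 0;

	__asm__ (
		"ldrex		%0, [%2]	\n"	/* __res = count */
		"subs		%0, %0, #1	\n"	/* decrement, set flags */
		"strexeq	%1, %0, [%2]	\n"	/* store only if result is 0 */
		: "=&r" (__res), "+&r" (__ex_flag)
		: "r" (&count->counter)
		: "cc", "memory" );

	/*
	 * The exclusive store failed (perhaps spuriously): rather than
	 * report a synthetic failure, retry via the generic trylock.
	 */
	if (unlikely(__ex_flag))
		return fn_name(count);

	return __res == 0;
}
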
I'd solve it like this instead (on top of your latest patches):
Index: linux-2.6/include/asm-arm/mutex.h
===================================================================
--- linux-2.6.orig/include/asm-arm/mutex.h
+++ linux-2.6/include/asm-arm/mutex.h
@@ -110,12 +110,7 @@ do { \
 
 /*
  * For __mutex_fastpath_trylock we use another construct which could be
- * described as an "incomplete atomic decrement" or a "single value cmpxchg"
- * since it has two modes of failure:
- *
- * 1) if the exclusive store fails we fail, and
- *
- * 2) if the decremented value is not zero we don't even attempt the store.
+ * described as a "single value cmpxchg".
  *
  * This provides the needed trylock semantics like cmpxchg would, but it is
  * lighter and less generic than a true cmpxchg implementation.
@@ -123,27 +118,22 @@ do { \
 static inline int
 __mutex_fastpath_trylock(atomic_t *count, int (*fn_name)(atomic_t *))
 {
-	int __ex_flag, __res;
+	int __ex_flag, __res, __orig;
 
 	__asm__ (
 
-		"ldrex		%0, [%2]	\n"
-		"subs		%0, %0, #1	\n"
-		"strexeq	%1, %0, [%2]	\n"
+		"1: ldrex	%0, [%3]	\n"
+		"subs		%1, %0, #1	\n"
+		"strexeq	%2, %1, [%3]	\n"
+		"movlt		%0, #0		\n"
+		"cmpeq		%2, #0		\n"
+		"bgt		1b		\n"
 
-		: "=&r" (__res), "=&r" (__ex_flag)
+		: "=&r" (__orig), "=&r" (__res), "=&r" (__ex_flag)
 		: "r" (&count->counter)
 		: "cc", "memory" );
 
-	/*
-	 * We must not return a synthetic 'failure' if the conditional
-	 * did not succeed - drop back into the generic slowpath if
-	 * this happens (should be rare):
-	 */
-	if (unlikely(__ex_flag))
-		return fn_name(count);
-
-	return __res == 0;
+	return __orig;
 }
 
 #endif
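
A note on how the new loop behaves: ldrex loads the count into __orig; subs computes __orig - 1 into __res and sets the condition flags; strexeq attempts the store only when the decremented value is zero; movlt forces the return value to 0 when the count was already zero or negative (locked); and cmpeq/bgt branch back to 1b only when the exclusive store failed, so a spurious strex failure is retried instead of being reported as a trylock failure. On success __orig is 1, so the function returns nonzero. For the values a mutex count actually takes (1 unlocked, <= 0 locked), this is the "single value cmpxchg" the comment names. A C-level sketch of the same semantics, assuming the generic atomic_cmpxchg() is available on the architecture:

#include <asm/atomic.h>

/*
 * Illustration only, not part of the patch: the trylock semantics
 * the loop above implements, expressed with the generic
 * atomic_cmpxchg(). The asm version exists precisely because the
 * generic primitive is heavier.
 */
static inline int __mutex_trylock_sketch(atomic_t *count)
{
	/* succeed only by moving the count from 1 (unlocked) to
	 * 0 (locked); atomic_cmpxchg() returns the old value */
	return atomic_cmpxchg(count, 1, 0) == 1;
}
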
Nicolas