Re: [RFC] RT-patch update to remove the global pi_lock

On Mon, 2005-08-22 at 15:19 -0700, Daniel Walker wrote:
> On Mon, 2005-08-22 at 15:44 -0400, Steven Rostedt wrote:
> > On Mon, 2005-08-22 at 20:33 +0200, Ingo Molnar wrote:
> > 
> > > any ideas how to get rid of pi_lock altogether?
> > 
> > I've toyed with the idea of adding another raw_spin_lock to the mutex: a
> > lock-specific pi_lock.  Instead of grabbing a global pi_lock, grab the
> > pi_lock of a lock.  To modify any lock w.r.t. PI, you must first grab
> > the pi_locks of all the locks being referenced.
> 
> Are you saying that you want to convert the current system to lock all
> the pi_locks for all the locks in the sequence?
> 
> It seems like you could make it a per-task lock, then only lock the
> task's pi_lock for PI operations.

How would you modify a lock's PI state while holding only a per-task
lock?  When you are grabbing a lock, you must first take a raw lock
associated with the lock being grabbed.  That said, I'm starting to look
into this idea, and I'm going to first see if the current wait_lock
could suffice.  I may also need to add an additional lock to the task to
follow the lock -> task -> lock route.  The tasks' order should match
the locks' order while they hold a lock, since a task can't release a
lock without holding the raw lock of the lock it is releasing.
(Boy, this gets confusing to talk about, since you need to talk about
locks and the locking method within the lock!)
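
To make the lock -> task -> lock walk concrete, here is a minimal,
self-contained sketch of boosting priority down a blocking chain like
the P1 -> L1 -> P2 -> L2 one quoted below, taking only each lock's
wait_lock hand-over-hand in chain order.  This is not the actual -rt
code: all structures and names are stand-ins, and it assumes the
per-lock wait_lock is enough to stabilize owner and blocked_on, which
is exactly the open question above.

/* Stand-ins so the sketch compiles outside the kernel; in-tree these
 * would be raw_spinlock_t with spin_lock()/spin_unlock(). */
typedef struct { volatile int locked; } raw_spinlock_t;

static void raw_spin_lock(raw_spinlock_t *l)
{
	while (__sync_lock_test_and_set(&l->locked, 1))
		;	/* spin until the previous holder releases */
}

static void raw_spin_unlock(raw_spinlock_t *l)
{
	__sync_lock_release(&l->locked);
}

struct rt_mutex;

struct task {
	int              prio;		/* lower value = higher priority */
	struct rt_mutex *blocked_on;	/* lock this task is waiting for */
};

struct rt_mutex {
	raw_spinlock_t   wait_lock;	/* per-lock raw lock */
	struct task     *owner;
};

/* Propagate a waiter's priority down the lock -> task -> lock chain. */
static void pi_chain_walk(struct rt_mutex *lock, int prio)
{
	raw_spin_lock(&lock->wait_lock);

	for (;;) {
		/* Assumption: wait_lock stabilizes owner and the owner's
		 * blocked_on; the real code may need a per-task lock too. */
		struct task *owner = lock->owner;
		struct rt_mutex *next;

		/* Done if the lock is free or the owner needs no boost. */
		if (!owner || owner->prio <= prio)
			break;

		owner->prio = prio;		/* boost the owner */
		next = owner->blocked_on;
		if (!next)
			break;

		/*
		 * Hand-over-hand: take the next wait_lock before dropping
		 * the current one.  Since the chain follows the kernel's
		 * own lock ordering (see the proof quoted below), this
		 * nesting cannot deadlock.
		 */
		raw_spin_lock(&next->wait_lock);
		raw_spin_unlock(&lock->wait_lock);
		lock = next;
	}

	raw_spin_unlock(&lock->wait_lock);
}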

> 
> > The idea stems from the fact that the kernel must order its taking of
> > locks to prevent deadlocks.  This way, the locks in such a chain are
> > also always taken in the same order.
> > 
> > So if you have the following case:
> > 
> > P1 blocked_on L1 owned_by P2 blocked_on L2 owned_by P3 ...
> > 
> > Then L1, L2, L3, ... must always be taken in the same order;
> > otherwise the kernel itself could deadlock.
> > 
> > OK, let me prove this (for myself as well ;-)
> > 
> > Let's go by contradiction.
> 
> The proof seems straightforward enough.
> 
> One downside would be an increase in mutex structure size though.

If I do need to add an additional lock to the mutex, I would abstract it
all, so that the old global pi_lock can still be used if configured.
That way, a UP or low-memory 2x SMP machine can keep the old method, but
when it needs to scale, switch over to the new non-global pi_locking
method.  But maybe I can still get away with just using the wait_lock
and not add any size overhead to the mutex.
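
Hypothetically, that configured abstraction could be little more than a
helper that picks which raw lock to take.  CONFIG_RT_GLOBAL_PI_LOCK and
pi_lock_of() below are invented names just to show the shape, reusing
the stand-in types from the sketch above:

/* Build-time choice: the old global pi_lock, or the per-lock
 * wait_lock.  Callers only ever use pi_lock()/pi_unlock(). */
#ifdef CONFIG_RT_GLOBAL_PI_LOCK
static raw_spinlock_t global_pi_lock;
#define pi_lock_of(lock)	(&global_pi_lock)
#else
#define pi_lock_of(lock)	(&(lock)->wait_lock)
#endif

static void pi_lock(struct rt_mutex *lock)
{
	raw_spin_lock(pi_lock_of(lock));
}

static void pi_unlock(struct rt_mutex *lock)
{
	raw_spin_unlock(pi_lock_of(lock));
}

On UP, the global flavor degenerates to a single raw lock with no size
cost per mutex, which is the point of keeping it configurable.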

I've just started to experiment with this idea, so I really don't know
yet how it will work.

-- Steve


