Re: [PATCH 1b/7] dlm: core locking

On Mon, Apr 25, 2005 at 02:54:58PM -0700, Steven Dake wrote:
> On Mon, 2005-04-25 at 09:58, David Teigland wrote:
> > The core dlm functions.  Processes dlm_lock() and dlm_unlock() requests.
> > Creates lockspaces which give applications separate contexts/namespaces in
> > which to do their locking.  Manages locks on resources' grant/convert/wait
> > queues.  Sends and receives high level locking operations between nodes.
> > Delivers completion and blocking callbacks (ast's) to lock holders.
> > Manages the distributed directory that tracks the current master node for
> > each resource.
> > 
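
To give a feel for the interface described above, a kernel-side caller
looks roughly like the sketch below.  This is illustrative only; the
prototypes are written from memory of dlm.h, so treat the exact
signatures as approximate rather than authoritative.

#include <linux/module.h>
#include <linux/completion.h>
#include <linux/dlm.h>

static struct dlm_lksb lksb;
static DECLARE_COMPLETION(granted);

/* Completion ast: the dlm calls this when the request finishes;
 * the result is delivered in lksb.sb_status. */
static void lock_ast(void *astarg)
{
	complete(&granted);
}

/* "ls" would come from dlm_new_lockspace().  Takes an exclusive
 * lock on the resource named "res1", then drops it. */
static int grab_lock(dlm_lockspace_t *ls)
{
	int error;

	error = dlm_lock(ls, DLM_LOCK_EX, &lksb, 0, (void *)"res1", 4,
			 0, lock_ast, NULL, NULL);
	if (error)
		return error;

	wait_for_completion(&granted);
	if (lksb.sb_status)
		return lksb.sb_status;

	/* ... critical section ... */

	/* The unlock result also arrives through lock_ast (omitted). */
	return dlm_unlock(ls, lksb.sb_lkid, 0, &lksb, NULL);
}
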
> 
> David
> 
> Very positive that there are some submissions relating to cluster kernel
> work for lkml to review.  Good job.
> 
> I have some questions on the implementation:
> 
> It appears as though a particular processor is identified as the "lock
> master" or processor that maintains the state of the lock.  So for
> example, if a processor wants to acquire a lock, it sends a request to
> the lock master, which either grants or rejects the request for the
> lock.  What happens in the scenario that a lock master leaves the
> current configuration?  This scenario is very likely in practice.

Of course, every time a node fails.

> How do you synchronize the membership events that occur with the kernel
> to kernel communication that takes place using SCTP?

SCTP isn't much different from TCP, so I'm not sure how that's relevant.
It's used primarily so we can take advantage of multi-homing when you have
redundant networks.
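
To make the multi-homing point concrete, here's a small userspace sketch
using lksctp-tools: one TCP-style SCTP socket bound to local addresses on
two networks, so the association can fail over between them.  The
addresses and port below are placeholders, not anything the dlm uses.

/* Build with: gcc sketch.c -lsctp */
#include <stdio.h>
#include <string.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <netinet/sctp.h>
#include <sys/socket.h>

int main(void)
{
	struct sockaddr_in addrs[2];
	int sd = socket(AF_INET, SOCK_STREAM, IPPROTO_SCTP);

	if (sd < 0) {
		perror("socket");
		return 1;
	}

	memset(addrs, 0, sizeof(addrs));
	addrs[0].sin_family = AF_INET;
	addrs[0].sin_port = htons(12345);	/* placeholder port */
	inet_pton(AF_INET, "10.0.0.1", &addrs[0].sin_addr);	/* net A */
	addrs[1].sin_family = AF_INET;
	addrs[1].sin_port = htons(12345);
	inet_pton(AF_INET, "10.1.0.1", &addrs[1].sin_addr);	/* net B */

	/* Bind both local addresses; if one network goes down, the
	 * association keeps running over the other. */
	if (sctp_bindx(sd, (struct sockaddr *)addrs, 2,
		       SCTP_BINDX_ADD_ADDR) < 0) {
		perror("sctp_bindx");
		return 1;
	}
	return 0;
}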

When the membership of a lockspace needs to change, whether adding or
removing a node, activity is suspended in that lockspace on all the nodes
using it.  After all are suspended, the lockspace is then told (on all
lockspace members) what the new membership is.  Recovery then takes place:
new masters are selected and waiting requests redirected.
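
In outline, the sequence is something like the sketch below.  This is
illustrative only -- the names are made up, and each step is stubbed out
to print what the real code would be doing.

#include <stdio.h>

struct lockspace { const char *name; };

static void suspend_lockspace(struct lockspace *ls)
{
	printf("%s: suspend locking activity on every node\n", ls->name);
}

static void set_members(struct lockspace *ls, const int *nodes, int n)
{
	printf("%s: new membership:", ls->name);
	for (int i = 0; i < n; i++)
		printf(" node%d", nodes[i]);
	printf("\n");
}

static void recover(struct lockspace *ls)
{
	printf("%s: select new masters for orphaned resources\n", ls->name);
	printf("%s: redirect requests waiting on departed masters\n", ls->name);
}

static void resume_lockspace(struct lockspace *ls)
{
	printf("%s: resume normal operation\n", ls->name);
}

int main(void)
{
	struct lockspace ls = { "example-ls" };
	int members[] = { 1, 2, 4 };	/* node 3 has failed */

	suspend_lockspace(&ls);		/* 1. all nodes quiesce */
	set_members(&ls, members, 3);	/* 2. agreed membership delivered */
	recover(&ls);			/* 3. remaster and redirect */
	resume_lockspace(&ls);		/* 4. back to normal operation */
	return 0;
}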


> It appears from your patches there is some external (userland)
> application that maintains the current list of processors that qualify
> as "lock servers".  

correct

> Is there then a dependence on external membership algorithms?

We simply require that the membership system is in agreement before the
lockspace is told what the new members are.  The membership system
ultimately drives the lockspace membership and we can't have the
membership system on different nodes telling the dlm different stories
about who's in/out.

So, yes, the membership system ultimately needs to follow some algorithm
that guarantees agreement.  There are rigorous, distributed ways of doing
that (your evs work, which I look forward to using), and simpler methods,
e.g. driving it from some single point of control.
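
As a toy illustration of the simple end of that spectrum: a single
coordinator numbers each membership change and delivers the identical
list to every node, so no two nodes can tell the dlm different stories.
Entirely made-up code, not anything from the patch.

#include <stdio.h>

#define NODES 3

/* Each node applies membership updates only in sequence order, so
 * every node sees the same history of member lists. */
struct node {
	int id;
	int seq;	/* last membership sequence applied */
};

static void deliver(struct node *n, int seq, const int *members, int count)
{
	if (seq != n->seq + 1) {
		printf("node%d: reject out-of-order update %d\n", n->id, seq);
		return;
	}
	n->seq = seq;
	printf("node%d: membership %d =", n->id, seq);
	for (int i = 0; i < count; i++)
		printf(" node%d", members[i]);
	printf("\n");
}

int main(void)
{
	struct node nodes[NODES] = { {1, 0}, {2, 0}, {3, 0} };
	int members[] = { 1, 2, 3 };

	/* Single point of control: the coordinator sends the same
	 * numbered list to every node. */
	for (int i = 0; i < NODES; i++)
		deliver(&nodes[i], 1, members, 3);
	return 0;
}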


> What user application today works to configure the dlm services in the
> posted patch?

I've been using the command-line program "dlm_tool", where I act as the
membership system myself.
drive this from a membership system (like openais).  Again, the pieces you
decide to use in userspace are flexible and depend on how you want to use
the dlm.


> With usage of SCTP protocol, there is now some idea of moving the
> protocol for cluster communication into the kernel and using SCTP as
> that protocol...

Neither SCTP nor the dlm is about cluster communication; they're both
about simple point-to-point messages.  When you move up to userspace and
start talking about membership, then the issue of group communication
models comes up, and your openais/evs work is very relevant.  Might you be
misled about what SCTP does?

Dave
