Re: workqueue deadlock

* Andrew Morton <[email protected]> wrote:

> > > > not a naked preempt_disable()! The concurrency rules for data 
> > > > structures changed via preempt_disable() are quite hard to sort out 
> > > > after the fact. (preempt_disable() is too opaque,
> > > 
> > > preempt_disable() is the preferred way of holding off cpu hotplug.
> > 
> > well, preempt_disable() is the scheduler's internal mechanism to keep 
> > tasks from being preempted. It is fast, but it also has a non-nice 
> > side-effect:
> > 
> > 1) nothing tells us what the connection between preempt-disable and the 
> >    data structure is. If we let preempt_disable() spread, then we'll end 
> >    up with a situation like the BKL: all preempt_disable() sections 
> >    become one big blob of code with hard-to-define semantics, and if we 
> >    take code out of that blob, stuff mysteriously breaks.
> 
> Well, we can add a suitably-named wrapper around preempt_disable() 
> to make it obvious why we're calling it.  But I haven't noticed any 
> such problem with existing usages.

ok, as long as there's a separate API for it (which just maps to 
preempt_disable(), or whatever), i'm fine with it. The complications 
with preempt_disable()/preempt_enable() and local_irq_disable()/enable() 
show up when someone tries to implement something like a fully 
preemptible kernel :-)
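
(for illustration, such a wrapper could be as thin as the sketch below - 
cpu_hotplug_disable()/cpu_hotplug_enable() are hypothetical names used 
here for the sake of the argument, not an existing API:)

	/*
	 * hypothetical named wrapper: it documents _why_ preemption is
	 * disabled - we rely on preempt_disable() keeping this CPU from
	 * being hot-unplugged under us.
	 */
	static inline void cpu_hotplug_disable(void)
	{
		preempt_disable();
	}

	static inline void cpu_hotplug_enable(void)
	{
		preempt_enable();
	}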

it was quite some work to sort the irqs on/off + per-cpu assumptions out 
in the slab allocator and in the page allocator:

$ diffstat patches/rt-slab.patch
 mm/slab.c |  460 +++++++++++++++++++++++++++++++++++++++++------------------------
 1 file changed, 296 insertions(+), 164 deletions(-)

$ diffstat patches/rt-page_alloc.patch
 mm/page_alloc.c |  125 ++++++++++++++++++++++++++++++++++++++++++-----------------
 1 file changed, 90 insertions(+), 35 deletions(-)

> > 	void cpu_hotplug_lock(void)
> > 	{
> > 		int cpu = raw_smp_processor_id();
> > 		/*
> > 		 * Interrupts/softirqs are hotplug-safe:
> > 		 */
> > 		if (in_interrupt())
> > 			return;
> > 		if (current->hotplug_depth++)
> > 			return;
> > 		current->hotplug_lock = &per_cpu(hotplug_lock, cpu);
> > 		mutex_lock(current->hotplug_lock);
> > 	}
> 
> That's functionally equivalent to what we have now, and it isn't 
> working too well.

hm, i thought the main reason for not using cpu_hotplug_lock() in a 
widespread manner was not its functionality but its scalability - but i 
could be wrong. The one above is scalable, and we could use it as /the/ 
method to control CPU hotplug. All the flux i remember around 
cpu_hotplug_lock() use in the fork path and in other scheduler hotpaths 
was about its scalability bottleneck, not its locking efficiency.
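
(for completeness, a sketch of the matching unlock - it assumes the same 
hypothetical ->hotplug_depth/->hotplug_lock task fields as the lock 
quoted above:)

	void cpu_hotplug_unlock(void)
	{
		/* interrupts/softirqs never took the mutex: */
		if (in_interrupt())
			return;
		/* drop one nesting level; release only the outermost one: */
		if (--current->hotplug_depth)
			return;
		/* unlock the per-CPU mutex captured at lock time, so a
		 * task that migrated meanwhile still releases the right one: */
		mutex_unlock(current->hotplug_lock);
		current->hotplug_lock = NULL;
	}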

	Ingo
