Re: [PATCH, RFC] reimplement flush_workqueue()

On 01/04, Srivatsa Vaddagiri wrote:
>
> On Mon, Dec 18, 2006 at 01:34:16AM +0300, Oleg Nesterov wrote:
> >  void fastcall flush_workqueue(struct workqueue_struct *wq)
> >  {
> > -	might_sleep();
> > -
> > +	mutex_lock(&workqueue_mutex);
> >  	if (is_single_threaded(wq)) {
> >  		/* Always use first cpu's area. */
> > -		flush_cpu_workqueue(per_cpu_ptr(wq->cpu_wq, singlethread_cpu),
> > -					-1);
> > +		flush_cpu_workqueue(per_cpu_ptr(wq->cpu_wq, singlethread_cpu));
> >  	} else {
> >  		int cpu;
> > 
> > -		mutex_lock(&workqueue_mutex);
> >  		for_each_online_cpu(cpu)
> 
> 
> Can compiler optimizations lead to cpu_online_map being cached in a register 
> while running this loop? AFAICS cpu_online_map is not declared to be
> volatile.

But it is not const either,

>            If it can be cached,

I believe this would be a compiler bug. Let's take a simpler example:

	while (!condition)
		schedule();

What if the compiler caches the value of the global 'condition'?
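
FWIW, here is a minimal userspace sketch of that argument; none of this is
kernel code and all names are invented for illustration. Because the call in
the loop body goes to a function the compiler cannot see into, it has to
assume the call may modify the global, so it must reload it on every
iteration; caching the value in a register across the call would miscompile
this pattern.

	/*
	 * 'condition' is an ordinary (non-volatile) global with external
	 * linkage; external_work() stands in for schedule() and lives in
	 * some other translation unit, so it is opaque to the compiler.
	 */
	int condition;

	void external_work(void);

	void wait_for_condition(void)
	{
		while (!condition)	/* reloaded on every pass */
			external_work();
	}

Building this with "gcc -O2 -S" and reading the generated loop should show
the load of 'condition' redone after each call; an optimizer that hoisted it
out of the loop could spin forever after the flag was set, which is exactly
the "compiler caches the value" case above.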

> ... then we have the danger of invoking
> flush_cpu_workqueue() on a dead cpu (because flush_cpu_workqueue drops
> workqueue_mutex, cpu hp events can change cpu_online_map while we are in
> flush_cpu_workqueue).
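
To spell out the window being described, here is a sketch of that loop with
the scenario annotated. The names come from the quoted patch, the comments
are mine, and it assumes flush_cpu_workqueue() really can drop
workqueue_mutex while it waits, as the quoted text says; this is only a
sketch, not the actual code.

	for_each_online_cpu(cpu) {
		flush_cpu_workqueue(per_cpu_ptr(wq->cpu_wq, cpu));
		/*
		 * If the flush above released workqueue_mutex while it
		 * slept, a CPU_DEAD event may have removed a cpu from
		 * cpu_online_map by now.  As long as the map is re-read
		 * on the next pass (no caching), that cpu is simply
		 * skipped; if the compiler had cached the map in a
		 * register, the loop could still hand the dead cpu's
		 * per-cpu workqueue to flush_cpu_workqueue().
		 */
	}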

Oleg.

